
Introducing the Trust & Safety Advisory Group

By Nile Johnson on March 14, 2024

IST’s Applied Trust & Safety Initiative is a unique, cross-collaborative commitment to tackling some of the industry’s most pressing issues. Both nimble and nascent, we rely on partnerships across the T&S field to drive our work forward. As we seek to make thoughtful, well-informed decisions that will position us to leave a lasting impact in this space, we’re excited to introduce IST’s Trust & Safety Advisory Group (TSAG). TSAG comprises thought leaders and senior practitioners from across the trust and safety, technology, government, and nonprofit spaces who have deep expertise in the field, and its members inform the strategy and substance of the initiative’s work going forward. IST strategically sought out this breadth of experience to support the range of issues we seek to tackle.

Below we provide information about our public-facing TSAG members and the issues they’d like to address head-on. Read on to learn more about this amazing group.

Jonathan Bellack

Jonathan is Senior Director of the Applied Social Media Lab within the Berkman Klein Center. He is building a team of technologists to invent new social media approaches that center the public interest. If this sounds like your kind of mission, please reach out!

Jonathan is a veteran Internet technology product manager. His 30 years of experience started with dot-com era startups in New York City, Silicon Valley, and the UK. In 2004 he joined the digital advertising technology firm DoubleClick, where he helped lead a turnaround culminating in its $3.1 billion acquisition by Google in 2008. He spent another decade leading product management for technology platforms that helped millions of small and large media companies worldwide build and grow their digital advertising businesses.

At the end of 2018, Jonathan changed his focus to making people safer online. He was the first-ever product management director at Jigsaw, a Google unit building experimental technology to combat digital harms including organized harassment, repressive censorship, and state-sponsored disinformation. He then served as senior director of product management for Counter-Abuse Technology and Identity, where he built a 50-person team that helped protect Google and YouTube users from malicious actors, untrustworthy content, and compromised accounts.

Jonathan is a husband and father to two sons, with a boundless enthusiasm for grilling and cooking, his new labradoodle, and all flavors of nerdery from comics to video games to Dungeons & Dragons and beyond. He serves on the boards of Montclair Local Nonprofit News and New Jersey 11th for Change. Jonathan received a B.A. from Yale University and an M.B.A. from the NYU Stern School of Business. 


Olga Belogolova

Olga Belogolova is the Director of the Emerging Technologies Initiative at the Johns Hopkins School of Advanced International Studies (SAIS). She is also a professor at the Alperovitch Institute for Cybersecurity Studies at SAIS, where she teaches a course on disinformation and influence in the digital age.

At Facebook/Meta, she led policy for countering influence operations (IO), overseeing the development and execution of policies on coordinated inauthentic behavior, state media capture, and hack-and-leaks within the Trust and Safety team. Prior to that, she led threat intelligence work on Russia and Eastern Europe at Facebook, identifying, tracking, and disrupting coordinated IO campaigns, in particular the Internet Research Agency investigations between 2017 and 2019.

Olga previously worked as a journalist, and her work has appeared in The Atlantic, National Journal, Inside Defense, and The Globe and Mail, among others. She is a fellow with the Truman National Security Project, serves on the review board for CYBERWARCON, and is on the board of directors for the Digital Democracy Institute of the Americas (DDIA).

“I am focused on bridging the gap between technology, trust and safety, and policy-making. As a professor, I’m passionate about educating the next generation of policy students on emerging technologies and trust and safety practices, and likewise ensuring technologists understand the policy implications of their work. By integrating these perspectives, we can begin to narrow the divide and foster more effective collaboration, driving more relevant policymaking.”

Jeff Dunn

Jeff Dunn is a Trust & Safety leader with more than 10 years of experience protecting users, brands, and platforms. In his current role as Vice President of Trust & Safety at Hinge, he is responsible for all aspects of safety, compliance, customer support, policy, and more. He co-founded the Trust & Safety Hackathon, which is dedicated to dreaming up and building solutions that help platforms work together to protect users.

Previously, Jeff worked at Google for over 7 years, where he held numerous leadership positions in Trust & Safety. His work there spanned protecting elections, T&S operations, protecting children, and educating users, press, partners, and elected officials, among other areas.

“In 2024, Trust & Safety teams are being asked to do a lot more with a lot less. Each team and its respective company must contend with the spread of mis- and disinformation, elections, AI-generated harm, child safety, pig butchering scams, and dozens of other issues. Meanwhile, these teams are facing layoffs and budget cutbacks that leave companies and users vulnerable. That said, I am cautiously optimistic this confluence of events will lead to platforms working better together. We have no other choice.”

Rachel Gillum

Dr. Rachel Gillum is Vice President of Ethical & Humane Use of Technology at Salesforce, leading the team responsible for developing and implementing AI and responsible product use policies across Salesforce’s suite of offerings. Rachel was appointed as a Commissioner on the bipartisan AI Commission on Competition, Inclusion, and Innovation by the U.S. Chamber of Commerce, tasked with developing AI policy solutions for the United States in collaboration with leaders from government, industry, and civil society. Rachel is an affiliated scholar at Stanford University and author of several academic works. She is a nonresident fellow of the Atlantic Council’s Digital Forensic Research Lab and a term member with the Council on Foreign Relations.

Rachel previously worked alongside former Secretary of State Condoleezza Rice, former National Security Advisor Stephen Hadley, and former Secretary of Defense Robert Gates at the strategic consulting firm RiceHadleyGatesManuel LLC, where she led the firm’s portfolio of technology and venture capital companies as Senior Director of the Silicon Valley Office. Her global perspective is shaped by her tenure in intelligence and security roles within the U.S. government. Rachel received her PhD and MA degrees from Stanford University.

“After observing and addressing the initial risks of generative AI over the past several months, I anticipate that, with continued rapid innovation, we’ll continue to discover new challenges that will require novel mitigations and guardrails. I’m eager to dive into this work while staying grounded in ensuring these tools are developed and deployed in a rights-respecting manner with uncompromising standards of trust and safety.”

Inbal Goldberger

Inbal Goldberger serves as the VP of Trust & Safety at ActiveFence, where she engages with the wider Trust & Safety community and spearheads cross-industry initiatives. A former military intelligence officer specializing in counter-terrorism, Inbal worked in cybersecurity before joining Google Trust & Safety. During her tenure at Google, she focused on preventing user harm on Google Search, managing global, cross-functional teams spanning intelligence, enforcement, and incident management.

Inbal is a member of the World Economic Forum’s Global Coalition for Digital Safety, holds a board position at Marketplace Risk, serves as a core leader of the Atlantic Council’s Task Force for a Trustworthy Future Web, and participates as a steering committee member of the Trust & Safety Forum.

Inbal led the establishment of the Trust & Safety Academy in collaboration with New York University.

Inbal’s academic background includes a Bachelor of Arts degree in Linguistics and an MBA from Tel Aviv University.

“As we enter 2024, Trust & Safety faces new challenges and opportunities with over 60 global elections and the rise of Generative AI. These technologies present risks, like spreading misinformation, but also offer solutions for enhancing T&S efforts. A key focus is GenAI’s impact on intellectual property, questioning ‘fair use’ and ‘copyright infringement’ in the AI era. Online regulation such as the EU’s Digital Services Act and the UK’s Online Safety Act will test platforms’ compliance. Ensuring the protection and care of young users will continue to be a top priority for Trust and Safety.”

Yoel Roth

Yoel Roth is a trust and safety practitioner and researcher, and most recently joined Match Group as the Vice President for Trust & Safety. He is concurrently a Knight Visiting Scholar at the University of Pennsylvania, a Technology Policy Fellow at UC Berkeley, and a Non-Resident Scholar at the Carnegie Endowment for International Peace. His research, teaching, and writing focus on trustworthy governance approaches for social media, AI, and other emerging technologies.

Previously, he was the Head of Trust & Safety at Twitter. For more than 7 years, he helped build and lead the teams responsible for Twitter’s content moderation, integrity, and election security efforts.

Before joining Twitter, Yoel received his PhD from the Annenberg School for Communication at the University of Pennsylvania. His research examined the technical, policy, business, and cultural dynamics of social networking and online dating at the dawn of the “App Store” age.

“For the first time in nearly 15 years, the landscape of social media is shifting: We’re seeing new platforms emerge, including decentralized and federated services, as alternatives to dominant sites. But, too often, these services have all the challenges and risks of large platforms, with few of their resources and relationships. Addressing the trust and safety needs of emerging platforms is critical, both to keep people safe and to promote a diverse and competitive open internet, and I’m excited to work with IST towards scalable solutions for the whole T&S ecosystem.”

Derek Slater

Derek Slater is a tech policy strategist focused on media, communications, and information policy. He is a Founding Partner at Proteus Strategies, a tech policy consulting firm. Previously, he helped build Google’s public policy team from 2007 to 2022, serving as the Global Director of Information Policy during his last three years there. He led a global team of subject matter experts on access to information, content regulation, and online safety, and testified before legislators in the US, UK, and elsewhere around the globe.

“One topic I’ll be focused on is the responsible design of generative AI, particularly open models.”

David Sullivan

David Sullivan is the founding Executive Director of the Digital Trust & Safety Partnership, a unique initiative developing best practices for trust and safety in the use of digital services. An experienced human rights and technology policy practitioner, he brings together unlikely allies to solve global challenges related to rights, security, and democracy in the digital age.

Most recently, David served as Program Director at the Global Network Initiative (GNI), a collaboration between leading technology companies and human rights groups to protect and advance freedom of expression and privacy rights. During a decade at GNI, he played a key role in growing and globalizing the initiative’s membership, implementing its unique assessment process, and advocating for rights-based approaches on issues such as terrorist and extremist content and Internet shutdowns.

David previously led the research and policy team at the Enough Project at the Center for American Progress, where he oversaw field research and policy analysis on mass atrocity prevention in Africa. Earlier in his career he worked for international NGOs providing election assistance to Pakistan and humanitarian assistance in West and Central Africa.

He has published extensively on technology, security, and human rights, with commentary appearing in Slate, Lawfare, and Just Security.

He is a Humanity in Action Senior Fellow and serves on the Advisory Board of the Silicon Flatirons research center at the University of Colorado Law School. He holds a B.A. from Amherst College and an M.A. from the Johns Hopkins University Paul H. Nitze School of Advanced International Studies.

“2024 can be a watershed year for encouraging common approaches to complex trust and safety challenges that defy one-size-fits-all solutions. Industry, government, and civil society stakeholders will all benefit from working toward international standards that align trust and safety teams around the practices they use to manage diverse digital safety risks.”

Dave Willner

Dave Willner started his career at Facebook in 2008, helping users reset their passwords. He went on to join the company’s original team of moderators, write Facebook’s first systematic content policies, and build the team that maintains those rules to this day. After leaving Facebook in 2013, he consulted for several startups before joining Airbnb in 2015 to build the Community Policy team. While there, he also took on responsibility for Quality and Training for the Trust team. After leaving Airbnb in 2021, he began working with OpenAI, first as a consultant and then as the company’s first Head of Trust and Safety. He is now a Fellow at the Stanford Program on the Governance of Emerging Technologies and consults for startups.

“I’m very optimistic about the potential of using large language models to do content moderation more quickly and accurately while reducing negative impacts on human moderators. I think we’re on the cusp of the most significant change in moderation practice in a decade.”