Applied Trust and Safety Initiative

Addressing the Human Element of Technological Change

By Steve Kelly and Nile Johnson on December 21, 2023

A Russia-based troll farm created and used fictitious personas on U.S. social media platforms to sow discord in the American political system during the 2016 Presidential election, according to a federal grand jury indictment. Malicious cyber actors were recently observed co-opting cloud services to enable ransomware attacks and state-sponsored hacking operations, constituting yet another example of Infrastructure-as-a-Service platform abuse called out in a 2021 Executive Order. Four individuals were recently indicted for their roles in laundering more than $80 million stolen through a “pig butchering” scheme, which commonly targets victims through dating services, social media, or unsolicited messages or calls to pitch fraudulent investment opportunities.

These stories share a common theme: each illustrates how technology can be abused, to the detriment of service providers, their customers, and society at large.

The Role of Trust & Safety Programs

As technology continues to advance in its capabilities and uses–and plays an increasing role in our lives–we must remain vigilant to the human risks that accompany it. While social media, ubiquitous mobile connectivity, and virtualization have already made an indelible mark, their benefits and downsides will be significantly magnified once artificial intelligence (AI) capabilities are fully incorporated. These are precisely the types of challenges for which technology companies began establishing trust and safety (hereafter, “T&S”) programs in the late 1990s. As this field continues to mature, organizations offering technology products and services must anticipate how those offerings might be misappropriated or abused, or serve as a vector to target their users. Anticipating and actively managing these risks can build trust, safety, and confidence in a product and its provider, while simultaneously addressing broader societal risks.

IST’s Applied Trust & Safety Initiative

The Institute for Security and Technology (IST) sits at the intersection of rapid technological innovation and its implications for humanity, while serving as a bridge between technologists and policymakers. Since our founding in 2015, IST has addressed challenging topics such as the use of AI in nuclear command and control, mesh-network crisis communications, U.S.-China technology competition, combating ransomware and cryptocurrency money laundering, open source software security, and the implications of technology on human cognition. Building on this track record, and with the generous support of the Patrick J. McGovern Foundation, IST this year launched its Applied Trust & Safety Initiative, a long-term effort to ensure technology products and services are safe to use and technology solutions such as AI are fully leveraged to address these challenges at scale. Elements of this year’s foundational work included:

  • Investing in the future. IST hired us this year as Chief Trust Officer and as Senior Director for Applied Trust & Safety, respectively, to build and grow the organization’s trust practice. We are supported by the recent addition of industry veteran Eric Davis as Senior Vice President for Special Projects.
  • Engaging the industry. IST participated in several trust and safety conferences this year, including the Trust & Safety Research Conference at Stanford University and the Marketplace Risk Global Summit in London, and completed a broad listening tour to better understand the challenges confronted by trust and safety practitioners and to inform our future work.
  • Charting the course. IST sought and gained participation from a number of trust and safety industry luminaries to serve on a standing advisory group to inform and steer our work going forward. Members of the Trust and Safety Advisory Group (TSAG) will be named in a forthcoming announcement.

Assessing the Landscape

Participants in the listening tour described above included practitioners, researchers, and academics from large technology companies, startups, civil society organizations, nonprofits, and universities. From these discussions, IST identified six key themes:

Adapting approaches in light of AI

AI is a game-changer for both consumer-facing technology platforms and traditional services such as banking and education. AI-generated content on social media and AI-supported business processes such as loan approvals will complicate practices such as fraud detection, content moderation, privacy protection, and non-discrimination. On the other hand, technology providers can incorporate AI into a range of useful services and functions, such as enhanced search, customer service, troubleshooting, document workflow, and psychologically taxing content moderation (e.g., reviewing obscene and abusive imagery). As with earlier technological shifts, policies and practices must be updated to account for AI’s entrée into these areas.

Addressing the crisis of legitimacy

Economic headwinds have directly impacted how T&S functions are prioritized within organizations, with 2023 seeing layoffs and reorganizations across the technology sector. At the same time, political scrutiny of content moderation practices has fueled legitimacy concerns. Amid this climate, it is all the more important that organizations’ T&S functions be sufficiently resourced and that practices be rigorous, standardized, and transparent; respectful of diverse viewpoints; and independent of the ebb and flow of political landscapes.

Closing the gap between T&S practice and public policymaking

The technology sector has attracted a number of former government officials to lead T&S functions, but the converse has been rare. Healthy collaboration and information exchange between the public and private sectors within this practice area would be a welcome addition, as public policymaking would be better informed by those facing common T&S challenges on the ground, while technology providers would be more attuned to the public policy relevance of their work. In addition to making such strategic hires, the community might consider other innovative approaches, such as fellowships, details, and job swapping.

Adopting a global lens for T&S issues

While many of the world’s most widely used technology service platforms are based in the United States–and their approaches are therefore rooted in Western law, policy, language, and customs–their customer bases are truly global. For example, AI-enabled services built on large language models (LLMs) that were not trained on a particular language, or whose training data are not sufficiently representative of a culture, may not function as desired for that population. Technology providers must account for these factors across their business functions, from product design to T&S.

Tackling cross-platform spreading

A range of threat activities and violative behaviors can impact multiple technology service platforms simultaneously, or can spread pervasively or move from one platform to another more quickly than they can be detected and addressed. These include the fraudulent use of services, the spread of incendiary mis- and disinformation (e.g., on the eve of an election), and the trafficking of illicit content. Providers and the broader ecosystem would benefit from standardized and collaborative approaches to addressing these shared challenges.

Promulgating frameworks and resources

Established technology sector organizations inevitably duplicate some effort by independently solving very similar T&S challenges; meanwhile, these issues are often an afterthought for new market entrants lacking T&S programs. The technology community, from incumbents to startups, would be better positioned to protect users and guard against broader societal risks through frameworks and resources built around common use cases in a structured, supported, and sustainable manner. Such standardization and transparency would also build confidence among customers and other stakeholders.

Looking Forward

As technology products and services continue to evolve and increasingly become an indispensable part of our lives, so too must T&S practices grow and mature to manage the accompanying risks. The themes described above provide fertile ground for us to support the T&S community with new projects and initiatives in 2024. We are actively building out this effort and welcome you to join us on the journey.