Applied Trust & Safety Initiative

Addressing the human element of technological change

As technology continues to advance in its capabilities and uses, playing an ever greater role in our lives, we must remain vigilant to the human risks that accompany it. As the field of Trust & Safety continues to mature, organizations offering technology products and services must anticipate how those offerings might be misappropriated, abused, or used as a vector to target their users. Anticipating and actively managing these risks can build trust, safety, and confidence in a product and its provider, while simultaneously addressing risks with societal implications.

The Institute for Security and Technology (IST) sits at the intersection of rapid technological innovation and its implications for humanity, positioning it to respond to this challenge. In 2023, IST launched its Applied Trust & Safety Initiative, a long-term effort to ensure that technology products and services are safe to use and that capabilities such as AI are fully leveraged to address these challenges at scale.

“As technology products and services continue to evolve and increasingly become an indispensable part of our lives, so too must T&S practices grow and mature to manage the risks.”
Steve Kelly and Nile Johnson, Addressing the Human Element of Technological Change

Featured Content

2024 Elections and AI Case Studies: Beware the Six-Fingered Man
Just as the character Inigo Montoya in The Princess Bride searched for the six-fingered man to avenge his father, deepfake images were once easily spotted by a telltale flaw, often an extra finger. But as over half of the world’s population prepares to head to the polls in 2024, AI deepfake images are becoming harder to identify. At TrustCon on July 24, IST Senior VP for Special Projects Eric Davis moderates a panel discussing AI’s impact on the 2024 elections so far with panelists Diane Chang, Alexis Crews, and Swapneel Mehta. In our latest blog, Eric and his fellow panelists preview their talk.
July 2024 | NatSpecs Blog

IST Hosts Trust & Safety Roundtable at AI Expo
IST on May 6 drew a standing-room-only audience for a roundtable discussion on building trust and safety into AI-enabled consumer products and services during a special event held at the inaugural AI Expo for National Competitiveness in the nation’s capital.
May 2024 | NatSpecs Blog

Q&A: Hannah Ajakaiye on manipulated media in the 2023 Nigerian presidential elections, generative AI, and possible interventions
In early 2023, voters in Nigeria’s presidential election were inundated with election disinformation. AI-generated deepfakes, as well as paid posts falsely linking candidates to militant or separatist groups, filled social media platforms. IST’s Vice President for Special Projects Eric Davis interviewed Hannah Ajakaiye, a journalist who spearheaded local efforts to fight the proliferation of misinformation through FactsMatterNG, a fact-checking initiative she founded to restore information integrity on digital platforms.
March 2024 | NatSpecs Blog

Introducing the Trust and Safety Advisory Group
Composed of key luminaries from across the trust and safety, technology, government, and nonprofit spaces, the Trust and Safety Advisory Group will leverage their breadth of experience to help inform the strategy and substance of the Initiative’s work.
March 2024 | Announcement

IST launches Generative Identity Initiative with support of Omidyar Network
With the generous support of Omidyar Network, the Institute for Security and Technology (IST) announced the launch of the Generative Identity Initiative, a new effort to address the complex questions around generative AI’s impact on social identities, norms, and belonging. 
January 2024 | Announcement

Addressing the Human Element of Technological Change
In 2023, IST launched its Applied Trust & Safety Initiative, a long-term effort to ensure that technology products and services are safe to use and that technology solutions such as AI are fully leveraged to address these challenges at scale. Our efforts will primarily serve the T&S practitioner community in carrying out their challenging roles, but will also provide resources to innovators, guidance to users, and recommendations to policymakers.
December 2023 | Blog

Nile Johnson Joins IST to Lead Applied Trust and Safety Practice
The Institute for Security and Technology announced in September 2023 the addition of Nile Johnson as the Senior Director for Applied Trust and Safety. Nile will lead IST’s efforts through a combination of strategy, stakeholder engagement, and execution. 
September 2023 | Statement

IST announces Steve Kelly as its first Chief Trust Officer
The Institute for Security and Technology announced in August 2023 the addition of Steve Kelly as its first Chief Trust Officer. At IST, Steve will establish a new effort to advance the trust, safety, and security of artificial intelligence and help lead other aspects of the organization’s work.
August 2023 | Statement

IST advances Applied Trust and Safety work in partnership with the Patrick J. McGovern Foundation
As IST scaled up its work in the field of Applied Trust and Safety, we announced in December 2022 our partnership with the Patrick J. McGovern Foundation (PJMF), a global, 21st century philanthropy focused on bridging the frontiers of artificial intelligence, data science, and social impact.
December 2022 | Announcement

Applied Trust & Safety Projects

Trust & Safety in Cloud Services

In early 2024, U.S. regulators proposed a rule that would require Infrastructure-as-a-Service (IaaS) providers to conduct increased due diligence on their foreign customers in order to prevent and address abuse. As an alternative to its know-your-customer requirement, the rule would invite proposals for other countermeasures, including a “consortium” approach among providers and potentially relevant government agencies. To help advance the development of creative alternatives, the Institute for Security and Technology (IST), in partnership with the Cyber Threat Alliance (CTA), is studying this topic with the aim of recommending how a consortium could be shaped to best accomplish the government’s overall objective of deterring abuse.