Experts from across the technology ecosystem weighed in on Trust & Safety (T&S) opportunities, risks, and mitigation strategies in AI-enabled consumer products and services.
May 17, 2024 — IST last week drew a standing-room-only audience for a roundtable discussion on building trust and safety into AI-enabled consumer products and services during a special event held at the inaugural AI Expo for National Competitiveness in the nation’s capital. A dozen participants representing incumbent technology platforms, household consumer brands, venture capital, industry collaboratives, start-ups, civil society groups, and academia discussed how AI is optimizing and revolutionizing a broad range of traditional consumer-facing products and services, as well as the resulting risks, mitigation approaches, and governance questions.
Facilitated by IST Chief Trust Officer Steve Kelly, the conversation first addressed exciting and beneficial AI use cases, including home kitchen automation, customer service and help desk functions, personalized advertising and news curation, entertainment widgets, writing and content generation, and even automated skin disease detection. The roundtable also drew attention to the real-world harms that AI can inflict, such as a lack of algorithmic and data transparency leading to bias in credit, insurance underwriting, and healthcare; generation of non-consensual pornography, which can lead to extortion and other harms; financial fraud enabled by deepfake voice, image, and video content; and misinformation.
Participants then turned to available options for risk mitigation and efforts to standardize them, discussing initiatives like the U.S. AI Safety Institute being established within the National Institute of Standards and Technology (NIST) and voluntary industry standards. Next, the discussion shifted to governance of AI risk, including within individual corporations, industry alliances, the investment community, consumer protection regulators, and even international bodies.
At the conclusion of the roundtable, Kelly initiated a lightning round, asking each participant to reflect on the most important steps the T&S community should take next.
Philip Dawson, Head of AI Policy at Armilla AI, pointed to the establishment of the U.S. Artificial Intelligence Safety Institute and the U.S. Artificial Intelligence Safety Institute Consortium as opportunities to “involve stakeholders from across the spectrum” in getting into the granular details of evaluations, metrics, and assessment frameworks from a more production- and user-centric perspective. “It’s very promising,” he concluded. “I think that’s a great place to start.”
Rehan Ehsan, Senior Manager of Public Policy at Samsung Electronics, observed that some consensus has emerged around voluntary industry safety standards. Next, the community needs to “standardize, harmonize, internationally collaborate, and create consensus standards,” leveraging NIST’s convening power to mediate the process and share best practices that better inform policy decisions.
Ani Gevorkian, Director, Responsible AI Public Policy at Microsoft, called for a “focus on identifying and sharing best practices, and using those best practices to inform regulations as they unfold” across the ecosystem, including within industry and across consumer groups and government.
For Paladin Capital Group Venture Partner and Strategic Advisor Jamil Jaffer, the question becomes, “who’s going to take responsibility?” He described what’s at stake: “We’re in a battle for what it means to have trust, safety, and security in this ecosystem.” He called on everyone in the ecosystem, including companies, individuals, investors, and government, to take responsibility. “If we don’t defend [the values of a free, open society] in the way we invest, in the way our companies build, in the way our government regulates or doesn’t regulate, we’re not going to succeed.”
Google Director of T&S Partnerships and Research Angela McKay concluded her roundtable remarks with a call to draw insights and domain knowledge from other spaces—including the insurance market, efforts to create standards, and conceptions of safety and security as a public good—to inform trust and safety in the AI domain. “None of these will apply perfectly, but it is important. The technology is moving really fast, and we don’t have time to relearn,” she said.
IST CEO Philip Reiner drew comparisons to the cybersecurity space in his concluding remarks, noting that much of the work to secure the digital ecosystem has been “playing clean up, because what was originally built in the Internet was so fundamentally flawed.” In the case of AI, time is of the essence: “We don’t have the time with AI to iterate incessantly for the next 10 years. We have to do this now.”
Var Shankar, Executive Director of the Responsible AI Institute, suggested that with a first set of AI laws and regulations in place in the EU and United States, it may be a good time to take stock of whether existing approaches to AI regulation appropriately address risks, roles, and responsibilities within the ecosystem. Often, he said, “we focus on individual companies and individual use cases. We don’t think a lot about how those use cases will interact with each other. I would like to know that there’s somebody in the ecosystem that’s thinking about that.” Further, he added that while AI laws often assume good faith, AI use may not always occur in good faith, whether by individuals or governments. He proposed that the community consider “whether, at a very high level, we’re working on the right problems, given the potential risks.”
Omidyar Network Director for Responsible Technology Govind Shivkumar set out two priorities: first, advocate for funding for the NIST AI Safety Institute; second, establish a trust and safety innovation unit. “Fund it with $100 million, federal funding, and ask the private sector to fund $100 million,” he suggested.
Belle Torek, Senior Fellow for AI & Tech Advocacy at the Human Rights Campaign, emphasized the importance of sustainability, noting that the T&S professionals at the helm of efforts to guard against AI risks are being hit by layoffs and budget cuts. “We need to figure out a sustainable business mechanism by which we can keep trust and safety alive…and make it more communicative and horizontal such that there is an interoperable set of skills that people can take from one organization to another as we seek to streamline the process,” she said.
Responsible Innovation Labs Senior Advisor Lauren Wagner reflected on her experience in T&S at a social media company, noting that she was “struck by the fact that there were individuals making content policy decisions for billions of people that were incredibly impactful, and there was not much transparency into how these decisions were made and why.” She called for a counterweight to this concentration of power, explaining, “part of that is developing a vibrant innovation ecosystem and ensuring that folks with technical sophistication are involved in these wider conversations.”
Stanford Program on Governance of Emerging Technologies Fellow Dave Willner said that the community should focus on funding emerging capabilities to “do safety better.” He explained, “it is the only way we will end up with tooling that is fast enough to deal with the pace of change that these systems are unleashing.” He also called attention to the Trust & Safety Professionals Association, which is working to “create a professional body that is horizontal across companies that allows people to take their knowledge with them.”
Thank you to all roundtable participants for joining the discussion, as well as to the Special Competitive Studies Project (SCSP) for hosting the AI Expo.