AI Risk Busters: A lifecycle approach to AI risk reduction

Start: June 17, 2024 | 2:20 pm ET / 11:20 am PT
End: June 17, 2024 | 2:30 pm ET / 11:30 am PT

What constitutes the ‘malicious use’ of AI? IST hosted a virtual webinar to celebrate the launch of a new report, "A Lifecycle Approach to AI Risk Reduction: Tackling the Risk of Malicious Use Amid Implications of Openness."

The rapid advancement and proliferation of AI technologies have brought forth myriad opportunities and challenges, leading to a flurry of regulatory activity in the United States, United Kingdom, European Union, and beyond. But what risks, precisely, are these efforts correcting for? And what mitigation strategies might be effective?

In December, the Institute for Security and Technology (IST) sought to answer the first question in its seminal report, How Does Access Impact Risk? Assessing AI Foundation Model Risk Along a Gradient of Access. In its latest report, A Lifecycle Approach to AI Risk Reduction: Tackling the Risk of Malicious Use Amid Implications of Openness, IST and its multi-stakeholder working group take on one of the six identified risk categories—malicious use—and propose mitigation strategies mapped to the appropriate phases of the AI lifecycle.

On Monday, June 17, IST Senior Associate for AI Security Policy Mariami Tkeshelashvili moderated a conversation with panelists IST AI Policy Associate and report author Louie Kangeter, Brookings Institution fellow Valerie Wirtschafter, Executive Director of the Stanford Center for AI Safety Duncan Eddy, and Osavul CMO Andrew Bevz. IST Chief Trust Officer Steve Kelly kicked off the webinar with an introduction to the AI Foundation Model Access Initiative and IST's latest report.
