AI Risk Reduction Initiative 

Assessing the risks and opportunities of AI foundation models and developing technical and policy-oriented risk reduction strategies

As highly advanced artificial intelligence (AI) systems become increasingly integrated into critical aspects of society—from healthcare and finance to transportation and national security—policymakers and the broader public are paying closer attention to the potential risks and opportunities associated with their development and deployment. With the support of the Patrick J. McGovern Foundation, the Institute for Security and Technology (IST) engages with a diverse range of stakeholders across the AI ecosystem to better understand the emerging risks of AI foundation models and to develop technical and policy-oriented risk reduction strategies, driving forward responsible innovation.

“Ambiguous AI safety definitions and the rapid pace of development challenge efforts to govern AI, and potentially even its adoption across regulated industries. Meanwhile, problems with interpretability hinder the development of compliance mechanisms, and AI agents blur the lines of liability in an increasingly automated world. As organizations face risks ranging from minor infractions to catastrophic failures that could ripple across sectors, the stakes for effective oversight grow higher. Without proper safeguards, we risk eroding public trust in AI and entrenching industry practices that favor speed over safety—ultimately affecting innovation and society far beyond the AI sector itself.”


AI Risk Reduction Initiative Team

Philip Reiner

Chief Executive Officer

Steven M. Kelly

Chief Trust Officer

Mariami Tkeshelashvili

Deputy Director for Artificial Intelligence Security Policy

Ritika Verma

Senior Analyst for Artificial Intelligence Security Policy