AI Risk Reduction Initiative

Assessing the risks and opportunities of increased access to AI foundation models and setting the stage for actionable, policy-oriented solutions.

As highly advanced artificial intelligence (AI) systems become increasingly integrated into critical aspects of society, from healthcare and finance to transportation and national security, policymakers and the broader public are paying closer attention to the potential risks and opportunities associated with their development and deployment. With the support of the Patrick J. McGovern Foundation (PJMF), the Institute for Security and Technology (IST) engages with a diverse range of stakeholders across the AI ecosystem to better understand the emerging risks of AI foundation models and to develop technical and policy-oriented risk reduction strategies, driving forward responsible innovation.

Navigating AI Compliance, Part 2: Risk Mitigation Strategies for Safeguarding Against Future Failures
In this second report in a two-part series, authors Mariami Tkeshelashvili and Tiffany Saade present 39 risk mitigation strategies, co-created with a working group of experts, for avoiding institutional, procedural, and performance failures of AI systems. These strategies aim to enhance user trust in AI systems and maximize product utilization. AI builders and users, including AI labs, enterprises deploying AI systems, and state and local governments, can implement a selection of the 22 technical and 17 policy-oriented risk mitigation strategies presented in this report according to their needs and risk thresholds.
March 2025 | Report

Navigating AI Compliance, Part 1: Tracing Failure Patterns in History
The first of a two-part series, this report examines 11 case studies from AI-adjacent industries to identify three distinct failure categories: institutional, procedural, and performance. By studying failures across sectors, authors Mariami Tkeshelashvili and Tiffany Saade uncover critical lessons about risk assessment, safety protocols, and oversight mechanisms that can guide AI innovators in this era of rapid development. “To mitigate potential compliance failure risks in the AI ecosystem, the AI industry must proactively embrace rigorous testing, transparent reporting of capabilities, strong data governance, and collaboration with regulators to develop effective oversight mechanisms,” the authors write.
December 2024 | Report

A Lifecycle Approach to AI Risk Reduction: Tackling the Risk of Malicious Use Amid Implications of Openness
Building on IST’s December 2023 report, which evaluated six categories of AI risk across seven levels of model access, this report provides policymakers and regulators with a robust framework for addressing these complex risks. The report establishes five guiding principles that serve as the foundation for the proposed risk mitigation strategies: balancing innovation and risk aversion, fostering shared responsibility among stakeholders, maintaining a commitment to accuracy, developing practicable regulation, and creating adaptable and continuous oversight.
June 2024 | Report

How Does Access Impact Risk? Assessing AI Foundation Model Risk Along a Gradient of Access
Advanced AI systems are proliferating at an astonishing rate, with varying levels of access to their model components. To date, there has been no clear method for understanding the risks that can arise as access increases. This report addresses that gap, developing a matrix that maps categories of risk against a gradient of access to AI foundation models.
December 2023 | Report

From the NatSpecs Blog

Managing Misuse Risk for Dual-Use Foundation Models: IST Submits Comments to a NIST Request for Information
Drawing on our multi-year research effort on AI risk reduction, IST submitted a response to NIST’s Request for Comments on its draft voluntary guidelines for improving the safety, security, and trustworthiness of dual-use foundation models. IST commended NIST’s emphasis on monitoring for and responding to misuse, recommended adding an element on monitoring and refining counter-misuse practices, and offered insight into how adhering to risk management best practices can generate a return on investment.
March 2025 | Blog

Patrick J. McGovern Foundation Renews Commitment to Supporting IST’s AI Risk Reduction Efforts
Over the last two years, with the support of the Patrick J. McGovern Foundation, IST has worked to assess the risks and opportunities surrounding the proliferation of AI foundation models. IST is excited to announce renewed support from PJMF to further advance this vital work.
March 2025 | Blog

Decrypting Iran’s AI-Enhanced Operations in Cyberspace
Iran has long used unconventional tactics and weapons to wage asymmetric warfare. Now, amid recent uprisings, Iran has begun to incorporate AI into its domestic surveillance efforts. What does this mean for Tehran’s future cyberspace and information operations? Mariami Tkeshelashvili and Tiffany Saade explore Iran’s use of AI in information operations and domestic surveillance and make predictions about what might come next.
September 2024 | Blog

IST, Industry, and Civil Society Contributors Release Report Assessing Risks of Increased Access to AI Foundation Models
Contributors and IST staff who led the working group reflect on the findings of the report and preview what’s to come.
December 2023 | Blog

IST Statement on President Biden’s Artificial Intelligence Executive Order
IST commends the Biden-Harris Administration for releasing its Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence this week. IST looks forward to playing a catalyzing role in helping identify gaps, shape solutions, and inform the intense work that still needs to be done.
November 2023 | Statement

IST Announces Steve Kelly as First Chief Trust Officer
The Institute for Security and Technology announced today the addition of Steve Kelly as its first Chief Trust Officer. At IST, Steve will establish a new effort to advance the trust, safety, and security of artificial intelligence and help lead other aspects of the organization’s work.
August 2023 | Statement

Catalyzing Security in AI Governance
Conversations around AI and governance align with IST’s mission to harness opportunities enabled by emerging technologies while also mitigating their attendant risks. We believe this work will support digital sustainability, in which all stakeholders, public and private, recognize their role in innovating and building a more secure and trustworthy digitally enabled future.
June 2023 | Blog