AI Foundation Model Access Initiative 

Assessing the risks and opportunities of increased access to AI foundation models, and setting the stage for actionable, policy-oriented solutions.

In the past several years, a number of leading labs have released cutting-edge AI foundation models. While some models remain highly restricted, others are fully open, giving rise to a spirited debate over the relative risks and opportunities of these distinct business approaches. This discourse tends to borrow from historical debates around the open source software movement, now widely understood to be a critical source of technological innovation and a cornerstone of digital development. But can we draw an equivalence between access to AI models and the open source software approach?

As part of its mission to address complex security issues at the forefront of technological innovation, the Institute for Security and Technology (IST) is leading an effort to study how increased access to cutting-edge AI foundation models, across a gradient from fully closed to fully open, drives risk and enables opportunity.

“In the most general terms, the resulting risk matrix indicates that as access to AI foundation models increases, there is new potential for harm… The risk of malicious use, compliance failure, taking the human out of the loop, and capability overhang all increase with increased access. The risk of fueling a race to the bottom increases when we assume a ‘winner takes all’ dynamic. Only the risk of reinforcing bias fluctuates as access increases.”
Zoë Brammer, How Does Access Impact Risk? December 2023

Latest

Report: How Does Access Impact Risk?
Assessing AI Foundation Model Risk Along a Gradient of Access | December 2023

Advanced AI systems are proliferating at an astonishing rate, with varying levels of access to their model components. To date, there is no clear method for understanding the risks that can arise as access increases. Our latest report addresses this gap.

Report: A Lifecycle Approach to AI Risk Reduction
Tackling the Risk of Malicious Use Amid Implications of Openness | June 2024

Building on IST’s December 2023 report, How Does Access Impact Risk? Assessing AI Foundation Model Risk Along a Gradient of Access, this report provides policymakers and regulators with a robust framework for addressing these complex risks.

Report: Navigating AI Compliance, Part 1
Tracing Failure Patterns in History

The first of a two-part series, this report examines 11 case studies from AI-adjacent industries to identify three distinct failure categories: institutional, procedural, and performance.

From the NatSpecs Blog

A Lifecycle Approach to AI Risk Reduction: Tackling the Risk of Malicious Use Amid Implications of Openness
The rapid advancement and proliferation of artificial intelligence (AI) technologies have brought forth myriad opportunities and challenges, necessitating the development of comprehensive risk mitigation strategies. Building on IST’s December 2023 report, How Does Access Impact Risk? Assessing AI Foundation Model Risk Along a Gradient of Access (which evaluated six categories of AI risk across seven levels of model access), this report provides policymakers and regulators with a robust framework for addressing these complex risks.

June 2024 | Report

IST, industry, and civil society contributors release report assessing risks of increased access to AI foundation models
Contributors and IST staff who led the working group reflect on the findings of the report and preview what’s to come.

December 2023 | Blog

IST Statement on President Biden’s Artificial Intelligence Executive Order
IST commends the Biden-Harris Administration for this week’s release of its Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. IST looks forward to playing a catalyzing role in helping to identify gaps, shape solutions, and inform the intense work that still needs to be done.

November 2023 | Statement

IST announces Steve Kelly as its first Chief Trust Officer
The Institute for Security and Technology announced today the addition of Steve Kelly as its first Chief Trust Officer. At IST, Steve will establish a new effort to advance the trust, safety, and security of artificial intelligence and help lead other aspects of the organization’s work.

August 2023 | Statement

Catalyzing Security in AI Governance
Conversations around AI and governance align with IST’s mission to harness the opportunities enabled by emerging technologies while mitigating their attendant risks. We believe this work will support digital sustainability, in which all stakeholders, public and private, recognize their role in innovating and building a more secure and trustworthy digitally enabled future.

June 2023 | NatSpecs Blog