Artificial Intelligence

IST, industry and civil society contributors release report assessing risks of increased access to AI foundation models 

December 13, 2023 – Today, the Institute for Security and Technology (IST), joined by contributors from academia, industry, and civil society, published the report How Does Access Impact Risk? Assessing AI Foundation Model Risk Along a Gradient of Access.

In recent months, a number of leading AI labs have released advanced artificial intelligence systems. While some models remain highly restricted, limiting who can access the model and its components, others provide fully open access to their model weights and architecture. The potential benefits of these more open postures are generally well understood. This effort therefore turned its attention to the risks, seeking to answer the question: how does access to foundation models and their components impact the risk they pose to individuals, groups, and society? 

There is currently no clear mechanism for understanding the risks that may arise as models are opened to greater levels of access, not least because recent industry developments have made clear that the “open” versus “closed” framing does not sufficiently capture the full spectrum of access in the AI ecosystem. To fill this gap, IST’s latest report examines the risks and opportunities created by increased access to these foundation models. Its publication was made possible through the generous support of the Patrick J. McGovern Foundation. 

How Does Access Impact Risk? Assessing AI Foundation Model Risk Along a Gradient of Access is the result of six months of IST staff research; expert interviews; a survey sent to representatives of leading AI labs, think tanks, and academic institutions; and a series of closed-door working group meetings with AI developers, researchers, and practitioners. It draws on contributions and peer review from Anthony Aguirre, Markus Anderljung, Chloe Autio, Ramsay Brown, Chris Byrd, Gaia Dempsey, David Evan Harris, Vaishnavi J., Landon Klein, Sébastien Krier, Jeffrey Ladish, Nathan Lambert, Aviv Ovadya, Elizabeth Seger, and Deger Turan, as well as many more contributors and members of the working group who could not be openly named.  

“Understanding the opportunities and risks posed by cutting-edge artificial intelligence is a complicated task, not least because the technology is evolving so rapidly. I am immensely grateful to our working group for helping us sort through the noise to identify specific ways in which cutting-edge AI might enable benefits and/or cause harm,” said author Zoë Brammer. “In our early working group meetings, we came to understand that the colloquial framing of this problem set (risk posed by open vs. closed models) is not representative of the technical reality in the ecosystem. Our research, combined with critical insights from working group members, enabled us to build a nuanced understanding of how access to models is facilitated technically, and how that gradient of access interacts with a range of risks.”

A Sneak Peek: What’s in the report? 

In its analysis, the report outlines six distinct categories of risk, identified by the working group, posed by the development and proliferation of AI technologies. Understanding these risks and how they will evolve, working group member Jeffrey Ladish explained, can be a challenge. “We still know so little about both the capabilities of the current models and the trajectory of those capability improvements,” he warned. “So we really don’t know what models a year from now will be capable of, and we don’t know how dangerous they will be.” 

The report then identifies a gradient of access to AI foundation models, ranging from fully closed through paper publication, query API access, modular API access, gated downloadable access, and non-gated downloadable access to fully open access. Report contributor and IST adjunct advisor Chloe Autio noted the importance of the report’s approach to gradients of access. “The matrix provides a necessary level of nuance for policymakers worldwide who are grappling with model access,” she said. “As Zoë identifies in the report, the ‘open’ vs. ‘closed’ framing of today’s model access debate oversimplifies a more complicated environment. Defining and examining risks in the context of access provides a more concrete and specific framework to inform policy discussions.” 

The report then maps the relationship between specific types of risk and varying levels of access in the form of a matrix. Based on the results of this novel analytical approach, it draws a number of preliminary conclusions about the relationship between risk and access to AI foundation models:  

  • Uninhibited access to powerful AI models and their components significantly increases the risk these models pose across a range of categories, as well as the ability of malicious actors to abuse AI capabilities and cause harm.
  • Specifically, as access increases, the risks of malicious use (such as fraud and other crimes, the undermining of social cohesion and democratic processes, and/or the disruption of critical infrastructure), compliance failure, taking the human out of the loop, and capability overhang (model capabilities and aptitudes not envisioned by their developers) all increase. 
  • At the highest levels of access, the risk of a “race to the bottom” (a situation in which conditions in an increasingly crowded field of cutting-edge AI models might incentivize developers and leading labs to cut corners in model development) increases, assuming a “winner takes all” dynamic. 
  • As access increases, the risk of reinforcing bias (the potential for AI to inadvertently further entrench existing societal biases and economic inequality as a result of biased training data or algorithmic design) fluctuates. 
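For readers who want a concrete feel for the matrix described above, the following is a minimal, purely illustrative sketch (in Python) of one way such a risk-by-access structure could be represented. The access levels follow the gradient named in the report; the risk category names, numeric scale, and ratings are placeholder assumptions for illustration only, not the report’s actual findings.

```python
# Illustrative sketch only: one way to represent a risk-by-access matrix.
# Access levels follow the gradient named in the report; the categories and
# ratings below are placeholder assumptions, not the report's findings.
from enum import IntEnum

class AccessLevel(IntEnum):
    FULLY_CLOSED = 0
    PAPER_PUBLICATION = 1
    QUERY_API = 2
    MODULAR_API = 3
    GATED_DOWNLOAD = 4
    NON_GATED_DOWNLOAD = 5
    FULLY_OPEN = 6

# Hypothetical qualitative ratings (0 = negligible .. 3 = high) per risk category.
RISK_MATRIX = {
    "malicious_use":       {lvl: min(3, lvl // 2) for lvl in AccessLevel},
    "capability_overhang": {lvl: min(3, lvl // 2) for lvl in AccessLevel},
    "reinforcing_bias":    {lvl: 1 + (lvl % 2) for lvl in AccessLevel},  # fluctuates
}

def risk_at(category: str, level: AccessLevel) -> int:
    """Return the illustrative rating for a risk category at a given access level."""
    return RISK_MATRIX[category][level]

if __name__ == "__main__":
    # Print how each illustrative rating trends as access moves from closed to open.
    for category, by_level in RISK_MATRIX.items():
        trend = [by_level[lvl] for lvl in AccessLevel]
        print(f"{category:>20}: {trend}")
```

In this toy representation, reading a row from left to right traces a single risk category across the gradient of access, which is the same way the report’s matrix is meant to be read.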

Building the Foundation for Future Efforts

Going forward, IST will conduct further research into this problem set through our AI Foundation Model Access Initiative. “As IST continues our research into safe and secure AI, we are proud to release this initial report on foundational model access. This paper is only the beginning of a larger initiative in which we will strive to outline potential solutions–technical, policy, and otherwise–to the most serious of the risks this collaborative effort has identified,” said Chief Executive Officer Philip Reiner. “We are looking forward to continuing to work with our partners across industry, government, academia and civil society to identify the benefits and dangers of generative AI and then together take the steps necessary to mitigate harm.” 

Working group member David Evan Harris drew attention to the potential risks posed by advanced AI models across levels of access, which, if left unaddressed, could cause harm to individuals, groups, and society. “Policymakers in the EU and US have already begun to pick up on these risks, but it’s important to make sure that new policies around the world are developed, enacted and enforced quickly and effectively, so that AI can be harnessed for the public good and not turned against us,” he said. 

At IST, we will continue to do our part. “The working group’s conclusions point to the need for clear technical mechanisms and policy interventions to maintain continued AI benefits while ensuring these new capabilities do not cause harm,” said IST’s Chief Trust Officer Steve Kelly. “This report lays important groundwork to inform the global debate on foundation model openness and responsible release.”