AI Foundation Model Access Initiative

Announcing New Philanthropic Support for IST’s AI Foundation Model Access Initiative

February 12, 2024 – Artificial intelligence (AI) technologies are advancing and proliferating at an astounding pace, creating new benefits and opportunities, but, if left unchecked, they could cause significant harm. As the Institute for Security and Technology (IST) continues to study the risks and opportunities associated with degrees of access to AI technologies through our AI Foundation Model Access Initiative, we are thrilled to receive support from the Patrick J. McGovern Foundation to advance this important effort.

The Patrick J. McGovern Foundation is a global, 21st-century philanthropic organization focused on bridging the frontiers of artificial intelligence, data science, and social impact. By advancing AI and data science solutions to address real-world problems such as access to quality health care, climate change, and digital access, the Foundation hopes to create an equitable and sustainable future.

“To ensure that researchers and technologists design cutting-edge AI in a way that promotes human dignity, private companies, academia, and civil society must also build a common understanding of risk,” said Nick Cain, Director of Strategic Grants at the Patrick J. McGovern Foundation. “IST’s Foundation Model Access Initiative has already made important contributions to this effort, and we hope it will serve as a model for the type of constructive, human-centered dialogue that we believe is required for technology to truly be built for everyone.”

With the Foundation’s support, IST has launched an effort to study how increased access to cutting-edge AI foundation models, across a gradient from fully closed to fully open, drives risk and enables opportunity. To ensure a complete understanding of the AI ecosystem and the stakeholders involved, IST convened an informal series of iterative working sessions with representatives from leading AI labs, academia, and think tanks. The preliminary conclusions of the working group are laid out in our December 2023 report, How Does Access Impact Risk? Assessing AI Foundation Model Risk Along a Gradient of Access, authored by Zoë Brammer with contributions and peer review from Anthony Aguirre, Markus Anderljung, Chloe Autio, Ramsay Brown, Chris Byrd, Gaia Dempsey, David Evan Harris, Vaishnavi J., Landon Klein, Sébastien Krier, Jeffrey Ladish, Nathan Lambert, Aviv Ovadya, Elizabeth Seger, and Deger Turan, along with other contributors and members of the working group who could not be named. The report and accompanying risk matrix serve as resources for policymakers grappling with the question of regulating AI technologies and for AI labs considering their business approaches.

“Too much of the discussion around ‘open source AI’ is ill-defined and lacks nuance, but to mitigate risk, definition and precision are absolutely essential. The benefits of more open postures are generally well known, but the risks have remained broad and undefined,” explained Philip Reiner, Chief Executive Officer at IST. “Through our partnership with the Patrick J. McGovern Foundation, IST is addressing this analytical gap and identifying the risks and opportunities posed by increased access to highly capable models. We believe that this work has the potential to transform the conversation around access; we would not have been able to do it without their support and the powerful insights of the working group members.”

Moving forward, IST will leverage this partnership to engage with the broader community, soliciting feedback on the conclusions of the report and risk matrix through events and discussions. These engagements will inform potential future efforts, including the identification of voluntary versus regulatory approaches, technical mechanisms and industry frameworks, and standards for industry levels of access.

We are grateful to the Patrick J. McGovern Foundation, whose partnership ultimately enables us to advance appropriate policy guardrails and technical mechanisms to ensure the safe and ethical deployment of AI.