Archive: compliance
Report
Navigating AI Compliance, Part 1: Tracing Failure Patterns in History
What is the risk of compliance failure in AI foundation models, and how can it be mitigated? The AI Risk Reduction Initiative’s latest report analyzes the history of failures across industries in an effort to avoid future pitfalls, and offers practical frameworks and definitions for practitioners navigating the complex compliance landscape. The first of a two-part series, this report examines 11 case studies from AI-adjacent industries to identify three distinct failure categories: institutional, procedural, and performance.
Tags: artificial intelligence, compliance, cybersecurity, Future of Digital Security
December 10, 2024
Blog
IST, industry and civil society contributors release report assessing risks of increased access to AI foundation models
In recent months, a number of leading AI labs have released advanced artificial intelligence systems. While some models remain highly restricted, limiting who can access the model and its components, others provide fully open access to their model weights and architecture. The potential benefits of these more open postures are generally well understood. This effort therefore turned to the risks, seeking to answer the question: how does access to foundation models and their components affect the risk they pose to individuals, groups, and society?
Tags: AI foundation models, compliance, human in the loop, malicious use, race to the bottom, reinforcing bias
December 13, 2023