On April 8, 2025, the Institute for Security and Technology’s (IST) AI Risk Reduction team hosted a crisis simulation exercise titled Responding to Crisis Scenarios around AI Risks and the Future of Conflict. The workshop took place in Washington, D.C., on the sidelines of the Johns Hopkins SAIS Emerging Technologies Symposium, a convening of leading voices from government, industry, academia, and the think tank community exploring the most pressing questions shaping the future of innovation and technology.
We led a hands-on exercise exploring critical challenges at the intersection of artificial intelligence (AI) and national security. Participants began by engaging with IST’s latest research on AI risks, including malicious use, AI system failures, and cybersecurity implications, before diving into immersive crisis scenarios.
At the heart of the simulation was a high-stakes, time-sensitive scenario involving ambiguous intelligence related to a potential loss-of-control incident at a frontier AI lab. As the scenario evolved to include signs of foreign interference, infrastructure vulnerabilities, and the possible discovery of Artificial General Intelligence (AGI) by the People’s Republic of China (PRC), we challenged participants to assess incomplete and uncertain information, coordinate across silos, and determine appropriate responses in real time. To mirror the complexity of real-world crisis response, participants assumed the roles of various stakeholders within the national security ecosystem, including U.S. government departments and agencies such as the Central Intelligence Agency (CIA) and the Department of State, as well as foreign actors like the PRC. Participants navigated divergent priorities, institutional blind spots, and the practical limitations of decision-making under pressure.
Throughout the exercise, teams wrestled with urgent questions about intelligence verification, proportional response, containment strategies, and the ethical and legal boundaries of national security action. The dynamic scenario required participants to continually adapt their approaches to risk management, interagency coordination, and long-term strategic planning. As one attendee, Peter Huang, remarked, “There is nothing in place—no plan, no manual—to react to any of these contingencies. And it seems impossible to develop one that is able to respond comprehensively to possible threats and scenarios. Workshops like these are so valuable in the way they can provide a starting point for thinking about future crises and how to respond to them.”
The insights generated during the workshop, both from participant deliberations and from the outcomes of the simulation, will inform the next phase of research, funded by the Patrick J. McGovern Foundation, on loss of control and human oversight of AI systems.
The simulation made one thing clear: current crisis response models may not be equipped for the speed and ambiguity of AI-driven threats. As AI capabilities evolve, so must our frameworks for meaningful human oversight, decision-making, and coordination. By workshopping these challenges today, we take a critical step forward in understanding how we might respond when the stakes are highest.