On April 10, 2025, the Institute for Security and Technology’s (IST) Innovation and Catastrophic Risk team hosted a scenario-driven workshop in Washington, DC, titled “The Risks and Opportunities of AI in NC3: Finding Common Ground,” bringing together cross-sector experts.
The workshop realized IST’s long-held goal of convening the policy, military, and technology communities in an unprecedented manner. Sixty senior officials, technical experts, and civil society thought leaders engaged in in-depth analysis and practical discussion of how to safely and reliably integrate artificial intelligence (AI) into Nuclear Command, Control, and Communications (NC3) systems. Together, these subject matter experts examined this crucial national security challenge, identifying shared risks and opportunities presented by the potential integration of cutting-edge AI capabilities into three key NC3 dimensions: strategic warning, decision support, and adaptive targeting.
This unique combination of experts and stakeholders generated in-depth, thoughtful discussions of AI-NC3 issues from a multidimensional perspective. The workshop featured introductory remarks from retired General Robert Kehler, former Commander of United States Strategic Command; Professor Yoshua Bengio, world-renowned AI scientist from the University of Montreal; and key industry leaders. At the pre-workshop dinner, senior former and currently serving Department of Defense leaders, including Lieutenant General Andrew J. Gebara, Deputy Chief of Staff for Strategic Deterrence and Nuclear Integration, HQ USAF, and Ms. Rebecca Hersman, former Director of the Defense Threat Reduction Agency, along with technology industry representative Ms. Kathryn Harris, Head of Growth (Defense) at Scale AI, each offered formal remarks, establishing foundations for the scenario discussions developed by the IST team. Senior officials, including Dr. Todd Sriver, Director of NC3 in the Office of the Under Secretary of Defense for Acquisition and Sustainment (OUSD (A&S)); Colonel Steven Wyatt, Chief of the NC3 Division, HQ USAF; and Rose Gottemoeller, former United States Under Secretary of State for Arms Control and International Security, as well as representatives from think tanks and academia, also actively contributed to the discussion.
During the lunch session, participants also heard from the former director of the Joint Artificial Intelligence Center, retired Lieutenant General Jack Shanahan. Senior military officials and technology stakeholders from Anthropic, OpenAI, Microsoft, Palantir, and other organizations actively contributed throughout the proceedings.
IST Chief Executive Officer Philip Reiner emphasized the importance of this work, saying, “Given IST’s ongoing efforts in the realm of AI and NC3 integration, we see a clear appetite and need to deeply understand these dynamics further. This work has unveiled significant disconnects: acquisitions cannot keep up with innovation, for example, and NC3 operators are not inclined to build in powerful new technical capabilities due to a lack of trust and skepticism in the quality of the data. However, in profound ways, at the same time this work has also made clear that AI is poised to seep into many key nuclear command and control systems and subsystems over time, including indicators and warning, intel fusion, comms path determination, COA development, planning, and more. Despite this trajectory, we continue to lack mechanisms for establishing credibility, signaling, and transparency, and thus maintaining the deterrent’s main intent: keeping bad things from happening at the worst moments.”
The workshop centered on a critical question: How will the integration of novel artificial intelligence into NC3 systems over the next five years transform strategic stability and deterrence dynamics? As real-world AI tools rapidly evolve in capability and application, the scenarios presented to participants explored outstanding strategic, operational, and tactical questions at this critical technological intersection, and were designed to illuminate key policy and technical considerations.
Throughout the day, groups explored four hypothetical worlds: the AI Stability Zone, AI Arms Race, Human Controlled Stability, and Asymmetric Instability, each featuring varying levels of AI integration, governance structures, and human oversight in NC3 operations. Workshop participants analyzed the implications of AI integration within each of those worlds, focusing on the three key NC3 dimensions noted above: strategic warning, decision support, and adaptive targeting.
To facilitate open and thoughtful discussion while avoiding classification constraints, the scenarios were purposefully crafted as alternative futures, enabling participants to engage in robust debate and surface significant unanswered questions that warrant further examination. The team also benefited from the expertise and insights of IST Adjunct Advisors Doug Randall, Brooke Taylor, and Alice Saltini.
The day’s discussions underscored the power of convening across sectors, from policy and government to tech and AI labs. In keeping with IST’s tradition of building bridges across the policy and tech community, the Innovation and Catastrophic Risk team is committed to continuing this critical work amid the rapidly evolving geopolitical landscape. As one team member noted, “we have a lot of work to do.”
Thank you to Longview Philanthropy, and to Matt Gentzel and Carl Robichaud, whose generous support has allowed us to spearhead this initiative and establish a new collaborative foundation for the understanding of the risks and opportunities of AI in NC3.