AI and NC3 Initiative Enters Phase 2: Bridging Perspectives on Risks and Opportunities

December 11, 2025

IST, with support from Longview Philanthropy, is entering phase 2 of our efforts on the integration of artificial intelligence into nuclear command, control, and communications. In its next phase, the AI and NC3 initiative will establish an executive committee and four working groups, driving further research on competitive AI dynamics, global perspectives on AI-NC3, AI technical development and trajectories, and AI norms and governance.

As advanced AI tools continue to evolve and potentially integrate with next-generation nuclear command, control, and communications (NC3) systems and subsystems, there is a clear and urgent need for frameworks, guidelines, and technical standards that ensure the safe, secure, and ethical implementation of AI in NC3 contexts. Toward that goal, in April 2025, the Institute for Security and Technology hosted a workshop that brought together the policy, military, and technology communities to jointly examine and unpack issues of AI and NC3 operations.

Based on the findings from that seminal meeting, IST is pleased to announce additional funding from Longview Philanthropy to drive further work on finding common ground. Through our new AI and NC3 initiative, IST aims to engage policymakers and industry leaders in collaborative efforts to enhance nuclear risk reduction at the intersection of AI and nuclear systems. The initiative aims to ensure the safe and secure integration of AI into NC3 systems, maintain meaningful human control by keeping nuclear operators “in the loop,” and establish norms, codes of conduct, and testing and evaluation standards for the use of AI in nuclear systems. Through this project, IST also intends to foster a collaborative environment and build a nuclear policy community that facilitates people-to-people connections and dialogues across the nuclear-armed states.

As AI accelerates decision-support capabilities and varied NC3 subsystems in ways that heighten competition and escalation risks, IST’s working groups will examine ways to strengthen crisis avoidance and strategic stability among nuclear-armed states while preserving effective nuclear deterrence.

In its next phase, this effort will establish an Executive Committee (EC) and four working groups. Under the guidance of the EC, the working groups will convene experts across technical, policy, military, and ethics domains to produce actionable recommendations that mitigate risks while maximizing beneficial applications of AI in NC3 systems.

The April 2025 workshop revealed that substantial work is needed to understand the scope, scale, adoption methods, and certification processes for AI integration with nuclear weapons. It also highlighted the lack of deep analysis of AI’s implications for nuclear deterrence and strategic stability in the context of NC3 systems.

The workshop surfaced several key takeaways and areas for further research:

  1. Strategic stability hinges on human control, transparency, and shared definitions. As AI increasingly becomes integrated into NC3 systems, the lack of human control, transparency, and shared definitions exacerbates the risks of miscalculation and inadvertent escalation.
  2. Real-world experimentation is urgently needed. To help stress test decision support tools under duress, the policy, military, and technical communities need to have access to tailored AI-NC3 war games.
  3. Verification and accountability are weak. No existing framework credibly addresses certification and verification of AI in military systems, much less for NC3.

The AI and NC3 initiative’s Executive Committee will serve as a high-level group of decision-makers from the policy and technical domains. The EC will be responsible for guiding and shaping the research and lines of effort for four Working Groups (WGs):

  1. Competitive AI Dynamics
  2. Global Perspectives on AI-NC3
  3. AI Technical Development & Trajectories
  4. AI Norms and Governance 

The efforts of these working groups are designed to build on and inform one another. Ultimately, the working groups aim to produce deliverables for a variety of stakeholders, such as the U.S. Department of Defense, STRATCOM and other Combatant Commands, the National Security Council, governments of nuclear-armed states, and those building AI systems in the private sector.

Philip Reiner, IST’s CEO, explains: “The rapid pace of advancement in AI and emerging technologies simply outstrips the pace of the current Department of Defense acquisition process. Similarly, the accelerating integration of advanced AI tools is not matched by corresponding governance frameworks or standardized testing and evaluation protocols for AI models. This gap creates substantial uncertainties that will have cascading effects on global strategic stability.

To address these critical questions and foster greater awareness and problem-solving, we launched this effort to bridge the gap between the technical industry and the national security community. Through comprehensive gap analysis and actionable recommendations, we aim to advance solutions to these complex challenges. I am excited that Sylvia Mishra will lead this important initiative, and I am sincerely thankful to Matt Gentzel and Carl Robichaud for their support of IST’s work on AI and nuclear policies and nuclear risk reduction efforts.”

We are grateful for the generous support of Longview Philanthropy in enabling us to drive impact on this critical issue, ultimately ensuring that nuclear-armed states are able to sustain strategic stability while maintaining effective nuclear deterrence postures.
