Earlier this year, IST brought together experts from academia, industry, government, the military, and civil society for a scenario-driven workshop focused on a central question: How will the integration of novel artificial intelligence into NC3 systems over the next five years transform strategic stability and deterrence dynamics?
Each participant in the workshop brought different perspectives and areas of expertise. Some specialized in human-machine interaction or researched AI decision-making algorithms, while others worked directly with U.S. nuclear systems or engaged with international organizations on ensuring nuclear deterrence.
In order to answer this guiding question, participants needed to get on the same page about the nature of nuclear command, control, and communications, or NC3 for short. They needed to develop a common understanding of what NC3 is, how it ensures the authorized use of nuclear weapons, and which specific systems could be, or have already been, integrated with artificial intelligence.
In a primer originally drafted to guide discussions at the workshop and now available to the public, authors Alice Saltini, Sylvia Mishra, and Philip Reiner provide a technical, in-depth analysis of the systems and subsystems that comprise the U.S. NC3 architecture. To learn more about the primer and what it contains, we sat down for a Q&A with author and Director of Nuclear Policy Sylvia Mishra.
Q&A: NC3 and Artificial Intelligence
What exactly is the “fourth leg” of the U.S. nuclear enterprise?
Nuclear deterrence rests on the nuclear triad—a three-legged structure consisting of land-based intercontinental ballistic missiles (ICBMs), sea-based submarine-launched ballistic missiles (SLBMs), and air-based strategic bombers. Just as a three-legged stool provides stability, nuclear weapons policy planners designed the nuclear triad to ensure that even if adversaries could neutralize one or two delivery systems, at least one would survive to deliver a retaliatory strike, preserving deterrence.
Nuclear Command, Control, and Communications (NC3) systems are referred to as the “fourth leg” of the U.S. nuclear enterprise because they connect all three components of nuclear deterrence to the U.S. President at all times. Cold War military planners realized that the three delivery systems would be rendered useless without the ability to detect attacks, communicate with U.S. leadership, issue authorized launch commands, and coordinate responses.
Let’s take a step back. Why is nuclear command, control, and communications so critical for global stability? What does it do, and why do we need it?
Approximately 204 systems and subsystems make up the nuclear command and control enterprise. NC3 is critical for global stability because it serves as the essential link between political decision-making and nuclear weapons. The overarching objective of NC3 is to support presidential decision-making in a crisis, which requires providing accurate information about nuclear use and incoming threats, facilitating communication with advisors, and executing nuclear strikes. Military planners view it as the “nervous system” that connects nuclear weapons to legitimate authority.
Nuclear command and control does not merely act as a supporting system linking the three legs of nuclear deterrence—it forms the essential foundation upon which the entire nuclear enterprise rests. Without it, the system cannot function effectively. Nuclear deterrence depends not just on having weapons, but on maintaining reliable command and control over them at all times.
Reliable command and control of nuclear weapons serves multiple critical functions: establishing and maintaining credible deterrence; preventing unauthorized use of nuclear weapons; ensuring crisis communication and management to allow for measured responses and de-escalation opportunities; and reducing reliance on automated hair-trigger alerts that could pressure policymakers into “use it or lose it” decisions during crises.
In your paper, you zero in on three NC3 subsystems: Strategic Warning, Decision Support, and Adaptive Targeting. What are they, and why focus on them in particular?
We chose to focus on these three for several reasons: first, we believe that the strategic warning, decision support, and adaptive targeting subsystems are the most likely to see significant AI integration; second, they are likely to create the highest level of risk; and third, taken together, this makes them the most consequential elements of the kill chain.
Strategic Warning: This can be described as the first line of defense, responsible for the early detection of potential threats. This function relies on a network of sensors—such as satellites, radars, and other intelligence-gathering systems—to monitor the operational environment. For instance, early warning systems might detect a missile launch or unusual activity in an adversary’s military posture. Because NC3 depends so heavily on this sensor data to detect missile launches, military movements, or other unusual activity, any faulty identification of patterns or anomalies resulting from AI integration (indicating anything from changes in doctrine, planning, or force posture all the way up to an impending attack) could incentivize nuclear use for compellence, pre-emptive, or retaliatory purposes.
Decision Support: Decision support systems provide commanders with a real-time, comprehensive view of the strategic and operational landscape. They integrate sensors, communication networks, and command centers to help leaders quickly understand complex situations and make informed decisions. By gathering early warnings of threats, fusing data from multiple sources, and presenting actionable information, these systems ensure that orders can be delivered to forces under any conditions.
Adaptive Targeting: Adaptive targeting refers to the dynamic updating and reprioritization of targets as new intelligence becomes available or as battlefield conditions change. Modern NC3 systems emphasize flexibility—the 2022 Nuclear Posture Review even identifies “adaptive nuclear planning” as one of the five essential functions of NC3. This “adaptive” component allows commanders to swiftly adjust targets as emergencies unfold. AI can significantly enhance this function by providing real-time data processing and advanced pattern recognition that automatically analyzes vast intelligence feeds, detects emerging threats and anomalies, and generates predictive models to update target priority lists. At their core, these AI-enhanced systems have the potential to continuously fuse sensor inputs, assess threat levels, and offer targeting recommendations.
There’s a lot of hype about the integration of AI into NC3. From your perspective, what’s the reality on the ground? To what extent is AI integrated into NC3 systems and subsystems?
Artificial intelligence technologies have long been integrated into NC3—this is not a novel concept. In fact, AI integration is now an unavoidable reality in many of the capabilities that feed into the nuclear command, control, and communications system of systems, particularly for threat identification, pattern recognition, and tracking for situational awareness. Today, AI is used in the sensors and early warning components that feed into NC3 to identify and track potential threats, and it processes vast amounts of data from multiple sources to provide situational awareness, with advanced AI capabilities assisting in flagging anomalies and potential threats in intelligence feeds.
Overall, AI integration is taking place broadly throughout the NC3 enterprise. However, the central debate focuses on the extent and degree to which AI should be integrated, and what constitutes meaningful human control of nuclear weapons as advances in human-machine teaming continue to evolve.
On the flip side, what are you most concerned about when it comes to the integration of AI and NC3? What should we be on the lookout for in the next three to five years?
It is likely that the integration of AI into NC3 architectures will continue in the United States and other nuclear-armed states. For me, the most concerning aspect is that as states recognize the military advantages that AI-enabled systems might offer, more nuclear-armed states will race to integrate AI into their NC3 architectures. Because this integration is happening in a sporadic and diffuse manner, there is little standardization in how, and to what extent, AI is being integrated. This raises concerns that states could “race to the bottom,” so to speak, in adopting AI-enhanced systems and capabilities.
This arms race is particularly alarming because there is little understanding of how exactly adversaries are integrating AI. The black-box nature of this integration, combined with the lack of overall standards, creates a frightening situation, multiplying the risks we face. Additionally, there are no international standards governing the safe and secure adoption of AI in NC3 architectures. Nuclear-armed states, especially the United States, China, and Russia, have not been able to establish agreed-upon standards regarding how to test the robustness of algorithms that are integrated into critical and sensitive systems.
While we see substantial debates and discussions about appropriate limits, safeguards, and codes of conduct for AI integration, there remains little political action or development of coherent policies or joint strategies among nuclear-armed states on how to proceed with AI adoption. As nuclear-armed states make decisions about the trade-offs between the potential benefits of AI capabilities and the catastrophic risks of AI failures in nuclear systems, we must keep in mind that humanity cannot afford to play these risky games with NC3 systems.