Artificial Intelligence and Strategic Stability

As countries around the world increasingly focus on machine learning and artificial intelligence (AI) as the fundamental driver of the next generation of warfare, advances in AI-related technologies have the potential to significantly disrupt human-machine interaction, and potentially the assumptions underpinning warfare more broadly.

Artificial intelligence has burst upon the national security scene with an intensity that has surprised even the most veteran observers of the national policy discourse. The renewed spike of interest is driven in part by popular characterizations of novel AI techniques as revolutionary; by the rapid absorption of nascent AI-based technologies, primarily driven by novel machine learning techniques, into diverse sectors of the global economy; and by the great power ambitions of America’s competitors and potential adversaries. There are mounting fears that the United States is underprepared to manage these new challenges, and that it will end up “offset” by the sheer scale at which adversaries intend to deploy AI. Could AI disrupt and reshape the international strategic balance? Will imbalances and changing perceptions of capabilities undermine the status quo required to maintain strategic stability between near-peer powers?

As this most recent wave of novel AI approaches began to gain the attention of the national security and technology communities, there was no clear consensus on:

  • The near- and long-term opportunities and risks of AI, and
  • The means to mitigate potentially destabilizing advances at the policy, legislative, and technological level.

In 2018, the Institute for Security and Technology (IST, then known as Technology for Global Security, or Tech4GS) undertook a joint initiative with the Center for Global Security Research (CGSR) at Lawrence Livermore National Laboratory to understand and manage the long-term opportunities and risks posed by AI-related technologies for international security and warfare. Through our work with CGSR, we aimed to begin a years-long process of building the repository of information and insight necessary to fill this gap.

As part of this effort, on September 20-21, 2018, the Center for Global Security Research (CGSR) at Lawrence Livermore National Laboratory (LLNL), in collaboration with IST, hosted a workshop to examine the implications of advances in artificial intelligence (AI) for international security and strategic stability. Participating policymakers, scholars, technical experts, and representatives of private sector organizations addressed the central question of whether the United States government should adjust its approach to nuclear deterrence and strategic stability in light of the wide range of developments in the AI field. The workshop examined the potential risks and opportunities presented by military applications of AI and assessed which of these require consideration in the near term—and which might be exaggerated.

For the purposes of the workshop, we took a broad view of potential future applications of AI, including enablers of autonomous action; tools for decision support, simulation, and modeling; and tools for collecting and analyzing very large volumes of information. We sought to distinguish near-term impacts from longer-term possibilities, which are of course more difficult to forecast. We discussed five key topics:

  • Revisiting Strategic Stability and Recent Developments in AI
  • Comparing AI Adoption and Integration Across Countries
  • AI and Deterrence Across Domains
  • Operationalizing Automation and AI for the Battlefield
  • Ensuring Strategic Stability in the Age of AI

Read about the workshop in our findings report: Assessing the Strategic Effects of Artificial Intelligence 

Further discussions, convened by IST (formerly Tech4GS) and CGSR in June and September 2018 in the California Bay Area, continued to engage a diverse mix of public and private sector experts in exploring the roles and consequences of AI in the 21st-century security context. Questions and arguments developed through these workshops have continued to frame our thought leadership:

  • Which technologies have potential near-term military applications, and which do not?
  • Of those, which are potentially consequential for strategic stability? How, and why? How could AI alter the fundamental calculus of deterrence?
  • How could AI-assisted military platforms affect regional stability, and what is the connection between regional stability and strategic deterrence?
  • How will global competition in applying AI to military missions affect strategic stability? Should we be concerned about an “AI arms race”?
  • What are the risks of unintended consequences and strategic surprise driven by AI?

Read more about these questions and the answers we derived in this initial series here: AI and the Military: Forever Altering Strategic Stability. The paper aims to answer some of these questions and, more broadly, to contribute to the growing body of research and analysis on AI and warfare, calibrating the potential risks and rewards of military applications of AI technologies and determining which issues demand further research and action.

From our Library


Assessing the Strategic Effects of Artificial Intelligence

Center for Global Security Research, Lawrence Livermore National Laboratory; Institute for Security and Technology. Paige Gasser, Rafael Loss, Andrew Reddie



AI and the Military: Forever Altering Strategic Stability

Institute for Security and Technology, Center for Global Security Research