Advances in AI-enabled technologies are significantly reshaping technical development, patterns of governance, and the risks of war. The relationship between government and the technology sector has particular consequences for national security when it comes to advanced computing. Building on our early work on the integration of AI into nuclear command, control, and communications systems, IST is engaging deep technical expertise to assess the architectures, tools, opportunities, and risks at the intersection of AI and cybersecurity across national security applications. This work includes analysis of open source models and the proliferation of advanced techniques, as well as deliberate mapping of policy conversations so that industry and government can effectively provide essential input.
“To ensure that diplomatic efforts are informed by technical expertise, transparency and trust-building efforts must revolve around collaboration between senior decision makers and the AI safety and alignment research communities.”
– Forecasting the AI and Nuclear Landscape
AI-NC3 Integration in an Adversarial Context
Sponsored by the U.S. Department of State’s Bureau of Arms Control, Verification, and Compliance, this research presents a novel set of confidence-building measures, scaled to varying levels of effort and international collaboration.
February 2023 | Report
Forecasting the AI and Nuclear Landscape
This report, the product of a partnership between IST and Metaculus, assessed the risks of escalation between the United States and China, including those arising from the integration of artificial intelligence into nuclear command, control, and communications.
September 2022 | Report
IST in the News
Philip Reiner on Nuclear Command, Control, and Communications
Philip Reiner joined the Future of Life Institute podcast to talk about the integration of AI into nuclear command, control, and communications. He noted the ever-tightening decision cycle of integrating machine learning into NC3: “If the United States sees China heading down that path… you’re almost being irresponsible to not head down that same path, to figure out what you’re capable of and what you may need to be doing in order to stay ahead and/or deter another nation from getting ahead of you and putting you at risk.”
October 2022 | Podcast
When machine learning comes to nuclear communication systems
Philip Reiner, Alexa Wehsener, and M. Nina Miller underline the importance of credible NC3 systems in an op-ed for C4ISRNET. “Nuclear deterrence depends on fragile, human perceptions of credibility,” they write. “The United States and its competitors should take care that these new tools do not inadvertently accelerate crisis instability or an arms race.”
April 2020 | Commentary
The Real Value of Artificial Intelligence in Nuclear Command and Control
In an op-ed in War on the Rocks, Philip Reiner and Alexa Wehsener contend that “various novel artificial intelligence techniques likely under consideration to improve America’s NC3 systems, while certainly entailing risk, also hold the possibility of generous rewards.” Yet they call for “great care” when considering the degree to which deep learning is integrated into future NC3 systems.
November 2019 | Commentary
AI and the Military: Forever Altering Strategic Stability
Should we be concerned about an AI “arms race”? This paper calibrates the potential risks and rewards of military applications of AI technologies and determines which issues demand further research and action.
February 2019 | Report
Roundtable Discussion: AI and Human Decision-Making
This workshop summary outlines the roundtable discussion hosted by Technology for Global Security and the Center for Global Security Research to explore how artificial intelligence-related techniques and tools will impact international security policymaking.
November 2018 | Report
AI and Human Decision Making: AI and the Battlefield
Current AI capabilities remain limited to narrow, well-defined domains. The “black box” nature of state-of-the-art AI/ML algorithms offers limited insight into their decision-making processes and conclusions. Furthermore, the ethical implications of AI have yet to be fully understood, identified, or agreed upon.
November 2018 | Report
Assessing the Strategic Effects of Artificial Intelligence
This workshop summary describes a roundtable discussion, hosted by the Center for Global Security Research at Lawrence Livermore National Laboratory in collaboration with Technology for Global Security, on whether the United States government should consider adjusting its approach to nuclear deterrence and strategic stability in light of the wide range of developments in the AI field.
September 2018 | Report
This project dives deeper into the U.S. NC3 architecture, using open-source information to map the current architecture as accurately as possible at the subcomponent level, identify which subcomponents are plausible candidates for the integration of machine learning and deep learning-driven capabilities, and identify the risks and opportunities this creates for the U.S. NC3 system of systems.
Each country perceives accidents in complex systems differently, and some do not even acknowledge that they exist. By studying how six nuclear-armed countries responded to complex system accidents in the past, the Institute for Security and Technology aims to anticipate and inform potential responses to these dangerous—and yet almost guaranteed—future accidents in highly complex systems.
The Institute for Security and Technology (previously Technology for Global Security or Tech4GS) set out in 2017 to understand the potential risks posed by novel AI techniques to international security. We hosted a series of workshops to better understand the potential role AI will play in international stability and deterrence.
From the NatSpecs Blog