Artificial Intelligence and Advanced Computing

Harnessing opportunity and mitigating risk in security applications

Advances in AI-enabled technologies are reshaping technical development, patterns of governance, and the risks of war. The relationship between government and the technology sector has particular consequences for national security when it comes to advanced computing. Building on our early work on the integration of AI into nuclear command, control, and communications systems, IST is engaging deep technical expertise to assess the architectures, tools, opportunities, and risks at the intersection of AI and cybersecurity across national security applications. This work includes analysis of open source models and the proliferation of advanced techniques, as well as deliberate mapping of policy conversations so that industry and government can effectively provide essential input.

“To ensure that diplomatic efforts are informed by technical expertise, transparency and trust-building efforts must revolve around collaboration between senior decision makers and the AI safety and alignment research communities.”
Forecasting the AI and Nuclear Landscape

Featured Content

The Implications of Artificial Intelligence in Cybersecurity: Shifting the Offense-Defense Balance
How is AI altering the offense-defense balance in cybersecurity? Drawing on a survey of cybersecurity practitioners and researchers about the current and potential future uses of AI they have seen deployed, this report synthesizes the findings into seven key recommendations to help harness AI for protection, better understand how it might be weaponized by bad actors, and address digital security threats that contribute to systemic global risk.
October 2024 | Report

How Does Access Impact Risk? Assessing AI Foundation Model Risk Along A Gradient of Access
In recent months, a number of leading AI labs have released advanced AI systems. While some models remain highly restricted, limiting who can access the model and its components, others provide fully open access to their model weights and architecture. To date, there is no clear method for understanding the risks that can arise as access to these models increases. On December 13, 2023, IST and contributors released a report and matrix that seek to address this gap.
December 2023 | Report

IST Hosts Convening on Responsible Use of Artificial Intelligence with U.S. State Department and Private Sector Partners
On October 25, 2023, the Institute for Security and Technology’s (IST) Innovation and Catastrophic Risk team hosted Assistant Secretary of State for Arms Control, Verification and Compliance (AVC) Mallory Stewart in San Francisco for a convening on the responsible use of artificial intelligence (AI). This private roundtable connected the U.S. Department of State with national security policy experts and private sector leaders in the Bay Area actively involved in the development and deployment of AI technologies, the research behind them, and their production at scale.
October 2023 | Blog

AI-NC3 Integration in an Adversarial Context 
Sponsored by the U.S. Department of State’s Bureau of Arms Control, Verification, and Compliance, this research presents a novel set of confidence-building measures scaled across a range of levels of effort and international collaboration.
February 2023 | Report

Forecasting the AI and Nuclear Landscape 
This report, the product of a partnership between IST and Metaculus, assessed the risks of escalation between the United States and China, including risks arising from the integration of artificial intelligence into nuclear command, control, and communications.
September 2022 | Report

IST in the News

Philip Reiner on Nuclear Command, Control, and Communications
Philip Reiner joined the Future of Life Institute podcast to talk about the integration of AI into nuclear command, control, and communications. He noted that integrating machine learning into NC3 would further tighten an already compressed decision cycle: “If the United States sees China heading down that path… you’re almost being irresponsible to not head down that same path, to figure out what you’re capable of and what you may need to be doing in order to stay ahead and/or deter another nation from getting ahead of you and putting you at risk.”
October 2022 | Podcast

When machine learning comes to nuclear communication systems
Philip Reiner, Alexa Wehsener, and M. Nina Miller underline the importance of credible NC3 systems in an op-ed for C4ISRNET. “Nuclear deterrence depends on fragile, human perceptions of credibility,” they write. “The United States and its competitors should take care that these new tools do not inadvertently accelerate crisis instability or an arms race.”
April 2020 | Commentary

The Real Value of Artificial Intelligence in Nuclear Command and Control
In an op-ed in War on the Rocks, Philip Reiner and Alexa Wehsener contend that “various novel artificial intelligence techniques likely under consideration to improve America’s NC3 systems, while certainly entailing risk, also hold the possibility of generous rewards.” Yet they call for “great care” when considering the degree to which deep learning is integrated into future NC3 systems.
November 2019 | Commentary

Content

AI and the Military: Forever Altering Strategic Stability
Should we be concerned about an AI “arms race”? This paper calibrates the potential risks and rewards of military applications of AI technologies and determines which issues demand further research and action. 
February 2019 | Report

Roundtable Discussion: AI and Human Decision-Making 
This workshop summary outlines the roundtable discussion hosted by Technology for Global Security and the Center for Global Security Research to explore how artificial intelligence-related techniques and tools will impact international security policymaking. 
November 2018 | Report

AI and Human Decision Making: AI and the Battlefield
Current AI capabilities remain limited to narrow, well-defined domains. The “black box” nature of state-of-the-art AI/ML algorithms offers limited insight into their decision-making processes and conclusions. Furthermore, the ethical implications of AI have yet to be fully understood, identified, or agreed upon.
November 2018 | Report

Assessing the Strategic Effects of Artificial Intelligence
This workshop summary describes a roundtable discussion, hosted by the Center for Global Security Research at Lawrence Livermore National Laboratory in collaboration with Technology for Global Security, on whether the United States government should consider adjusting its approach to nuclear deterrence and strategic stability in light of the wide range of developments in the AI field.
September 2018 | Report

Projects

Machine Learning and U.S. Nuclear Command Control and Communications

This project uses open-source information to map the current U.S. NC3 architecture as accurately as possible at the sub-component level, identify which subcomponents are plausible candidates for the integration of machine learning and deep learning-driven capabilities, and assess the risks and opportunities this creates for the U.S. NC3 system of systems.

How Nations Respond to Accidents in Complex Systems

Each country perceives accidents in complex systems differently, and some do not even acknowledge that they exist. By studying how six nuclear-armed countries responded to complex system accidents in the past, the Institute for Security and Technology aims to anticipate and inform potential responses to these dangerous, yet almost guaranteed, future accidents in highly complex systems.

Artificial Intelligence and Strategic Stability

The Institute for Security and Technology (previously Technology for Global Security, or Tech4GS) set out in 2017 to understand the potential risks posed by novel AI techniques to international security. We hosted a series of workshops to better understand the role AI will play in international stability and deterrence.

From the NatSpecs Blog