AI-NC3 Integration in an Adversarial Context: Strategic Stability Risks and Confidence Building Measures
Alexa Wehsener, Andrew W. Reddie, Leah Walker, Philip Reiner
SUMMARY
Over the past year, the IST team has worked to examine the strategic stability risks posed by integrating AI technologies into nuclear command, control, and communications (NC3) systems across the globe. Sponsored by the U.S. Department of State’s Bureau of Arms Control, Verification, and Compliance, the research aimed to specify the vulnerabilities to strategic stability generated by AI technologies. The project brought together technical AI researchers, policymakers, academics, and industry representatives. Project leaders examined the use of a suite of policy tools in the nuclear context, from unilateral AI principles and codes of conduct to multilateral consensus on the appropriate applications of AI systems.
Sustaining strategic stability will require nuclear weapons states to share their understanding of the risks posed by emerging technologies across both civilian and military domains.
The results of this study suggest grouping confidence building measures (CBMs) into four categories:
- CBMs that involve agreeing to, or communicating an intent to, renounce or limit the use of AI technologies in certain systems.
- CBMs that encourage governments and industry players to agree on standards, guidelines, and norms related to AI trust and safety, as well as “responsible” use of AI technologies.
- CBMs that increase lines of communication, such as hotlines and crisis communications links, and/or improve the quality, reliability, and security of communications in crisis.
- CBMs that encourage AI education and training for policymakers, decision makers, and diplomats, along with the sharing of best practices across the public and private sectors.