Virtual Library

Our virtual library is an online repository of the reports, papers, and briefings that IST has produced, as well as works that have influenced our thinking.

Podcasts

TechnologIST Talks: Looking Back and Looking Ahead: Deep Dive on the New Cybersecurity Executive Order

Carole House, Megan Stifel, and Steve Kelly

Podcasts

TechnologIST Talks: The Offense-Defense Balance

Philip Reiner and Heather Adkins

Reports

The Generative Identity Initiative: Exploring Generative AI’s Impact on Cognition, Society, and the Future

Gabrielle Tran, Eric Davis

Podcasts

TechnologIST Talks: A Transatlantic Perspective on Quantum Tech

Megan Stifel and Markus Pflitsch

Podcasts

TechnologIST Talks: The Future is Quantum

Megan Stifel and Stefan Leichenauer

Reports

Navigating AI Compliance, Part 1: Tracing Failure Patterns in History

Mariami Tkeshelashvili, Tiffany Saade

Podcasts

TechnologIST Talks: The Cleantech Boom

Steve Kelly and Dr. Alex Gagnon

Contribute to our Library!

We also welcome suggestions from readers and will consider adding further resources; much of our work has already grown out of crowd-sourced collaboration. If you are an author whose work is listed here and you do not wish it to be included in our repository, please let us know.

SUBMIT CONTENT

AI-NC3 Integration in an Adversarial Context: Strategic Stability Risks and Confidence Building Measures

Alexa Wehsener, Andrew W. Reddie, Leah Walker, Philip Reiner

SUMMARY

Over the past year, the IST team has been working to examine the strategic stability risks posed by integrating AI technologies into nuclear command, control, and communications (NC3) systems across the globe. Sponsored by the U.S. Department of State’s Bureau of Arms Control, Verification, and Compliance, the research aimed to specify the vulnerabilities to strategic stability generated by AI technologies. The project brought together technical AI researchers, policymakers, academics, and industry experts. Project leaders examined a suite of policy tools in the nuclear context, from unilateral AI principles and codes of conduct to multilateral consensus on the appropriate applications of AI systems.

Sustained strategic stability will require nuclear weapons states to share their understanding of the risks of emerging technologies across both civilian and military domains.

The results of this study suggest grouping confidence-building measures (CBMs) into four categories:

  1. CBMs that involve agreeing to, or communicating an intent to, renounce or limit the use of AI technologies in certain systems.
  2. CBMs that encourage governments and industry players to agree on standards, guidelines, and norms related to AI trust and safety, as well as “responsible” use of AI technologies.
  3. CBMs that increase lines of communication, such as hotlines and crisis communications links, and/or improve the quality, reliability, and security of communications in crisis.
  4. CBMs that encourage AI education and training for policymakers, decision makers, and diplomats, and the sharing of best practices across the public and private sectors.

For media inquiries, please contact Sophia Mauro at [email protected].
