Virtual Library

Our virtual library is an online repository of all of the reports, papers, and briefings that IST has produced, as well as works that have influenced our thinking.

Reports

A Lifecycle Approach to AI Risk Reduction: Tackling the Risk of Malicious Use Amid Implications of Openness

Louie Kangeter


Memo

Testimony: Red Alert: Countering the Cyberthreat from China

Steve Kelly


Reports

Ransomware Task Force: Doubling Down

Ransomware Task Force


Reports

Information Sharing in the Ransomware Payment Ecosystem: Exploring the Delta Between Best Practices and Existing Mechanisms

Zoë Brammer


Memo

Testimony: Held for Ransom: How Ransomware Endangers Our Financial System

Megan Stifel


Memo

Roadmap to Potential Prohibition of Ransomware Payments

Ransomware Task Force Co-Chairs


Reports

Unlocking U.S. Technological Competitiveness: Evaluating Initial Solutions to Public-Private Misalignments

Ben Purser, Pavneet Singh


Contribute to our Library!

We welcome additional suggestions from readers and will consider adding further resources; much of our work has already come through crowd-sourced collaboration. If, by any chance, you are an author whose work is listed here and you do not wish it to be included in our repository, please let us know.

Assessing the Strategic Effects of Artificial Intelligence

Center for Global Security Research, Lawrence Livermore National Laboratory; Institute for Security and Technology. Paige Gasser, Rafael Loss, Andrew Reddie

SUMMARY

On September 20-21, the Center for Global Security Research (CGSR) at Lawrence Livermore National Laboratory (LLNL), in collaboration with Technology for Global Security (Tech4GS), hosted a workshop to examine the implications of advances in artificial intelligence (AI) for international security and strategic stability. Participating policymakers, scholars, technical experts, and private sector representatives addressed the central question of whether the United States government should consider adjusting its approach to nuclear deterrence and strategic stability in light of the wide range of developments in the AI field. The workshop examined the potential risks and opportunities presented by military applications of AI and assessed which of these require consideration in the near term, and which might be exaggerated. For the purposes of the workshop, we took a broad view of potential future applications of AI, including enablers of autonomous action; tools for decision support, simulation, and modeling; and tools for collecting and analyzing very large volumes of information. We sought to distinguish near-term impacts from longer-term possibilities, which are of course more difficult to forecast.
