How will AI-related techniques have an impact on international security and stability, and what needs to be done to avoid unintended consequences?
IST set out in 2017 to understand the potential risks posed by novel AI techniques to international security. We were convinced then, as we are now, that there needs to be a much broader policy and public discussion about the role of AI-related techniques in military decision making, including but not limited to the ongoing debate over the role of autonomous weapons.
We found, however, that much of the analysis available for critical decision making remains top-level, almost superficial. Much work remains to be done. An array of further research, both theoretical and practical, is needed to begin to understand what may be one of the greatest challenges to international stability in the coming decades, and what needs to be done today in anticipation of those changes. The closest parallel, in our minds, may well be the massive effort undertaken by the RAND Corporation and others at the dawn of the nuclear era. In that light, we find ourselves today much as they did in the late 1940s and 1950s: at the very beginning of understanding the implications of the technologies we are building, except that now everything is moving much, much more rapidly.
On June 29, 2018, IST and the Center for Global Security Research at Lawrence Livermore National Laboratory hosted a roundtable discussion with participants from academia, research, venture capital, civil society, and industry. The discussion specifically investigated the potential security implications of these technologies as they are considered for military use. It was the first in a series of workshops aimed at better understanding the potential role AI will play in international stability and deterrence.
The consensus among the discussants was that these technologies are not yet “ready for primetime,” on a number of levels.
As one participant explained, “machine learning is still very reactive — it’s just not sustainable.”
Follow-up small-group workshops and ongoing engagement with the AI research community in the San Francisco Bay Area led to a white paper detailing the following:
Background