Artificial Intelligence and International Security

How will AI-related techniques affect international security and stability, and what must be done to avoid unintended consequences?

The Institute for Security and Technology (previously Technology for Global Security, or Tech4GS) set out in 2017 to understand the potential risks posed by novel AI techniques to international security. We were convinced then, as we are now, that there needs to be a much broader policy and public discussion about the role of AI-related techniques in military decision making, including but not confined to the ongoing debate over the role of autonomous weapons.

We remain convinced, however, that much of the analysis available for critical decision making is top-level, almost superficial. Much work remains to be done. Substantial further research, both theoretical and practical, is needed to begin to understand what may be one of the greatest challenges to international stability in the coming decades, and what needs to be done today in anticipation of those changes. The closest parallel, in our minds, may well be the massive effort undertaken by the RAND Corporation and others at the dawn of the nuclear era. In that light, we likely find ourselves today where they were in the late 1940s and 1950s: at the very beginning of understanding the implications of the technologies we are building, except that now everything is moving much, much more rapidly.

On June 29, 2018, the Institute for Security and Technology (at the time operating under the name Technology for Global Security) and the Center for Global Security Research hosted a roundtable discussion investigating the potential security implications of AI-related technologies as they are considered for military use. Attendees included academics, research scientists, venture capitalists, and representatives of civil society and industry. The discussion was the first in a series of workshops to better understand the potential role AI will play in international stability and deterrence.

The consensus among the discussants was that these technologies are not currently “ready for primetime” on a number of levels.

  • Assumptions regarding the ability of AI technologies to “predict” are overhyped.
  • Improved capability on a specific task does not transfer to unrelated tasks: the current generation of AI remains limited to constrained environments (which war zones are not), making the deployment of current AI technologies in a military context highly unpredictable.

As one participant explained, “machine learning is still very reactive — it’s just not sustainable.” 

  • The human-machine interface remains an extremely important element in the development of these technologies. This was highlighted in the discussion of the ‘black-box’ problem: it is difficult for the user to understand the exact process by which the technology reaches its conclusions or decisions.
  • The speed at which decisions must be made, especially in wartime, continues to increase, leaving limited room for human intervention or participation and requiring a degree of system predictability well beyond current capabilities.

Read More in our Roundtable Synthesis Report: Roundtable Discussion: AI and Human Decision Making

Follow-up small-group workshops and ongoing engagement with the AI research community in the San Francisco Bay Area led to a white paper detailing the following:

  • Current AI capabilities remain limited to narrow, well-defined domains
  • The ‘black box’ nature of state-of-the-art AI/ML algorithms offers limited insight into their decision-making processes and conclusions
  • Deploying AI could preempt ethical considerations that have yet to be fully understood, identified, or agreed upon, potentially in the context of an industry-driven race to the bottom

Read More in our White Paper: AI and Human Decision-Making: AI and the Battlefield

Background

  • We kicked off our investigatory process in late February 2017 with an event hosted by Cooley LLP at its headquarters in Palo Alto, featuring a panel discussion with Paul Saffo, John Markoff, and Randy Sabett to define the state of play: How much of the noise surrounding AI-related technologies is just hyperbole, what can we really expect to change in the coming years, and what are the real risks?
  • Building on that foundation, a second workshop, hosted by Andreessen Horowitz in May 2017, focused more intently on establishing a baseline understanding of the potential societal implications of AI-related technologies, while placing the international security piece in that much broader context.
  • Our next set of workshops took place in June 2018 in Silicon Valley as part of a joint effort with the Center for Global Security Research at Lawrence Livermore National Laboratory, described in reports released in early 2019. We investigated how policymakers should anticipate AI-related technologies affecting international security, from planning and budgeting to understanding how these technologies will shape the information that informs national security decision making.

From our Library

Reports

Roundtable Discussion: AI and Human Decision Making

Institute for Security and Technology, Center for Global Security Research

AI and Human Decision Making: AI and the Battlefield

Institute for Security and Technology, Center for Global Security Research
