As the 21st-century geopolitical balance shifts in uncertain ways, there is increasing eagerness to deploy AI technologies on both physical and digital battlefields to gain tactical and strategic advantage over adversaries. However, the nature of increasingly powerful and unpredictable AI demands a measured, balanced approach: these tools should not be deployed before their limitations, risks, and vulnerabilities are fully understood and addressed. Indeed, on a number of levels, these technologies may not yet be "ready for primetime."
We begin this discussion, intended as the first in a series of posts on this domain of issues, by focusing on the following points. This initial paper is based on numerous small-group workshops and ongoing engagement with the AI research community in the San Francisco Bay Area.
- Current AI capabilities remain limited to narrow, well-defined domains.
- The “black box” nature of state-of-the-art AI/ML algorithms offers limited insight into their decision-making processes and conclusions.
- Deploying AI systems could preempt ethical considerations that have yet to be fully identified, understood, or agreed upon, potentially in the context of an industry-driven race to the bottom.
The Institute for Security and Technology (IST) is a 501(c)(3) critical action think tank that unites technology and policy leaders to create solutions to emerging security challenges. IST stands at the forefront of convening policymakers, technology experts, and industry leaders to translate discourse into impact.
The Center for Global Security Research (CGSR) was established at Lawrence Livermore National Laboratory (LLNL) in 1996 to serve as a bridge between the science, technology, and national security policy communities. It focuses on emerging national security challenges in the areas of deterrence, assurance, and strategic stability.