AI is advancing faster than our ability to govern it. As we move toward artificial general intelligence (AGI), and possibly superintelligence (ASI), these systems stand to reshape global security. Despite a growing discourse on AI and security, there remains limited empirical research on how national security actors perceive, assess, and prepare for the technological shifts and emerging risks associated with the potential emergence of AGI and ASI.
That’s why the Institute for Security and Technology (IST), with support from and in partnership with the Future of Life Institute (FLI), is launching the AI Risk Barometer project. The effort is inspired by the work of Nobel Laureate Arthur Compton, who during the Manhattan Project calculated a “Compton Constant” for a potentially catastrophic nuclear accident during a test. Then, as now, there are no clear right answers, and the project seeks to learn from leading national security stakeholders how they view the risks and opportunities of developing ever more powerful AI. This new effort seeks to elucidate AGI and ASI capability thresholds; potential benefits and harms, including catastrophic AI loss-of-control scenarios; timelines; the efficacy of potential governance approaches to mitigate risk; and policymakers’ risk appetites given the tradeoffs.
“National security leaders and AI researchers aren’t speaking the same language about the risks posed by cutting-edge AI,” said IST Deputy Director for AI Security Policy Mariami Tkeshelashvili. “Policymakers need clear-eyed, evidence-based insights on AI risks and on the governance tools that can buy down those risks without stifling innovation.”
“The AI Risk Barometer is a first-of-its-kind effort to capture how national security professionals themselves view the dangers of advanced AI. By systematically surveying policymakers, military planners, and technical experts, this project will provide a clearer picture of how those on the frontlines of national security are thinking about timelines, thresholds, and loss-of-control scenarios as we move toward AGI and beyond,” said Hamza Chaudhry, FLI’s AI and National Security Lead. “At FLI, we believe that grounding AI governance in the perspectives of the very people tasked with preventing catastrophe is essential if we are to align policy with the scale of the risks ahead.”
To guide these efforts, IST envisions a democratic world secured and empowered by technology built on trust. Through the AI Risk Barometer project, we strive to move beyond abstract debates and ground AI governance in empirical evidence, helping to prevent the unthinkable and shape a safer, more secure future. IST has been working at the intersection of AI and national security since 2017, addressing topics such as accelerating multi-stakeholder coordination to mitigate the emerging risks of AI; the risks and opportunities of integrating AI into nuclear command, control, and communications (NC3); the implications of AI for cybersecurity; and AI’s impact on national security and global stability. Through our Strategic Balancing Initiative, IST has also explored ways to overcome public-private misalignments in the technology development ecosystem in order to accelerate American competitiveness with the People’s Republic of China, focusing in particular on the biotech, quantum, and energy sectors.

This effort also reflects FLI’s commitment to educating policymakers on the national security implications of developing AGI. By supporting rigorous, empirical work like the AI Risk Barometer, FLI seeks to bridge the gap between technical research and the policy decisions that will determine whether advanced AI strengthens or destabilizes global security.
###
About FLI:
Founded in 2014, the Future of Life Institute (FLI) is a leading non-partisan AI policy think tank working to steer transformative technology towards benefiting humanity. FLI’s team has testified before the Senate, briefed the House AI Task Force, and held briefings with agency and Congressional leadership on a range of AI and national security issues, including AI integration into nuclear command and control, AI and cybersecurity, AI-military integration, AI and biosecurity, and securing the software and hardware underpinning AI models.
About IST:
Technology has the potential to unlock greater knowledge, enhance our collective capabilities, and create new opportunities for growth and innovation. However, insecure, negligent, or exploitative technological advancements can threaten global security and stability. Anticipating these issues and guiding the development of trustworthy technology is essential to preserve what we all value.
The Institute for Security and Technology (IST), the 501(c)(3) critical action think tank, stands at the forefront of this imperative, uniting policymakers, technology experts, and industry leaders to translate discourse into impact. We take collaborative action to advance national security and global stability through technology built on trust, guiding businesses and governments with hands-on expertise, in-depth analysis, and a global network.