IST Experts Weigh in on President Biden’s Artificial Intelligence National Security Memorandum

We commend the Biden-Harris Administration for releasing its National Security Memorandum on Advancing the United States’ Leadership in Artificial Intelligence; Harnessing Artificial Intelligence to Fulfill National Security Objectives; and Fostering the Safety, Security, and Trustworthiness of Artificial Intelligence. The memorandum marks a significant milestone in shaping the future of AI policy, strengthening the intersection of AI and national security, and reinforcing U.S. technological leadership.

IST has long supported the nation’s security and technology objectives through our multi-stakeholder work on the use of AI and machine learning in nuclear command, control, and communications (NC3); AI risk reduction; and AI in cybersecurity. IST’s December 2023 report entitled How Does Access Impact Risk categorized six AI risks and assessed how they change over a gradient of access. In the subsequent report, A Lifecycle Approach to AI Risk Reduction, IST completed a deep dive into the risk of malicious use and proposed numerous risk reduction strategies mapped to seven distinct stages in the AI lifecycle. Most recently, IST published a report entitled The Implications of AI in Cybersecurity, which draws five conclusions on the state of play and future outlook across a range of cyber defensive and offensive use cases, and makes seven recommendations.

Given this perspective, we offer the following observations and reflections on the first National Security Memorandum on AI:

  • Safeguarding frontier AI from foreign intelligence threats. IST’s leadership draws on deep national security experience and has a keen appreciation for the value of AI innovation to the nation’s adversaries and competitors. In our firsthand experience, many AI innovators, particularly early-stage startups, lack a full understanding of this threatscape and an appropriate security posture. We commend the memorandum’s focus on identifying and protecting critical nodes within the AI supply chain and critical technical artifacts that, if misappropriated, would lower the cost to bad actors of “recreating, attaining, or using powerful AI capabilities” (see §3.2(ii) and (ii)(c)). We encourage direct engagement between the Federal Bureau of Investigation and these critical nodes to raise threat awareness and establish appropriate security and counterintelligence measures.
  • A structured approach toward AI risk management. In order to realize the benefits of AI and manage its risks, one must first understand and define the bad outcomes to control for. IST’s report How Does Access Impact Risk did just that, identifying the risks as malicious use, compliance failure, enforcing bias, capability overhang, fueling a race to the bottom, and taking the human out of the loop. We applaud the memorandum for specifying nine risks that covered agencies must mitigate (see §4.2(c)(i)(A-I)). While our risk categories are not identical to the memorandum’s, they correlate closely. IST will evaluate the memorandum’s categories and their framing as relevant to federal government missions, and consider updating our work to fill any risk reduction gaps.
  • Evaluating higher-risk frontier AI use cases. The memorandum calls attention to several AI use cases requiring special attention and assessments, to wit: cybersecurity (including a model’s “capacity to detect, generate, and/or exacerbate offensive cyber threats” and “code generation”), biosecurity, chemical weapons, nuclear and radiological risks, and system autonomy (see §3.3, various subparagraphs). IST’s report The Implications of AI in Cybersecurity explored AI’s current utility in several of these cybersecurity use cases. Our future work will inevitably explore the role of AI agents in numerous applications, which implicates the memorandum’s concern for system autonomy and IST’s identified risk of taking the “human out of the loop.” These are areas in need of further research.
  • Governance within U.S. government agencies and harmonizing across them. The memorandum calls for each covered agency to designate a Chief AI Officer and form an AI Governance Board to manage, govern, and coordinate the agency’s use of AI (see §4.2(e)(ii)(A-B)). These officers will together constitute the AI National Security Coordination Group, which will harmonize policies relating to the use of AI in national security systems (see §6(b)(ii)). This approach is a useful construct that might be emulated by any organization looking to responsibly integrate AI into its internal operations or customer-facing products.

IST recently convened a diverse group of stakeholders from industry and academia, along with policymakers from both sides of the aisle, to discuss the implications of AI in cybersecurity. We were heartened by the quality of discourse and the genuine interest in this important but specialized topic; cybersecurity and AI remain issues ripe for bipartisan cooperation. In light of the upcoming election, we encourage the incoming Administration to carefully consider and retain the key elements highlighted above. Together, we can advance a secure and innovative future for our nation.