IST commends the Biden-Harris Administration for this week releasing its Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence. The American people, and the broader global community, are justifiably captivated by the potential of these rapidly advancing technologies, which promise a new era of efficiency and scientific breakthroughs. But with these advancements come risks that must be managed. IST looks forward to playing a catalyzing role in helping identify gaps, shape solutions, and inform the intense work that still needs to be done.
IST has a proven track record of tackling the safety and security implications of novel Artificial Intelligence (AI) capabilities. Since 2015, IST has completed a series of multistakeholder efforts on the risks posed by the integration of AI and machine learning into nuclear command, control, and communications (NC3), supported by the Lawrence Livermore National Laboratory and the State Department. These included deliverables on AI and strategic stability, AI in human decision-making, and AI safety solutions and confidence building measures.
Consistent with our mission to build bridges between technologists and policymakers on emerging security challenges, IST will support the order’s implementation, including through ongoing and planned efforts on the following issues:
- Section 4.2(a) of this order seeks to safeguard highly capable “dual-use” foundation models against espionage or digital subversion, and Section 4.6 seeks to understand the implications of models with “widely available weights.” IST has been studying the risks and opportunities of open access and open source model components through a multistakeholder working group, and will soon publish a matrix assessing risk across a gradient of access, from fully closed to fully open, to inform both AI labs and policymakers in managing public safety and national security risks.
- IST has been closely tracking the Administration’s ongoing implementation of E.O. 13984, which addresses foreign malicious cyber actors’ use of domestic Infrastructure as a Service (IaaS) products, and notes that Section 4.2(c) of this new order requires similar due diligence with regard to AI training runs. As both orders allow IaaS providers to propose alternative means of managing the specified risks, we look forward to assisting our industry partners in developing innovative approaches through our Applied Trust & Safety initiative.
- IST applauds the order’s attention to managing AI in critical infrastructure and cybersecurity (Section 4.3), as this technology has the potential to greatly improve defenses and optimize critical infrastructure functions. But it can also be put to use by malicious state and non-state actors. IST is launching a collaborative effort to establish ground truth on how AI is currently being used on both fronts, and where those uses are headed. We are also highly interested in understanding how AI may be safely implemented in critical infrastructure controls, particularly in light of the nation’s rapidly growing distributed green energy deployments. We will soon announce the first steps of this initiative with our collaborators.
About IST
The Institute for Security and Technology (IST) is the 501(c)(3) critical action think tank that unites technology and policy leaders to create solutions to emerging security challenges. IST stands at the forefront of convening policymakers, technology experts, and industry leaders to translate discourse into impact. We take collaborative action to advance national security and global stability through technology built on trust, guiding businesses and governments with hands-on expertise, in-depth analysis, and a global network.