Artificial Intelligence

Catalyzing Security in AI Governance

“The” AI Spring?

Years from now, we will likely view the spring of 2023 as “the” AI Spring, distinct from the oft-discussed AI Springs of the past. Whereas the use of social media and enhanced Internet connectivity during the Arab Spring helped enable political movements years in the making, the current AI Spring reflects a massive acceleration of technological advances that have likewise been decades in the making. In this case, governments, businesses, and individuals are grappling with the likely profound impact that AI will have, while acknowledging that the extent of the impact remains in many ways unpredictable.

Above all, we must collectively approach the current, markedly more impactful AI Spring with a commitment to catalyzing the role of security in AI governance, thereby contributing to digital sustainability. It remains to be seen how emerging governance approaches will impact the nascent AI revolution, particularly in terms of AI’s ramifications for equitable and sustainable digital security. At IST we have been observing these recent developments with interest; conversations around AI and governance align with our own mission to harness opportunities enabled by emerging technologies while also mitigating their attendant risks. We believe this work will support digital sustainability, where all stakeholders—public and private—recognize their role in innovating and building a more secure and trustworthy digitally-enabled future. We have been paying particular attention to the security of AI, noting the impact of market incentives that historically have allowed AI developers and companies to treat security as an afterthought. We agree with those who identify safety and security as core elements of AI governance, supported by robust transparency around efforts taken to advance these priorities. We intend to champion these elements of the discourse. This blog outlines some of our early thinking and approach.

Current Approaches to AI Governance

In the current public policy conversations on AI, several “camps” have emerged that share considerable common ground on governance, with most seeking to avert a suboptimal AI-enabled future. One camp includes concerned leading technical researchers, who, by way of example, recently called for a six-month pause on “giant AI experiments,” urging that stakeholders “jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts” during that time. This camp also called for AI research and design efforts to refocus on making systems “more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal.” More recently, many leading researchers released a statement highlighting their perspectives on the most urgent advanced AI risks and pressed decision makers and citizens alike to recognize these risks at a societal scale. The security risks posed by these AI capabilities are not well understood, and policy decisions remain challenging without a proper technical understanding of the threat. Entities such as the OECD, the Global Partnership on AI, the UN, and the EU have already undertaken efforts to quantify the risks posed by emerging AI technologies; going forward, it will be important to leverage these existing avenues for discourse.

An additional camp, including many cybersecurity professionals, has been raising concerns about the security of AI systems themselves. The risks they have identified take two forms: the risks these systems could pose as a result of actors who alter them for malicious purposes, and the risks resulting from insufficient measures to anticipate AI’s misuse as a tool in the existing cybercrime landscape or for other criminal purposes. The OpenAI data breach brought additional visibility to the need for security as a cornerstone of AI governance. CISA Director Jen Easterly echoed this sentiment in her recent speech. Members of Congress, including Senators Warner and Hawley, have also placed a heavy emphasis on security.

AI companies represent a third camp in the current dialogue. While they may not be completely aligned on priorities for AI governance, they have engaged actively in the conversation, sharing their perspectives and working to shape its regulatory dimensions, not only in their visits to Washington but also in their public commentary. Google, for example, put forth a policy agenda for responsible AI progress, highlighting the need for collaborative efforts across government, industry, and civil society to help translate technological breakthroughs into widespread benefits while mitigating risks. The agenda encourages governments to focus on “unlocking opportunity, promoting responsibility, and enhancing security.” In a blog post, Microsoft similarly outlined a five-point blueprint for AI governance centered on principles like safety, transparency, and public-private partnerships. In another example, OpenAI CEO Sam Altman endorsed the idea of a new federal agency to oversee AI and urged the government to develop stronger safety rules for AI, including through licensing standards and changes to Section 230 of the Communications Decency Act, which provides liability protections for Internet platforms.

In the wake of these developments, the Biden Administration has identified three key areas of focus in its recent discussions: the need for corporate transparency surrounding AI systems; the ability to evaluate, verify, and validate the safety, security, and efficacy of AI systems; and the need for sufficient security to prevent abuse and attacks by malicious actors. On May 23, 2023, the Administration offered additional insights into its perspective and opportunities for interested parties to engage in the dialogue. Importantly, topics covered in its request for information (RFI) include national security, as well as efforts to bolster innovation and democracy. It is on these issues that we at IST intend to focus, leveraging our own experience on AI and other digital security issues.

IST’s Established Record

Our focus on AI is not new. IST has a proven track record of tackling artificial intelligence and machine learning (ML) challenges through a number of different lenses. Our most recent project examined the strategic stability risks posed by integrating AI and ML technologies with nuclear command, control, and communications (NC3) systems across the globe. This multistakeholder analysis focused on the imperative to manage and mitigate the risks posed by AI-enabled emerging technologies. Project leaders examined the use of a suite of policy tools in the nuclear context, from unilateral AI principles and codes of conduct to multilateral consensus about the appropriate applications of AI systems.

The project found that while there are numerous conversations among academics concerning the potential regulation of compute power, data centers, data, and human capital as proxies for AI capabilities, it remains unclear whether future governance arrangements are best oriented toward the technologies themselves or the ways in which they are used (i.e., the use cases). We concluded that exercises to clarify the costs and benefits of AI-NC3 integration, with engagement from both public and private sector institutions, have an important role to play in these conversations, particularly given the proliferation of abstract claims in both the technical and policy fields.

Our work on Digital Cognition and Democracy (DCDI) and the Ransomware Task Force (RTF) also informs our thinking on AI. DCDI took an escalating, three-tiered approach to examining how digital technologies’ effects on cognitive processes impact users and democracy. Key among our DCDI findings is that 1) digital technologies affect and manipulate cognition, and 2) technologies that outsource cognitive functions like memory can dramatically impact our metacognition, undermining our ability to reduce our susceptibility to disinformation and affective polarization.

To address these findings, DCDI proposed a framework that identifies 12 risks emerging from 4 main features of technology in our increasingly digital world: 1) Design and Gamification; 2) Unnaturally Immersive and Easy Experience; 3) Lack of Friction; and 4) Information Overload. It is through the identification of these specific risks within broad, technology-driven domains that focused efforts can mitigate the threats to democracy we see today.

Similarly, in standing up the RTF, we convened over 60 organizations to develop multistakeholder-based recommendations to combat ransomware, a top national security threat that, in addition to technical approaches, also requires governance-oriented solutions. The Task Force identified 48 recommendations to help organizations better prepare for, respond to, disrupt, and deter ransomware. In the two years since the April 2021 report’s release, 92% of the recommendations have seen progress, with 50% seeing significant progress, including through legislation and improved public-private collaboration. Among the Task Force’s observations was that vulnerabilities in underlying software, together with human error, are key to ransomware’s success; closing these gaps remains a high-priority challenge in reducing the ransomware threat and, as with most digital governance, requires a multistakeholder approach.

Trust, Safety, and Security: Essential Elements of AI and Digital Sustainability 

Observing this AI Spring with the perspectives developed through our recent work, we think it is essential to emphasize the importance of trust, safety, and security. Above all, we recognize that there are some topics in today’s AI conversations that need additional volume and support, while bearing in mind the impact current market forces can have on the direction of these conversations. In using our voice on these topics, we will examine carefully and critically the perspectives of each camp, strive for honest and inclusive discourse, and offer solutions that better account for trust, safety, and security as the AI seasons progress and evolve. At the end of the day, digital security is a critical enabler of most societal priorities—public and private. At IST, we will use our voice to ensure its consistent consideration in public discourse and across industry and governments in order to support a more sustainable digital future.