Virtual Library

Our virtual library is an online repository of all of the reports, papers, and briefings that IST has produced, as well as works that have influenced our thinking.

Op-ed

ROOST Reminds Us Why Open Source Tools Matter

Reports

Navigating AI Compliance, Part 2: Risk Mitigation Strategies for Safeguarding Against Future Failures

Mariami Tkeshelashvili, Tiffany Saade

Reports

Deterring the Abuse of U.S. IaaS Products: Recommendations for a Consortium Approach

Steve Kelly, Tiffany Saade

Podcasts

TechnologIST Talks: Looking Back and Looking Ahead: Deep Dive on the New Cybersecurity Executive Order

Carole House, Megan Stifel, and Steve Kelly

Podcasts

TechnologIST Talks: The Offense-Defense Balance

Philip Reiner and Heather Adkins

Reports

The Generative Identity Initiative: Exploring Generative AI’s Impact on Cognition, Society, and the Future

Gabrielle Tran, Eric Davis

Podcasts

TechnologIST Talks: A Transatlantic Perspective on Quantum Tech

Megan Stifel and Markus Pflitsch

Contribute to our Library!

We welcome suggestions from readers and will consider adding further resources; much of our work has already come about through crowd-sourced collaboration. If you are an author whose work is listed here and you do not wish it to be included in our repository, please let us know.

SUBMIT CONTENT

A Lifecycle Approach to AI Risk Reduction: Tackling the Risk of Malicious Use Amid Implications of Openness

Louie Kangeter

SUMMARY

The rapid advancement and proliferation of artificial intelligence (AI) technologies have brought forth myriad opportunities and challenges, necessitating the development of comprehensive risk mitigation strategies. Building on IST’s December 2023 report, How Does Access Impact Risk? Assessing AI Foundation Model Risk Along a Gradient of Access—which evaluated six categories of AI risk across seven levels of model access—this report provides policymakers and regulators with a robust framework for addressing these complex risks.

The report establishes five guiding principles that serve as the foundation for the proposed risk mitigation strategies: balancing innovation and risk aversion, fostering shared responsibility among stakeholders, maintaining a commitment to accuracy, developing practicable regulation, and creating adaptable and continuous oversight. 

Central to the report is the AI Lifecycle Framework, which builds on working group contributions, notably the “upstream/downstream” framing of risks and mitigations, and breaks down the complex process of AI development into seven distinct stages: data collection and preprocessing, model architecture, model training and evaluation, model deployment, model application, user interaction, and ongoing monitoring and maintenance. By identifying the most effective points for implementing risk mitigations within each stage, the framework enables targeted interventions that align with the guiding principles.

To demonstrate the application of the AI Lifecycle Framework, the report conducts a deep dive into malicious use—one of the risks identified in the December 2023 report as negatively influenced by an increased gradient of model openness—examining five key areas: fraud and crime schemes, the undermining of social cohesion and democratic processes, human rights abuses, disruption of critical infrastructure, and state conflict. The analysis considers the historical context, current state-of-play, and outlook associated with each area.

Applying the AI Lifecycle Framework to malicious use risks reveals a range of effective risk mitigation strategies at each stage of the AI lifecycle. These strategies encompass both policy and technical interventions, such as introducing incentives for ethical data collection practices, developing secure model architectures, and implementing human oversight in high-risk AI applications. Additionally, the report acknowledges the limitations and challenges of risk mitigation throughout the gradient of open access models and emphasizes the need for ongoing research, collaboration, and adaptation.

The report concludes with a call for continued exploration of risk mitigation strategies across other risk categories and collaboration among stakeholders to refine and implement the proposed strategies. By proactively addressing AI risks while fostering innovation, the AI Lifecycle Framework serves as a valuable tool for guiding effective risk mitigation efforts in the face of rapid technological advancements.
