On Monday, June 17, IST hosted a virtual discussion on the risks that AI technologies present, the myriad policies that have emerged to govern them, and the mitigation strategies that might most effectively address those risks.
Date:
Monday, June 17, 2024
11 am PT / 2 pm ET
About the AI Foundation Model Access Initiative:
As part of its mission to address complex security issues at the forefront of technological innovation, the Institute for Security and Technology (IST) is leading an effort to study the ways in which increased access to cutting-edge AI foundation models, across a gradient of access from fully closed to fully open, drives risk and enables opportunity.
About the Event
The rapid advancement and proliferation of AI technologies have brought forth myriad opportunities and challenges, leading to a flurry of regulatory activity in the United States, United Kingdom, European Union, and beyond. But what risks, precisely, are these efforts correcting for? And what mitigation strategies might be effective?
In December, IST sought to answer the first question in its seminal report entitled How Does Access Impact Risk? Assessing AI Foundation Model Risk Along a Gradient of Access. In its latest report, entitled A Lifecycle Approach to AI Risk Reduction: Tackling the Risk of Malicious Use Amid Implications of Openness, IST and its multi-stakeholder working group take on one of the six identified risk categories, malicious use, and propose mitigation strategies mapped to the appropriate phases of the AI lifecycle.
On Monday, June 17, IST hosted a webinar with the report’s author and other experts to discuss its recommendations and the topic of AI risk reduction more broadly.
Speakers
Mariami Tkeshelashvili, Senior Associate for AI Security Policy, IST (Moderator)
Steve Kelly, Chief Trust Officer, IST (Introductory remarks)
Louie Kangeter, AI Policy Associate, IST
Valerie Wirtschafter, Fellow, Artificial Intelligence and Emerging Technology Initiative, Brookings
Duncan Eddy, Executive Director, Stanford Center for AI Safety
Andrew Bevz, CMO, Osavul
About the Participants
Mariami Tkeshelashvili is a Senior Associate for Artificial Intelligence Security Policy at the Institute for Security and Technology (IST), where she leads the AI Foundation Model Access Initiative and works on other IST projects related to AI and cybersecurity and the geopolitics of technology. Mariami is also a Fellow at the Johns Hopkins University Emerging Technologies Initiative, where she explores transformative technologies like AI, biotech, and quantum, and their profound implications for global affairs. Mariami holds a master’s degree from the Johns Hopkins University School of Advanced International Studies with a focus on Technology and Innovation. Before joining IST, she worked on transatlantic technology policy issues at the Center for European Policy Analysis. Mariami previously assisted organizations in building media literacy skills to combat online malign influence and managed USAID-funded projects at the National Democratic Institute on topics including inclusive policymaking, crisis management, and electoral integrity. She holds a bachelor’s degree in social and political sciences, focusing on regional studies in Eurasia and the Middle East, and has done extensive research in Germany and Czechia on NATO, great power competition, and international security.
Louie Kangeter is an Artificial Intelligence Policy Associate at the Institute for Security and Technology (IST). He has experience in policy, market, and legal research, focusing on emerging technologies like AI. Most recently, Louie served as a Special Consultant on AI to California Attorney General Rob Bonta, where he led an initiative to identify and assess the risks posed by AI integration in criminal activities and to develop frameworks for assessing AI’s impact on law enforcement actions and processes. Prior to his time at the California Department of Justice, Louie worked to integrate AI systems and robotics into sustainable aluminum manufacturing processes at CASS, Inc. Louie graduated from Emory University with a BA in Political Science in 2020.
Valerie Wirtschafter is a fellow in Foreign Policy and the Artificial Intelligence and Emerging Technology Initiative at Brookings. Her research falls into two thematic areas: (1) democratic resilience and democratic erosion; and (2) artificial intelligence, technology, and the information space. Using a data-driven approach, Wirtschafter’s work has helped to reframe discussions around underexplored media and novel challenges to the information space, provided new tools and methods for academic and policy research, and reshaped policy at leading tech companies.
Her academic and policy research utilizes novel data-driven strategies and methods to explore the reach of foreign influence operations, the impact of content moderation and algorithms, and the scope of AI-generated content across the new media information ecosystem, including social media platforms, search engines, and podcasting. Her research has been featured in the New York Times, the Washington Post, the Wall Street Journal, PBS NewsHour, the BBC, Reuters, the Associated Press, and elsewhere.
Wirtschafter also oversees the adoption of data science methods and emerging technologies across Brookings. In this capacity, she co-led the development of Brookings’s recently adopted provisional principles for the use of generative AI, work that is helping set the standard across the think tank space.
Wirtschafter received her doctorate in political science from the University of California, Los Angeles in 2021. She has designed and taught courses on international politics in the digital age and lectures on the role of data-driven analysis in policy research. Prior to her doctoral training, she worked as a researcher at the Council on Foreign Relations and as a consultant focused on global health and development issues. She is a member of the Christchurch Call Advisory Network’s Core Committee, the Global Internet Forum to Counter Terrorism’s Hash Sharing Working Group, and the American Political Science Association’s Centennial Center Advisory Board.
Duncan Eddy is a research fellow in the Stanford University Department of Aeronautics and Astronautics. He completed his PhD in Aerospace Engineering at Stanford, funded by the National Defense Science and Engineering Graduate Fellowship. His current research focuses on decision-making in safety-critical, climate, and space systems, where operational decisions must be made safely and correctly while remaining explainable and usable by human stakeholders.
He is currently the Executive Director of the Stanford Center for AI Safety, and a post-doctoral researcher with appointments in Mineral-X and the Stanford Intelligent Systems Laboratory (SISL).
Prior to this, he started and led the Spacecraft Operations Group at Capella Space, the first US commercial synthetic aperture radar Earth imaging constellation. There he developed the system that enabled the first fully automated collection-to-delivery of radar satellite imagery from a commercial constellation. He subsequently started and led the Constellation Operations and Space Safety Groups at Project Kuiper. Most recently, he was a Principal Applied Scientist at Amazon Web Services, where he worked on building software services for large-scale distributed edge compute applications.
Andrew Bevz, CMO of Osavul, is passionate about using innovative marketing strategies to drive awareness and adoption of cutting-edge AI-driven technologies. Drawing on his experience in franchising, multinational companies’ reputation management, and governmental sector consulting, he understands how crucial narrative analysis is for businesses, organizations, and governments. Andrew is focused on helping organizations fight disinformation and monitor malicious actors, work that has informed real-life cases, trainings, and policies.
Steve Kelly is Chief Trust Officer at the Institute for Security and Technology (IST). In this role, he leads IST’s efforts to advance the trust, safety, and security of artificial intelligence and helps guide other aspects of the organization’s work. Steve comes to IST after serving on the National Security Council (NSC) staff as Special Assistant to the President and Senior Director for Cybersecurity and Emerging Technology, and after retiring from the FBI as a supervisory special agent.