Artificial Intelligence

Why I support Microsoft’s call to action in “Protecting the Public from Abusive AI-Generated Content”

By Steve Kelly on July 30, 2024

As a retired FBI Special Agent who investigated online child sexual exploitation and identity crimes earlier in my career, I write to express my appreciation to Microsoft for shining a light on the important issue of abusive AI-generated content and joining a growing chorus of voices calling for action.

The public emergence of the world wide web in 1994 and the rapid adoption of internet services quickly transformed traditional crimes such as child sexual abuse, stalking, extortion, and fraud. The recent advent of widely available generative AI capabilities is again changing the game, leading to novel problems such as synthetic child sexual abuse material (CSAM); non-consensual intimate imagery; and misleading deepfake images, audio, and video that can fuel fraud schemes and interfere in our democratic processes. These capabilities are here to stay and will increasingly be part of our lives. As we embrace AI’s promise, we must not be afraid to boldly confront and manage its risks.

Since retiring from government service one year ago, I have led AI and Trust & Safety efforts at the Institute for Security and Technology (IST), a 501(c)(3) non-profit building bridges between technologists and policymakers to tackle emerging security problems. IST’s work—and that of our colleagues across think tanks, academic institutions, and industry—has given me a new appreciation for the opportunities and risks of AI. Therefore, I would like to offer my thoughts on several topics examined in the white paper:

  • Criminalizing non-consensual intimate imagery (NCII). I support criminalizing the creation and knowing dissemination of NCII, whether synthetic or not, under federal and state law, and creating a private right of action so that victims can seek relief. These dastardly acts affect real victims and cause real harm. In my view, this abuse will continue to accelerate until perpetrators begin experiencing consequences.
  • Forming new public-private partnerships to investigate cases. Enacting new NCII criminal statutes will require more investigators and prosecutors to work the resulting cases. Recognizing constrained public-sector budgets, the growing volume of victim reports, and the immutable fact that an unenforced criminal statute has no deterrent effect, we need to be creative in how the public and private sectors might join forces to address these challenges. For broader schemes, one might imagine a national consortium resembling the National Center for Missing and Exploited Children (NCMEC), bringing key technology stakeholders and federal law enforcement together. For local, victim-specific investigations and services, a network of assistance centers is needed to issue preservation requests and compulsory process, quickly identify subjects, and intervene at the local level. I recognize that any such public-private model would be exceedingly complicated to devise and authorize, but I firmly believe it would be worth the effort.
  • Deterring the creation and use of AI “deepfakes” in certain contexts. Microsoft’s white paper thoughtfully explores the topic of AI deepfakes while recognizing First Amendment-protected activities. I support efforts by Congress and state legislatures to deter the use of deepfakes in sensitive contexts such as elections, and in furtherance of financial fraud, malign foreign influence operations, and other criminal schemes.
  • Sharing risk signals across the ecosystem. As highlighted in our December blog post, “cross-platform spreading” of criminal activities and violative behaviors is a perennial challenge for Trust & Safety practitioners. With the advent of widely available generative AI, cross-industry signal sharing is all the more necessary. Taking lessons from the Information Sharing and Analysis Center (ISAC) model in the cybersecurity industry, the Tech Coalition’s Lantern initiative on CSAM, and other analogous efforts, IST is taking initial steps to catalyze the launch of such a center for Trust & Safety issues and practitioner collaboration.

While I’ve addressed only a few select themes, the white paper helpfully catalogs the breadth of generative AI’s implications for society and the efforts of multiple stakeholders in this area. My colleagues and I will continue to engage on these important topics and welcome opportunities to support, partner, and lead on elements of this urgent work.