AI Antitrust and National Security

Exploring how to more effectively account for national security considerations in AI antitrust cases while respecting precedent, scope, and the core principles of antitrust law

In contrast to prior defense technologies, research and development of artificial intelligence, including work with growing national security applications, is driven by a small number of private-sector firms. As these technologies scale and find use in security and defense contexts, antitrust scrutiny aimed at maintaining competitive markets may increasingly intersect with national security priorities.

In this environment, familiar antitrust remedies, such as breakups, interoperability mandates, or data-sharing requirements, may have unintended consequences if they disrupt the systems or oversight processes critical to national security. This is especially relevant given AI's distinctive risks: emergent autonomy, rapid scalability, and dual-use potential across civilian and defense domains.

In short, although antitrust law is neither intended nor appropriate as a vehicle for addressing national security, the remedies applied to AI labs can nonetheless have unintended security consequences.

The Institute for Security and Technology (IST) is leading research at this intersection, asking:

  • What does a principled antitrust remedy framework for AI labs look like—one that respects established legal precedents while addressing the distinctive national security risks of advanced AI systems?
  • What practical considerations should shape remedy design, and when should national security be a deciding factor? What are the most feasible paths forward?

This is a forward-looking effort to anticipate how legal, technological, and national security discussions around AI may evolve. We do not seek to alter the standards for establishing antitrust liability; our focus is on remedies in successful cases, and on how such remedies may be more systematically organized to preserve competition while acknowledging points of intersection with security concerns.

The project does not take a position on optimal AI market structure, nor does it suggest that firms with national security relevance should be exempt from scrutiny. It also does not propose that security should override economic concerns. Instead, this is an exploratory exercise aimed at understanding where remedies and security considerations overlap, and how these implications might be navigated responsibly.

AI Antitrust and National Security Team

Philip Reiner

Chief Executive Officer

Steven M. Kelly

Chief Trust Officer

Gabrielle Tran

Senior Associate for Technology and Society