Navigating AI Compliance, Part 1: Tracing Failure Patterns in History
Mariami Tkeshelashvili, Tiffany Saade
SUMMARY
History often rhymes and echoes through the present and future. Through this lens, we examine past compliance failures across various industries, from nuclear power to financial services, to illuminate potential pitfalls in the AI ecosystem, offering definitions, frameworks, and lessons learned to help AI builders and users navigate today’s complex compliance landscape.
Our analysis of eleven case studies from AI-adjacent industries reveals three distinct categories of failure: institutional, procedural, and performance. Institutional failures stem from a lack of executive commitment to creating a culture of compliance, establishing necessary policies, or empowering success through the organizational structure, making such failures foreseeable. Procedural failures, in turn, result from a misalignment between an institution’s established policies and the internal procedures and staff training required to adhere to those policies. Finally, performance failures occur when an employee fails to follow an established process, or an automated system fails to perform as intended, producing an undesirable result.
By studying failures across sectors, we uncover critical lessons about risk assessment, safety protocols, and oversight mechanisms that can guide AI innovators in this era of rapid development. One of the most prominent risks is the tendency to prioritize rapid innovation and market dominance over safety. The case studies demonstrate a clear need for transparency, robust third-party verification and evaluation, and comprehensive data governance practices, among other safety measures. Additionally, by investigating ongoing litigation against companies that deploy AI systems, we highlight the importance of proactively implementing measures that ensure safe, secure, and responsible AI development. Recent court cases teach a crucial lesson: compliance with privacy, anti-discrimination, and transparency laws must be foundational, not an afterthought.
Though today’s AI regulatory landscape remains fragmented, we identify five main sources of AI governance (laws and regulations, guidance, norms, standards, and organizational policies) to provide AI builders and users with a clear direction for the safe, secure, and responsible development of AI. In the absence of comprehensive, AI-focused federal legislation in the United States, we define compliance failure in the AI ecosystem as the failure to align with existing laws, government-issued guidance, globally accepted norms, standards, voluntary commitments, and organizational policies, whether publicly announced or confidential, that focus on responsible AI governance.
The report concludes by addressing AI’s unique compliance issues stemming from its ongoing evolution and complexity. Ambiguous definitions of AI safety and the rapid pace of development challenge efforts to govern AI, and may even impede its adoption across regulated industries; problems with interpretability hinder the development of compliance mechanisms, and AI agents blur the lines of liability in the automated world. As organizations face risks ranging from minor infractions to catastrophic failures that could ripple across sectors, the stakes for effective oversight grow higher. Without proper safeguards, we risk eroding public trust in AI and creating industry practices that favor speed over safety, ultimately affecting innovation and society far beyond the AI sector itself. As history teaches us, highly complex systems are prone to a wide array of failures. We must look to the past to learn from these failures and avoid similar mistakes as we build the ever more powerful AI systems of the future.