Autonomous Agents, Human Consequences: Key Insights from IST’s Workshop on AI Agents & Agency in the Internet Ecosystem

December 19, 2025

AI agents are shaping how decisions are made, how systems behave, and how humans navigate the digital world. To better understand the implications of this shift, the Institute for Security and Technology (IST) recently hosted a closed-door workshop in Washington, D.C. to explore one of the most urgent—and least understood—questions confronting the digital ecosystem: what happens when autonomous AI agents operate with the reach, ubiquity, and speed of the Internet, and how does their presence reshape human agency? Bringing together legal experts, former government officials, technologists, and researchers, the conversation focused on three core challenges that will shape the next phase of agentic governance: identity and attribution; agency and responsibility; and the security implications of increasingly capable agents.

While participants’ views differed on how quickly this landscape is evolving, they aligned on one core insight: we are moving into a world where software acts with growing autonomy, and our ability to identify who is acting, on whose behalf, and with what intent remains deeply limited.

Identity and Attribution — A Traceability Crisis in the Making

The discussion opened with a foundational problem: as agents act with increasing autonomy, how do we accurately determine the source of a given action? Current authentication and logging practices are built on the premise that an identifiable human is carrying out each online action. But agents now operate across tools, platforms, and organizational boundaries, weaving together multi-agent workflows that can obscure the identity of the human or organization behind the activity. 

The result, participants warned, is a growing “traceability crisis.” Autonomous agents can generate actions without a stable identity, persistent provenance, or clear organizational affiliation—complicating investigations and accountability. While promising approaches exist, such as provenance metadata, artifact verification, and enterprise (or “chief of staff”) bots, they remain unevenly adopted, inconsistently implemented, and too brittle for fast-moving and hard-to-reverse actions. 
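
To make the provenance idea concrete, here is a minimal sketch of how metadata might travel with an agent action. The schema, field names, and record_action helper are illustrative assumptions, not an existing standard.

```python
# Illustrative only: a hypothetical provenance record that travels with each
# agent action. The schema and field names are assumptions, not a standard.
import hashlib
import json
import uuid
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    action_id: str                # unique identifier for this action
    agent_id: str                 # stable identifier for the acting agent
    principal: str                # human or organization the agent acts for
    parent_action_id: str | None  # upstream action in a multi-agent workflow
    tool: str                     # tool or API the agent invoked
    payload_hash: str             # hash of the payload, for artifact verification
    timestamp: str                # UTC time of the action, ISO 8601

def record_action(agent_id: str, principal: str, tool: str,
                  payload: dict, parent: str | None = None) -> ProvenanceRecord:
    """Emit a provenance record alongside a single agent action."""
    body = json.dumps(payload, sort_keys=True).encode()
    return ProvenanceRecord(
        action_id=str(uuid.uuid4()),
        agent_id=agent_id,
        principal=principal,
        parent_action_id=parent,
        tool=tool,
        payload_hash=hashlib.sha256(body).hexdigest(),
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
```

Because each record carries a parent_action_id, an investigator could in principle walk a multi-agent workflow back to the originating principal even when the work crossed platform boundaries.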

Determining identity and intent remains challenging even for humans, and the rise of autonomous agents compounds this issue. With industry consensus on “good practice” unlikely to emerge quickly, participants emphasized the need for pragmatic, interoperable minimum standards for authentication and accountability that can function across platforms and sectors. Over time, probabilistic attribution may become an essential tool for assessing identity and intent in increasingly complex, multi-agent environments.
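
As a rough illustration of what probabilistic attribution could look like, the sketch below combines independent signals into a posterior estimate using a naive Bayes-style odds update. The prior, the signals, and their likelihood ratios are invented for the example.

```python
# Illustrative only: combining independent attribution signals with a
# naive Bayes-style odds update. Signals and likelihood ratios are invented.

def posterior(prior: float, likelihood_ratios: list[float]) -> float:
    """Estimate P(action came from the suspected principal).

    Each ratio is P(signal | suspected principal) / P(signal | anyone else);
    ratios above 1 strengthen the attribution, below 1 weaken it.
    """
    odds = prior / (1.0 - prior)
    for ratio in likelihood_ratios:
        odds *= ratio
    return odds / (1.0 + odds)

# Example: a 10% prior, then three signals -- a matching API credential
# (strong), a familiar tool-call pattern (moderate), and activity at an
# unusual hour (weakly against).
print(f"{posterior(0.10, [12.0, 3.0, 0.6]):.2f}")  # ~0.71
```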

Several participants also surfaced the importance of building “responsibility anchors” into agentic systems—credible identity markers, auditable chains of custody, and defensible assurance mechanisms that travel with an agent as it moves across systems. For the purposes of this article, we use the term “responsibility anchors” to refer to the technical and procedural controls that ensure an agent’s actions remain traceable, auditable, and attributable as it interacts with systems.
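
One plausible technical shape for a responsibility anchor is a tamper-evident, hash-chained audit log that travels with the agent. The sketch below is a minimal illustration under simplifying assumptions: a real deployment would sign entries with per-agent keys tied to verified identities, rather than the shared-secret HMAC used here.

```python
# Illustrative only: a tamper-evident, hash-chained log of agent actions.
# A real system would sign entries with per-agent keys; the shared-secret
# HMAC below is a stand-in to keep the sketch short.
import hashlib
import hmac
import json

SECRET = b"demo-shared-secret"  # assumption: stands in for real key material

def append_entry(chain: list[dict], entry: dict) -> list[dict]:
    """Append an action record, linking it to the previous entry's digest."""
    prev = chain[-1]["digest"] if chain else "genesis"
    body = json.dumps({**entry, "prev": prev}, sort_keys=True).encode()
    digest = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return chain + [{**entry, "prev": prev, "digest": digest}]

def verify(chain: list[dict]) -> bool:
    """Re-derive every digest; editing any entry breaks the whole chain."""
    prev = "genesis"
    for e in chain:
        body = json.dumps({k: v for k, v in e.items() if k != "digest"},
                          sort_keys=True).encode()
        expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
        if e["prev"] != prev or not hmac.compare_digest(expected, e["digest"]):
            return False
        prev = e["digest"]
    return True

chain: list[dict] = []
chain = append_entry(chain, {"agent": "scheduler-01", "tool": "calendar.write"})
chain = append_entry(chain, {"agent": "payments-02", "tool": "invoice.pay"})
print(verify(chain))  # True; mutate any field and this returns False
```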

“The law has facets and precedents for handling novel issues, such as when a human is no longer behind the keyboard and responsibility cannot be cleanly mapped to a single actor,” said Microsoft’s Sean Farrell. “The big question is how the law will be applied.”

Agency & Responsibility — Assigning Risk in a Multi-Agent World

The second theme explored how responsibility and control shift as AI agents become more capable. As agents plan, use tools, store information, and learn and adapt from feedback, they change how decisions are made and how meaningful control is exercised in digital systems. Participants observed that current governance frameworks often focus on developers or deployers, prompting discussion about whether this framing is too narrow. Many argued that responsibility must instead be shared across builders, integrators, platforms, and end users, each of whom shapes the agent’s behavior in distinct ways.

Importantly, both sides of any agent-mediated transaction matter: commerce platforms will gate acceptable agent interactions based on their liability exposure, while consumers will calibrate their adoption based on theirs. If platforms bear liability, they will constrain capabilities and require strict identity and telemetry. If customers bear it, adoption will be cautious. This bilateral dynamic means that clarity on responsibility shapes the entire ecosystem.

This diffusion of responsibility highlights gaps in existing liability structures. Tort law, in particular, relies on concepts like foreseeability, proximate cause, and the “reasonable person” standard—concepts that could face strain under distributed, partially autonomous systems where human intent is only one factor among many. Some participants questioned whether current legal frameworks can adequately adapt to and address the range of harms that agents may cause. 

As a result, the group emphasized the value of pre-assigned responsibility, such as strict liability standards, which make explicit which actor is accountable for each segment of the agent lifecycle, from model configuration to tool mediation to deployment. This clarity would allow each party to implement appropriate guardrails, price risk, and apply consistent controls.

Security Implications — Moving From Model-Level Risks to System-Level Threats

Agentic security concerns consistently surfaced as the workshop’s most urgent through-line. Participants noted that agentic risks differ fundamentally from model risks, for which governance thinking is further along. Unlike static models, agents can access APIs, manipulate data, execute transactions, and trigger workflows—translating digital interactions into real-world effects, potentially with compounding consequences. A single compromised agent can escalate from benign errors to cascading failures across financial, operational, or civic systems. 

Yet organizations still rely heavily on model-centric guardrails, leaving system-level vulnerabilities unaddressed. Some participants stressed the importance of secure-by-design principles tailored to agentic workflows: scoped capabilities, per-run budgets, sandboxed tools, risk-informed defaults, and human sign-off for high-impact actions could make a difference. Effective threat modeling must map tools, memory, APIs, and prompt-injection paths before deployment—not in the aftermath of an incident.
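
The sketch below illustrates how a few of these controls might compose in practice: a tool allowlist for scoped capabilities, a per-run call budget, and a human sign-off gate for high-impact actions. The tool names, the HIGH_IMPACT set, and the GuardedToolRunner wrapper are assumptions made for the example.

```python
# Illustrative only: composing scoped capabilities, a per-run budget, and
# human sign-off for high-impact actions. Tool names and the HIGH_IMPACT
# set are assumptions made for the example.
from typing import Callable

HIGH_IMPACT = {"execute_payment", "delete_records"}  # gated by human sign-off

class GuardedToolRunner:
    def __init__(self, tools: dict[str, Callable], budget: int,
                 approve: Callable[[str, dict], bool]):
        self.tools = tools      # scoped capabilities: only these tools exist
        self.budget = budget    # per-run budget on tool calls
        self.approve = approve  # human sign-off hook

    def call(self, name: str, **kwargs):
        if name not in self.tools:                    # least privilege
            raise PermissionError(f"tool not in scope: {name}")
        if self.budget <= 0:                          # bounded blast radius
            raise RuntimeError("per-run budget exhausted")
        if name in HIGH_IMPACT and not self.approve(name, kwargs):
            raise PermissionError(f"sign-off denied for: {name}")
        self.budget -= 1
        return self.tools[name](**kwargs)

# Usage: this run can search freely but needs sign-off to move money.
runner = GuardedToolRunner(
    tools={"search": lambda q: f"results for {q!r}",
           "execute_payment": lambda amount: f"paid {amount}"},
    budget=10,
    approve=lambda name, args: input(f"allow {name} {args}? [y/N] ") == "y",
)
print(runner.call("search", q="agent security"))
```

The design choice worth noting is that the guardrails sit at the tool boundary, not inside the model: even a fully compromised agent cannot exceed the capabilities, budget, and approvals the runner enforces.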

Participants differed on whether sophisticated adversarial agents are already actively in use, but consensus emerged around the broader trajectory: the capability gap between attackers and defenders could be widening, and the blast radius of agent failures is growing. 

Where We Go Next

AI agents are rapidly reshaping the internet ecosystem, shifting more activity towards machine-to-machine interactions. That shift brings real opportunities for efficiency and innovation, but it also raises hard questions about identity, responsibility, and security. As adversaries begin probing how to hijack or impersonate agents, it’s clear these risks aren’t theoretical.

The challenge ahead isn’t only technical. It requires rethinking the basic assumptions of digital interaction: who is acting, on whose behalf, who bears responsibility, and how we maintain trust as agents move across systems. This workshop helped establish a shared baseline on these issues and outlined practical ideas for governance and safeguards that can guide near-term work.

Across all three pillars, participants surfaced several imperatives that will inform IST’s upcoming work, supported by Microsoft, which includes a study and a primer for policymakers:

  • Standards for agent identity, attribution, and provenance to improve traceability and trust.
  • Baseline safeguards for safe deployment, including least-privilege access, memory boundaries, monitoring, and kill switches (a minimal kill-switch sketch follows this list).
  • Clearer accountability models across AI builders, deployers, and users to better align liability with responsibility.
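
As one small illustration of the second item, a kill switch can be as simple as a shared flag that a monitor or operator sets and the agent checks before every action. The sketch below assumes a single-process agent loop for brevity.

```python
# Illustrative only: a kill switch as a shared flag. An operator, or a
# monitor watching telemetry, sets the flag; the agent checks it before
# every action and halts mid-run once it is set.
import threading

KILL_SWITCH = threading.Event()

def agent_loop(actions: list[str]) -> None:
    for action in actions:
        if KILL_SWITCH.is_set():  # checked before each step
            print("kill switch engaged; halting run")
            return
        print(f"executing: {action}")

agent_loop(["draft email", "schedule meeting"])
# Elsewhere, a monitor thread or operator console calls KILL_SWITCH.set()
```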

Taken together, these actions offer early anchors for governance in an agent-rich ecosystem and a path to ensuring AI agents strengthen, rather than undermine, security and trust online. And as IST continues this work, we invite partners, practitioners, and policymakers to engage with us by contributing insights, sharing challenges, and helping shape the frameworks that will govern an agent-driven future.
