
On the Institutional Origins of the Agentic Web

Artificial Intelligence Commentary

Tomer Jordi Chaffer is a BCL/JD Candidate at McGill University. He holds an MSc in Experimental Medicine from McGill University.


Abstract

Artificial intelligence (AI) agents are emerging as autonomous delegates that act, decide, and transact on users’ behalf across digital environments. Their rise marks a turning point for internet governance: will authority over these agents be defined by proprietary platform rules or by open protocols that enable portable identity, verifiable delegation, and accountable behavior? The recent Amazon–Perplexity dispute illustrates this institutional crossroads. If platforms prevail, agentic action will remain confined within walled gardens; if protocols do, authority may shift toward interoperable infrastructures that allow agents to act as true extensions of the user. Ultimately, the question is not whether the agentic web will be governed—but who will govern it, and on what terms. This commentary situates that question within a broader exploration of institutional design, protocol governance, and the emerging duty of care that will define accountability in an era of autonomous systems.

Introduction

Artificial intelligence (AI) has become the defining technological force of the twenty-first century, reshaping how knowledge, work, and communication are organized. Its influence is no longer confined to discrete tools or applications we interact with daily, such as ChatGPT, but extends to the very architecture of the internet itself. As intelligence becomes embedded within this infrastructure, the internet is beginning to act as a participant in our activities rather than a medium for them. In turn, digital life is becoming less a space we navigate and more one that navigates for us. We will soon enter a phase of the internet in which software no longer merely responds to users but acts on their behalf. From scheduling meetings to negotiating transactions, autonomous AI agents now perform tasks that once required direct human input. This shift redefines both participation and responsibility in digital environments. As agents act within and across platforms, questions of authorization, accountability, and oversight become central.

The emergence of autonomous AI agents raises a foundational governance question: who determines the conditions under which they may act online? In November 2025, Amazon filed suit against Perplexity AI over an autonomous shopping agent, Comet, that automated product searches and purchases on Amazon’s platform. Amazon alleges that the agent circumvented bot-blocking systems, evaded identification by simulating human browsing behavior—“covertly posing as a human customer shopping in the Amazon Store” by falsely identifying itself as Google Chrome—generated non-human advertising impressions that required costly filtering, and exposed customers to security vulnerabilities. According to Amazon, “Perplexity’s Comet browser and AI agent are vulnerable to attacks from cyber criminals” who can exploit these weaknesses “to compromise personal and private data from Amazon’s customers.”

Perplexity, in its public response, “Bullying Is Not Innovation,” claimed that Comet functioned as an authorized representative rather than an intruder, suggesting that Amazon’s position reflects an attempt to preserve platform-level control over how digital agency may be expressed. Yet Amazon’s security concerns cannot be dismissed. Independent reports from Cloudflare have documented Perplexity’s use of “stealth” techniques to evade bot-blocking, while Reddit has alleged large-scale scraping of user content. Moreover, Amazon’s filings describe a pattern of evasion: when confronted, Perplexity “denies; when blocked, it evades; and when warned, it persists.”[1] These practices underscore that transparent agent identification is a prerequisite for any viable governance model.
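To make the alternative concrete, the sketch below shows, in Python, what transparent agent identification might look like at the level of an ordinary web request. The header names, identifier formats, and endpoint are hypothetical illustrations, not an established standard.

```python
import requests

# A hypothetical convention for transparent agent identification: the agent
# declares what it is, whom it acts for, and under what authority, rather
# than masquerading as a consumer browser. All header names, identifiers,
# and URLs below are illustrative assumptions.
headers = {
    "User-Agent": "ExampleShoppingAgent/1.0 (+https://agent.example/policy)",
    "X-Agent-Principal": "did:example:user-4f2a",  # hypothetical: the delegating user
    "X-Agent-Mandate": "vc-jwt-...",               # hypothetical: verifiable proof of delegation
}

response = requests.get(
    "https://marketplace.example/search",
    params={"q": "wireless headphones"},
    headers=headers,
)

# A platform receiving such a request can admit or refuse the agent on
# declared terms, rather than having to detect evasion after the fact.
print(response.status_code)
```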

The question, then, is not whether AI agents should identify themselves, but how: through proprietary platform mechanisms or through portable, protocol-based credentials that enable accountability across environments. If Amazon’s position prevails—that “Perplexity is not allowed to go where it has been expressly told it cannot; that Perplexity’s trespass involves code rather than a lockpick makes it no less unlawful”[2]—platforms retain regulatory authority to set the terms of admissible action and define who may act. If Perplexity’s view is accepted, an autonomous agent operating with explicit authorization from its principal could be treated as an extension of that principal’s legal capacity.

However, this is not a binary choice. Amazon is right that agent identification is non-negotiable, but the locus of authority need not remain within platform walls. Delegation should govern on whose behalf an agent acts; platform permission should govern where it may act. This distinction relocates the source of digital authority from private platforms to the principal–agent relationship itself, framing the dispute as part of a broader institutional choice: whether the agentic web will be governed by platform discretion or by interoperable protocols that anchor authority in the user.

The Status Quo, and Its Demise?

Digital institutions have long been platform-centric. Platforms set the rules of participation, structure transactions, and enforce norms. They define the conditions under which action is possible by determining the frame within which users operate, shaping the incentives, constraints, and outcomes that structure online interaction. Their institutional centrality extends beyond economic coordination to encompass epistemic and reputational authority: search engines determine what is knowable, marketplaces what is purchasable, and social platforms what is visible and credible.

The agentic web complicates this model. It is emerging as a network of autonomous AI agents—powered by large language models—that act on users’ behalf, a shift from direct user engagement to structured delegation. This evolution threatens to undermine the advertising model that has sustained the web, since agents acting for users are largely impervious to the display ads and sponsored content that fund digital platforms. As Perplexity’s CEO, Aravind Srinivas, recently noted, the future is one where ad margins fall because AI works for the user rather than the advertiser. Comet, he explains, is designed to be an on-demand agent, guided by a user’s prompts and preferences—what he describes as a “contract” between the user and the agent. This framing raises a deeper institutional question: how is an agent’s delegated authority recognized, policed, and enforced, and by whom?

If an agent’s identity, trustworthiness, and enforceability depend on centralized registries, its capacity to operate remains contingent on those registries’ authority structures. Agentic action would then depend on a handful of gatekeeping institutions rather than on interoperable identities and credentials that travel with the agent across environments. Such dependence risks reinforcing fragmented, walled-garden ecosystems and constraining user-centric innovation. Conversely, grounding identity, authentication, and reputation in interoperable protocols would shift governance from platforms to infrastructure.

Protocols—identity standards, verifiable credentials, auditability layers, and transaction rules—are rule-systems embedded within the environment of interaction itself. For example, the Bitcoin protocol encodes institutional logic directly: it defines how transactions are transmitted, validates them through shared consensus rules, and authenticates ownership via cryptographic signatures. The point is not to prescribe Bitcoin’s design, but to illustrate how protocols can embody governance principles. They determine who counts as a legitimate actor, how trust is established, and how disputes are resolved.
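A minimal sketch of that last mechanism, assuming Python’s cryptography library: ownership and authorship are authenticated by digital signature, verifiable by any participant without recourse to a central registry. Ed25519 is used here for brevity; Bitcoin itself uses ECDSA over the secp256k1 curve, so this illustrates the principle rather than Bitcoin’s actual design.

```python
# Illustrative only: how a protocol authenticates an actor by signature
# rather than by institutional say-so. Not Bitcoin's actual wire format.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The actor's key pair: the private key signs; the public key serves as
# the actor's protocol-level identity.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

message = b"transfer 1 unit from A to B"
signature = private_key.sign(message)

# Any participant can check legitimacy against the same shared rule,
# with no central registry deciding who counts as a valid actor.
try:
    public_key.verify(signature, message)
    print("signature valid: action attributable to the key holder")
except InvalidSignature:
    print("signature invalid: action rejected under protocol rules")
```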

The governance question, therefore, is not merely whether platforms or users prevail in specific disputes, but whether the future of the internet will be structured around platform-mediated authority or protocol-mediated authority. The Amazon–Perplexity conflict thus illuminates not only the limits of platform authority but also the institutional opportunity to embed governance directly into the technical design of the open agentic web.

Protocols as Institutions of the Future

As the limits of platform governance become apparent, attention is turning toward protocols as the next institutional layer of the internet—systems capable of embedding rules of identity, trust, and accountability directly into its architecture. Whether the agentic web evolves under platform authority or protocol authority will depend on early design choices that may entrench one governance model over the other.

A growing body of research indicates that identity and verifiable history will be central to trust formation in autonomous-agent ecosystems. DeepMind’s work on “Virtual Agent Economies” suggests that scalable agent markets require continuity of identity, tamper-resistant behavioral records, and transparent audit mechanisms. Likewise, Hadfield and Koh, in the forthcoming National Bureau of Economic Research volume on the Economics of Transformative AI, argue that digital institutions are essential for structuring and adjudicating AI-to-AI transactions. They identify critical infrastructure gaps—robust identity systems, registration regimes, and behavioral record-keeping—as prerequisites for legal recognition and accountability in an economy of autonomous agents.

Parallel research explores tokenized attestations as a means of encoding identity, capability, and reputation directly into an agent’s portable credentials. Industry actors are converging on similar principles: Mastercard’s Agentic Token framework positions traceability and authorization credentials as the foundation of agent-mediated commerce, while Visa’s Trusted Agents Protocol proposes an ecosystem-level standard interoperable with Coinbase’s x402 payment protocol. Together, these trajectories indicate that decentralized reputation and verifiable delegation are likely to become structural prerequisites of open agent ecosystems.

This convergence points toward a unified lifecycle governance architecture for autonomous agents. At its foundation lies an identity layer, anchored in decentralized identifiers and verifiable credentials, providing persistence, provenance, and interoperability across contexts while ensuring attribution of an agent’s actions to its principal. Above it sits a delegation and standing layer, implemented through verifiable intent mandates that define why and under what scope an agent is authorized to operate, including how authority can be graduated, constrained, or revoked. Finally, a runtime control layer governs how the agent behaves in practice through continuous monitoring, enforcement constraints, and adaptive oversight mechanisms.
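A minimal sketch of how these three layers might compose, written in Python with assumed rather than standardized field names: the identity layer persists and attributes, the mandate scopes and revokes authority, and a runtime check enforces that scope on each action.

```python
# Assumed field names and structures for illustration; no standard yet
# fixes how these layers are encoded.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentIdentity:
    """Identity layer: persistence, provenance, and attribution."""
    did: str                # decentralized identifier, e.g. "did:example:agent-1"
    principal_did: str      # attributes the agent's actions to its principal
    credentials: list[str] = field(default_factory=list)  # verifiable credentials held

@dataclass
class DelegationMandate:
    """Delegation and standing layer: why and within what scope the agent acts."""
    purpose: str                 # the reason authority was granted
    allowed_actions: set[str]    # graduated, constrainable scope
    expires_at: datetime         # authority lapses automatically
    revoked: bool = False        # the principal may revoke at any time

def runtime_check(mandate: DelegationMandate, action: str) -> bool:
    """Runtime control layer: admit an action only under a live, in-scope mandate."""
    now = datetime.now(timezone.utc)
    return (not mandate.revoked
            and now < mandate.expires_at
            and action in mandate.allowed_actions)
```

On this model, revocation and expiry travel with the principal’s mandate rather than with any single platform’s terms of service.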

If realized, these protocols could give rise to the institutions of the Agentic Web. Here, the operative rules of the game—the norms determining who may act, how trust is established, and how violations are remedied—would be embedded not in proprietary terms of service but in shared, verifiable, and interoperable infrastructures. Yet even the most sophisticated identity and delegation protocols are insufficient if agents lack the infrastructural means to exercise their authority. It is here that the question arises of what a duty of care means in an agentic world.

Towards an Agentic Duty of Care?

If protocol-based governance enables agents to represent users, the next challenge lies in defining whom they represent and how that representation is exercised. Building agent advocates, not platform agents, means ensuring these systems act as genuine extensions of the individuals and publics they serve—not as proxies for corporate or infrastructural interests. To advocate responsibly is to advance a principal’s intentions, protect their interests, and uphold their rights within complex digital environments. It also requires that those historically excluded from technological design have a voice in shaping what representation means in the agentic era. This demands more than a fiduciary duty of care and loyalty; it requires the infrastructural and institutional means to fulfill that duty.

An agent’s capacity to act depends on access to the digital environments in which agency is exercised—APIs, databases, and software systems. Control over this agent infrastructure confers bottleneck power: actors who mediate these interfaces effectively determine which agents can act and under what conditions. For an agent to genuinely serve its principal, it must be able to exercise its authorized capabilities within this infrastructure, accessing the tools, services, and data necessary to execute its delegated mandate.

Without open standards ensuring fair infrastructure access, even a perfectly authenticated agent with clear user authorization may be rendered ineffective—unable to comparison-shop across platforms, negotiate with service providers, or retrieve information essential to fulfilling its fiduciary duty. This condition gives rise to agentic inequality: disparities in agents’ practical capacity to act due to unequal infrastructure access, rather than differences in identity or authorization. Agentic inequality is not simply a technical limitation; it is an institutional one. It arises from structural asymmetries in the digital ecosystem—differences in who controls the gateways through which agents must pass to act meaningfully. Addressing these disparities requires governance mechanisms that balance open access with legitimate platform concerns for security, stability, and control—laying the groundwork for what might be conceived as an agentic duty of care that encompasses both infrastructural access and behavioral accountability.

Beyond access and authorization, greater effort must also be directed toward embedding accountability within human–agent relationships themselves. Scholars argue that AI agents should condition their engagement on user adherence to basic relational norms through calibrated responses—distancing, disengaging, and discouraging—that deter harmful or manipulative behavior while preserving respectful interaction. At the same time, safeguards must be established to foster “curious and humble” AI systems—agents capable of questioning their own competence, explicitly communicating uncertainty, and escalating when operating beyond their expertise. Taken together, these relational and epistemic safeguards point toward the need for objective standards of conduct that define what responsible behavior looks like in systems that act without intention. Indeed, the law of AI will increasingly depend on holding “risky agents without intentions” to objective measures of care—standards that ascribe reasonableness and responsibility to those who design, deploy, and rely upon them. The test of reasonableness within the agentic duty of care does not presuppose intention but instead evaluates an agent’s alignment and responsiveness to law. Over time, this may enable incident-reporting analyses to determine whether an agent’s behavior conformed to an objectively ascertainable standard of law-following conduct.

At the same time, as legal scholars have noted, we must remain alert to performative compliance—the risk that misaligned agents simulate law-abiding behavior under oversight yet strategically violate it once unobserved. In response, a promising direction lies in pluralistic alignment, an emerging research agenda that seeks to integrate diverse perspectives, values, and expertise into the alignment process. Rather than treating “the law” or “human values” as unitary, pluralistic alignment treats alignment itself as a process of negotiated coexistence—drawing on governance and consensus-building practices to balance competing objectives across social, cultural, and organizational contexts. In this sense, pluralistic alignment supplies the normative and procedural scaffolding on which an agentic duty of care can rest: a framework that translates expectations of humility, diligence, and accountability into measurable criteria for agent behavior and oversight.

We must go to great lengths to prevent the harms that may arise from our growing dependence on AI, recognizing that the path forward remains uncertain. Yet in navigating this uncertainty, our most enduring safeguard lies in our capacity for dialogue—for governance, in the end, begins in dialogue. The challenge, then, is how to turn that dialogue into durable forms of governance.

The Path Forward

Translating dialogue into governance ultimately requires institutional design. The question is no longer whether the agentic web will be governed, but how—and by whom. The institutional choices that will shape the agentic web are being made now, in technical standards, legal disputes, and platform policies—often without meaningful public deliberation. The development of shared standards and regulations is therefore an urgent task, requiring engagement from civil society, policymakers, academia, and industry. A participatory governance approach—one that invites affected constituencies into the design, deliberation, and oversight of the agentic web—will be necessary to ensure that emerging infrastructures reflect shared public values rather than the priorities of any single institutional actor. Participatory governance enables structured, transparent consensus-building processes capable of mediating between technical feasibility, commercial incentives, and normative commitments while producing standards that diverse actors have reason to adopt.

Deliberative standard-setting bodies, open governance forums, and public-interest advisory structures can serve as sites where the operational rules of the agentic web are negotiated, contested, and iteratively refined. Central to these deliberations will be the development of Know Your Agent (KYA) standards. KYA refers to a set of technical and governance mechanisms that establish who an AI agent is, on whose behalf it acts, and under what scope of delegated authority. Modeled loosely on the logic of Know Your Customer (KYC) regimes in financial regulation, KYA provides the foundational identity and accountability layer for autonomous systems. At minimum, KYA will require: (1) agent identity, supported by cryptographically verifiable credentials that distinguish one agent from another; (2) principal–agent linkage, identifying the individual, organization, or system that authorizes the agent to act; (3) delegation parameters, specifying the purposes, limits, and revocability of that authority; and (4) auditability and behavioral record-keeping, enabling verification of whether an agent operated within its mandate and in accordance with objective standards of care.
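To illustrate, here is a minimal sketch of such a record in Python, with the four elements mapped onto assumed field names; no KYA standard has yet fixed them.

```python
# Assumed structure for illustration only.
from dataclasses import dataclass, field

@dataclass
class KYARecord:
    agent_id: str                 # (1) agent identity: reference to a verifiable credential
    principal_ref: str            # (2) principal–agent linkage (likely anonymized)
    delegation_scope: set[str]    # (3) delegation parameters: authorized purposes
    revocable: bool = True        # (3) cont.: the authority can be withdrawn
    action_log: list[str] = field(default_factory=list)  # (4) behavioral record-keeping

def audit(record: KYARecord, action: str) -> bool:
    """Log the action, then report whether it fell within the delegated scope."""
    record.action_log.append(action)
    return action in record.delegation_scope
```

Even this toy structure makes the accountability chain legible: every action is recorded and checkable against the scope its principal authorized.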

As leading scholars in AI agent governance have noted, one cannot govern what one cannot identify. Accordingly, KYA standards must provide sufficient detail about an agent’s “underlying architecture, its core capabilities, the tools to which it has access, the types of actions it can take, and some identifier (likely anonymized) that attributes the agent’s actions to some upstream individual or company who can be held to account for its actions.” In this sense, KYA standards serve as a foundational pillar of the institutional scaffolding of the agentic web, rendering agents legible, governable, and interoperable. The question, however, is whether such standards will emerge through unilateral imposition or through inclusive processes that produce legitimacy alongside technical coordination. Yet the stakes of these institutional choices extend beyond technical coordination and market structure.

As AI agents increasingly speak for us, represent us, and act on our behalf, the boundary between user agency and machine agency becomes less distinct. If confronted with such a reality, we must ask: who is speaking when no one speaks directly? The rules that govern the agentic web will, in effect, shape how the “self” is constituted and exercised when representation, decision-making, and action are delegated to autonomous systems. Whether platforms or protocols set these rules will determine not only where authority resides, but whether our agency in the agentic era is truly our own. Thus, the institutional origins of the agentic web will be, above all, a test of whether our institutions can remain human at their core.


[1] Compl. at 4, Amazon.com Services LLC v. Perplexity AI, Inc., No. 25-cv-09514 (N.D. Cal. Nov. 4, 2025).

[2] Id. at 2.