AI on Our Terms

Kevin Frazier is an AI Innovation and Law Fellow with the University of Texas at Austin School of Law.


Introduction

The impending wave of AI agents represents the most significant shift in consumer technology since the iPhone’s 2007 release revolutionized personal computing. These digital assistants will transform how we interact with technology, businesses, and even each other. Soon, life without an AI agent will become as unthinkable as functioning without a smartphone today. No reasonable person will waste time booking their own travel, sorting through their emails, or managing their calendar when an AI agent can autonomously perform these tasks with greater efficiency and precision. Take a look at your unread emails, and you’ll see why nearly everyone will soon be shopping for an agent.

This technological shift presents a chance to reform the flawed terms-of-service ecosystem that has long hindered digital relationships.

The Current Terms of Service Dystopia

Today's terms of service are a market failure. They make a mockery of informed consent. Reading the terms for everyday products like Microsoft Teams would take the average American many hours. Legal scholar Daniel Solove calls this a “fiction of consent,” as users have no practical choice but to blindly accept whatever terms companies offer. This system’s current characteristics are inadequate for an AI-agent ecosystem.

Impenetrable legalese dominates these agreements. A 2019 study found major technology platforms write their terms of service using language suited for an academic, not the average adult. Terms of service employ dense legal terminology, convoluted sentence structures, and abstract concepts designed to obscure rather than inform. Everyday readers struggle to make sense of the terms (assuming they take the time to read through them!).

The presentation format itself also discourages engagement and comprehension. Terms often appear in tiny, low-contrast fonts, buried behind multiple clicks, and formatted as uninterrupted walls of text without meaningful organization or emphasis on critical provisions. Worse still, many platforms use dark patterns to make acceptance effortless and refusal difficult.

The timing of terms presentation further undermines meaningful choice. Users typically encounter these agreements only after they have already invested time and money creating accounts, downloading software, or purchasing devices. At this late stage, refusal is psychologically and practically costly. Users must either accept all terms or get nothing; partial acceptance is rarely an option.

Perhaps most troublingly, companies retain unilateral amendment powers that render any initial agreement essentially meaningless. Many terms of service include provisions allowing companies to change terms at will with minimal notice, and continued use constitutes acceptance. These sorts of “quiet changes” to privacy practices have drawn the attention of the Federal Trade Commission. Recent FTC guidance warned that companies cannot retroactively alter their privacy policies to further their business interests; they must first inform consumers before making those changes. Even with that guidance in place, however, companies remain free to notify users of changes at inconvenient times and in inconvenient formats. It’s no surprise that many of us can recall receiving a Friday afternoon email about such changes, only to send it straight to the trash folder. In effect, this scheme allows companies to adopt new privacy policies as they see fit. This one-sided arrangement lets companies secure initial user bases with favorable terms before gradually eroding user rights through subsequent modifications.

The Data Appetite of AI Agents

AI agents represent an unprecedented expansion in personal data collection and use. Unlike passive applications, AI agents function as persistent digital companions that continuously monitor, learn from, and adapt to user behavior. Their effectiveness depends on expansive data access. This reality creates both opportunities and dangers that current terms-of-service frameworks are ill-suited to address.

These agents will necessarily collect and store detailed records of user preferences, habits, and personal information. An effective travel agent will remember your seating preferences, frequent destinations, loyalty program memberships, and even whom you prefer to travel with. A calendar assistant will know your schedule, relationships, and priorities. An email manager will learn your communication style, important contacts, and response patterns.

Beyond conscious preferences, AI agents will build vast repositories of implied preferences through behavioral observation. They will note which emails you open first, how quickly you respond to different contacts, when you prefer to do certain activities, and countless other patterns that users themselves may not consciously recognize. This passive data collection creates an intimate profile far more comprehensive than anything users might actively provide. Some users may view this intrusion as a price worth paying for convenience. But those same users may question that “bargain” if and when that profile is leaked or purchased by a third party.

The long-term storage of personal memories within agent systems also raises privacy concerns. Unlike discrete transactions or ephemeral searches, agent-collected data builds a longitudinal record that becomes more valuable and more sensitive over time. Early interactions establish baseline preferences that inform all subsequent agent behaviors. This creates powerful lock-in effects where switching providers means abandoning years of personalized learning and starting over with a new system.

Most concerning, these agents will serve as central repositories for data dispersed across multiple services. While a user might previously have maintained separate profiles for travel services, email providers, social media platforms, and productivity tools, an effective AI agent consolidates these data streams into a unified system. This concentration raises unprecedented privacy and security risks that users must understand before adoption.

Essential Terms of Service Requiring Meaningful Consumer Choice

Before AI agents become as ubiquitous as the iPhone, it is necessary to think about the content, timing, and format of terms of service that consumers should expect from companies.

Comprehensive Data Collection and Processing Terms

Users need clarity about what data their AI agents will gather. Terms must explain whether agents passively monitor conversations, calendar entries, location, browsing history, or other personal data sources. The scope, frequency, and invasiveness of this monitoring should be clearly delineated, not hidden behind vague phrases like “service improvement.”

Terms must clearly cover how AI agents process personal memories. Users should understand how the system identifies, stores, and utilizes memories of past interactions, preferences, and behaviors. Terms should explain how long memories are stored, whether users can delete specific memories, and how memory systems distinguish casual preferences from firm requirements.

How user data is used to train other AI systems must be clear. Users need to know whether their interactions contribute to company-wide model training, what anonymization processes protect their privacy in these scenarios, and whether they can opt out of such contributions without losing essential functionality. The practice of burying training provisions deep within privacy policies must also end.

Finally, third-party data sharing arrangements demand heightened scrutiny. Again, users deserve comprehensive disclosure of which external entities receive their data, for what specific purposes, and with what limitations. Categories like “trusted partners” or “service providers” are too opaque and vague. Terms should identify who is getting what data, why, and for how long they can keep it. Some parties may not have users’ best interests in mind when they obtain a detailed, sensitive profile reflecting years of daily decisions.

Agency and Authority Parameters

Unlike passive software, AI agents will increasingly act on their own. Fully autonomous AI agents have yet to become broadly accessible to the public. That said, demonstrations of products currently under development, such as OpenAI’s Operator and Deep Research, make their autonomous capabilities clear.

This agency raises novel questions that terms of service must address. Users need clear parameters about what actions their agents can take without specific approval. Can an agent book travel, purchase products, or respond to messages autonomously? What financial limits constrain these activities? What verification steps must precede consequential actions?

Terms must allocate liability for agent-initiated activities. If an agent books non-refundable travel based on a misunderstood preference, who bears the financial responsibility? If an agent sends an inappropriate message to a professional contact, how is reputational damage addressed? These scenarios require resolution mechanisms specified in advance, not improvised after the fact. Who among us would like to wake up to an out-of-the-blue first-class ticket from LA to NYC simply because their agent decided they needed to catch some sleep on the flight?

The delegation boundaries between user and agent should be set forth in detailed yet understandable language. Terms should specify whether users can restrict agent activities to specific domains, whether certain high-stakes actions always require explicit approval, and what override mechanisms exist for users when agents exceed their authority. These provisions should include concrete examples illustrating boundary cases rather than abstract principles. For example, terms could note that when tasked with booking a flight, your agent should first receive your maximum budget.
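To make the idea of delegation boundaries concrete, here is a purely illustrative sketch of how such a policy might be expressed in code. Every name, action category, and spending limit below is a hypothetical assumption for illustration, not any provider’s actual system or API.

```python
from dataclasses import dataclass, field

@dataclass
class DelegationPolicy:
    """Hypothetical user-set boundaries on what an agent may do unprompted."""
    # Actions the agent may take without asking first (illustrative names).
    autonomous_actions: set = field(default_factory=lambda: {"sort_email", "draft_reply"})
    # Actions that always require explicit user approval, regardless of cost.
    approval_required: set = field(default_factory=lambda: {"purchase", "send_message"})
    # Maximum spend (in dollars) permitted for any autonomous action.
    max_autonomous_spend: float = 0.0

    def can_act_autonomously(self, action: str, cost: float = 0.0) -> bool:
        """True only if the action is whitelisted and within the budget cap."""
        if action in self.approval_required:
            return False
        return action in self.autonomous_actions and cost <= self.max_autonomous_spend

policy = DelegationPolicy(max_autonomous_spend=50.0)
print(policy.can_act_autonomously("sort_email"))           # allowed
print(policy.can_act_autonomously("purchase", cost=25.0))  # blocked: approval always required
```

A policy like this, surfaced in plain language in the terms themselves, would let a user see at a glance which actions their agent can take unsupervised and where the hard stops are.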

Surveillance and Intervention Framework

AI agents will always be “on.” As OpenAI points out, once its Deep Research tool is given a task, it works behind the scenes to accomplish whatever end the user assigned to it. For now, OpenAI expects Deep Research to complete most user-assigned tasks within five to thirty minutes. In the near future, however, users will likely have systems of agents working together on much lengthier and more substantive projects, such as planning a new product launch or buying a home. Full use of such agent systems hinges on the system’s ability to learn about and store the user’s preferences, financial status, and health. IBM’s Cole Stryker explains, “AI agents with memory can retain context, recognize patterns over time and adapt based on past interactions. This capability is essential for goal-oriented AI applications, where feedback loops, knowledge bases and adaptive learning are required.”

Memory-driven AI agents necessitate clear terms covering when and how they monitor user activities. Terms should specify whether agents listen continuously for wake words (e.g., “Siri” or “Alexa”), whether they process ambient conversations for context (something your friends would surely like to know), and whether they analyze activity across devices. Users deserve to know what stays private—even from their agents.

Company-side surveillance represents another area requiring disclosure. Terms should state whether human employees review agent-user interactions, what conditions prompt such review, and what anonymization protections apply. The practice of burying human review disclosures in technical documentation should become a vestige of the Web 2.0 era.

Finally, defining when a company can intervene in user interactions and alter user preferences raises critical questions for terms of service agreements. Under what circumstances can the providing company override user settings or modify agent behavior? What security concerns, legal requirements, or policy violations might trigger such interventions? What notification requirements apply when interventions occur? These questions demand explicit, detailed answers in binding terms.

Customization and Portability Rights

The personal nature of AI agent relationships raises questions about user modification rights. Terms should establish whether users can adjust underlying algorithms, fine-tune agent behavior, or restrict response patterns. Companies should delineate the boundaries between permitted customization and prohibited manipulation so that consumers do not once again find themselves searching for ways to jailbreak an app.

As users invest time personalizing their agents, portability concerns become increasingly significant. Terms should address whether users can transfer their personalized agent profile to another provider, what technical formats facilitate such transfers, and what elements of personalization remain proprietary to the original provider. The risk of “data hostage situations”—where years of personalization make switching prohibitive—deserves regulatory attention. Thankfully, Tennessee Attorney General Jonathan Skrmetti has already identified and flagged this issue. Let’s hope others follow his lead.

Interoperability with competing services similarly requires clear terms. Can an agent book travel through multiple competing platforms? Can it access data from rival productivity suites? What limitations do proprietary interfaces impose on cross-platform functionality? These questions directly impact the practical utility of agents and likewise warrant prominent disclosure. Consumers selecting an AI agent should not find themselves inadvertently selecting a slew of related products as well. Consumers should have the means to direct their agent to preferred vendors for specific services.

Reforming Delivery: From Wall of Text to Meaningful Choice

The extensive requirements outlined above might seem overwhelming, potentially leading to the same “click-through fatigue” that plagues current terms of service presentations. The approach described below aims to avoid this pitfall by fundamentally redesigning how terms are delivered, making meaningful engagement practical rather than theoretical through layered information, visual tools, and thoughtful decision architecture.

Tiered and Contextual Disclosure

Terms should appear in progressive layers of detail tailored to user needs and contexts. The primary tier should provide a standardized, one-page summary of key provisions using consistent formatting across providers. This summary should use plain language at an eighth-grade reading level, employ bullet points for clarity, and highlight terms that deviate from industry norms or user expectations.

The secondary tier should offer plain-language explanations illustrating real-world implications. Rather than abstract statements about data collection, this tier should demonstrate how specific user activities translate into stored data elements. Instead of vague authority provisions, it should walk through scenarios showing when agents can act independently versus when they require user approval.

The tertiary tier can provide the complete legal text for reference. This comprehensive layer serves important due diligence and regulatory compliance functions while ensuring legal enforceability. However, unlike current practice, this layer should not represent the primary or only presentation of terms. Its existence should supplement rather than replace more accessible formats. That said, the exact legal text would still bind consumers.

How best to ensure that primary and secondary tiers reflect the binding legal language will be a challenge. Companies, regulators, and civil society groups can refine their approaches over time. At a minimum, these stakeholders should prioritize ensuring that the provisions most important to users or with the greatest impact on their use of the agent accurately convey the legal implications at each level.

Choice Architecture Reform

The structure of choices matters as much as their content. The current binary approach—accept everything or use nothing—should give way to genuinely granular options. Users should be able to accept core functionality while rejecting enhanced features with greater privacy implications. They should be able to authorize specific categories of action while restricting others. (You may never want your agent to contact your mom, for example—some communications are best left to humans.)

Default settings also merit reform. Current practice typically selects the most company-favorable options by default, requiring active user intervention to protect privacy or restrict data use. This approach exploits well-documented status quo bias in decision-making. Reformed terms should establish neutral defaults that balance user privacy with system functionality, reserving opt-in requirements for especially invasive features.

Temporal Considerations: When Should Terms Be Revisited?

One final reform: terms should be revisited at logical junctures in the user-agent relationship rather than at times dictated solely by the company, as with the late-night emails announcing privacy policy changes. Capability expansions represent natural review opportunities—when an agent gains the ability to manage finances in addition to scheduling, users should consider the implications of this expanded authority. Usage pattern shifts similarly justify review—when an agent begins managing professional communications after handling only personal ones, terms should be reconsidered. Data threshold crossings provide another natural review point. When the volume or sensitivity of collected data crosses certain benchmarks, users should receive notification and reconsideration opportunities. These thresholds might include time-based measures (six months of continuous use), volume-based metrics (1,000 stored preferences), or sensitivity escalations (addition of financial data to previously lifestyle-focused profiles).
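A threshold-based review trigger of the kind just described could be sketched as follows. The specific cutoffs (roughly six months, 1,000 preferences, the “financial” and “health” categories) mirror the examples in the text and are illustrative assumptions only, not an existing standard or any company’s practice.

```python
from datetime import date, timedelta

# Illustrative sensitive-data categories whose addition should prompt review.
SENSITIVE_CATEGORIES = {"financial", "health"}

def review_due(first_use: date, today: date,
               stored_preferences: int,
               new_data_categories: set) -> bool:
    """Return True if any threshold crossing should prompt a terms review."""
    time_trigger = (today - first_use) >= timedelta(days=182)  # ~six months of use
    volume_trigger = stored_preferences >= 1000                # stored-preference count
    sensitivity_trigger = bool(new_data_categories & SENSITIVE_CATEGORIES)
    return time_trigger or volume_trigger or sensitivity_trigger

# A one-month-old, lifestyle-only profile that just gained financial data
# would trigger a review on the sensitivity criterion alone.
print(review_due(date(2025, 1, 1), date(2025, 2, 1), 200, {"financial"}))
```

The point is not the particular numbers but that the triggers are objective and checkable, so neither the user nor the company has to rely on ad hoc judgment about when re-consent is owed.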

Ecosystem integration events—where agents connect to new services or environments—also represent important review opportunities. When an email-focused agent gains access to shopping accounts, health records, or home control systems, the implications change substantially because of the range of possible tasks they may take on with little to no oversight. Terms review should precede rather than follow such expansions.

Conclusion: Seizing the Transformative Moment

The imminent AI agent revolution provides a rare opportunity to correct fundamental flaws in our digital contracting system. Rather than repeating the smartphone era’s mistake of establishing consumer relationships on deceptive foundations, we can create a new paradigm where terms of service become tools for informed choice rather than instruments of exploitation. The intimate nature of AI agent relationships—with their unprecedented access to personal information, memories, and decision-making authority—demands nothing less.

This transformation requires multi-stakeholder cooperation: companies willing to experiment with new models, regulators establishing baseline requirements, courts recognizing substantive rather than formal consent, and consumers insisting on being treated with respect. The legal profession, in particular, must abandon the practice of crafting deliberately obfuscatory terms and instead apply its expertise to creating genuinely understandable agreements.

The stakes could not be higher. As AI agents become our digital representatives, interpreters, and assistants, the terms governing these relationships will fundamentally shape the next era of digital life. Now is the time to ensure those terms serve both innovation and human autonomy, creating a technological future where convenience does not come at the cost of comprehension and control.