Feature

AI Terms of Service Hope You Don't Read the Fine Print

By David Barry
AI companies market powerful tools, but at the same time their terms shift most of the risk to users if anything goes wrong.

The terms of service that OpenAI, Microsoft, Google and Anthropic ship with their products have something in common: The provider isn't responsible for how users use or act on what the AI produces.

Every major enterprise AI provider disclaims warranties on output accuracy, excludes consequential losses and typically caps damages at the preceding 12 months' fees. OpenAI supplies its services "as is." Microsoft’s terms say users are responsible for how they use Copilot outputs, and Microsoft does not guarantee the outputs are accurate or suitable for decision-making. The wording varies but the intent is similar.

Researchers identified this same pattern in a comparative analysis of commercial AI vendor contracts, with major providers converging on warranty disclaimers, liability caps and user responsibility for outputs. The question isn't if this approach will break in the enterprise, it's when.

Why AI Terms of Service Were Written This Way

The standard disclaimer model made commercial sense when a human reviewed every output before acting on it. What changed is the speed: These products reached enterprise buyers before courts, regulators or procurement teams were ready.

Providers have written a liability allocation that suits them, not a defensible legal position, said Mark Khater, head of the Centre for Strategy and Performance at the University of Cambridge, who has been developing AI systems since 1994 and has advised governments, central banks and sovereign wealth funds on AI policy.

"The disclaimer model was designed for a world where software is a tool and the operator controls what comes out," Khater said. "Agentic AI breaks that logic entirely." When a system initiates transactions, drafts communications or makes sequential decisions without a human reviewing each step, the paradigm that the user is running the tool and therefore owns the outcome does not hold.

The AI industry's defense — that outputs are probabilistic and cannot be guaranteed — has some force for a consumer chatbot. It has far less force when the same model is sold, with minor modifications and a premium price tag, as enterprise-grade infrastructure for consequential decisions. Courts have not yet tested these terms against specific facts. When they do, providers may find the gap between what their contracts say and what the law requires is larger than their legal teams have publicly acknowledged.

SaaS Liabilities Don’t Fit

AI liability terms come from the standard enterprise SaaS playbook: capped damages, disclaimers of warranty, exclusions of consequential damages, according to Jason Barnwell, chief legal officer at Agiloft. Output disclaimers are more explicit because outputs are probabilistic, but the underlying structure is the same contract enterprises have accepted for decades from conventional software vendors.

The problem is that conventional software is deterministic. You put in X, you get Y. When it breaks, you can identify where it broke. "AI doesn't work that way,” said Kevin Williams, founder and CEO of Ascend AI Labs. “It's probabilistic. It's going to be wrong sometimes in ways you can't predict, and the providers have written their terms to reflect exactly that reality."

A consistent pattern Williams sees in his advisory work is enterprises deploying Copilot or ChatGPT to hundreds of employees using roughly the same diligence they would apply to signing up for a newsletter. It is a mismatch between how fast these products reached the market and how long it takes institutional governance structures to catch up.

"We will see what the market bears, but probably, yes, many agentic offerings will be made available with terms that shift most of the risk to customers, for a time,” Barnwell said. Gross negligence and fraud remain non-waivable on both sides of the Atlantic, but those are the floor. Most enterprise AI contracts sit nowhere near the ceiling of what providers could reasonably concede.

Agentic AI Is the Real Stress Test

The liability question is different when the AI system is completing tasks autonomously on a human's behalf.

"The liability frameworks we have were designed for a world where a human made the final call," Williams said. "We're moving pretty fast into a world where they don't." When an agent executes a transaction, sends a communication or makes a series of sequential decisions without a human checkpoint, the output is the action. Under current terms of service, that action is the user's responsibility.

Enterprises can’t treat agent governance as a future problem, said Cobus Greyling, chief AI evangelist at Kore.ai. "The regulatory environment has removed that option," he said. For organizations operating across industries, it’s worse, because each jurisdiction brings its own discrete governance structure and liability accumulates in the gaps between them.

Disclosure is where the legal reckoning will come, Khater said. "What the provider knew about the system's limitations and when is where the real exposure will eventually be litigated," he said.

Consider a financial institution that suffers material loss because an AI agent executed a transaction on a flawed inference. If the institution can demonstrate that the provider knew about specific problems and did not disclose them, it is in a materially different legal position than the standard terms of service assume. That exposure is not yet priced into the standard contract.

Who Rewrites the Terms, and When?

There is broad agreement that the current framework will not hold indefinitely. There is far less agreement on what breaks it first.

Regulatory intervention is the obvious candidate but probably not the fastest mechanism. The EU AI Act establishes compliance obligations for high-risk AI systems but does not create private rights of action for buyers harmed by non-compliant systems. Enforcement runs through national market surveillance authorities, not through contractual remedies calibrated to actual losses.

In Khater's view, the insurance industry will move faster: As enterprise buyers seek coverage for AI-related losses, underwriters will demand evidence of contractual risk-sharing between buyer and provider. Providers that refuse will find their enterprise customers unable to obtain adequate coverage.

For Williams, the catalyst is simpler: the first major court verdict. Organizations that survive it will be the ones that built audit trails proving a real person reviewed the output before action was taken.
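What such an audit trail might look like in practice is straightforward to sketch. The example below is illustrative only, assuming a simple append-only log file; the function name, log format and fields are hypothetical, not drawn from any provider's tooling.

```python
import json
import time

def human_checkpoint(output: str, reviewer: str, approved: bool,
                     log_path: str = "audit_log.jsonl") -> bool:
    """Record that a named person reviewed an AI output before any action was taken."""
    entry = {
        "timestamp": time.time(),          # when the review happened
        "reviewer": reviewer,              # who made the final call
        "approved": approved,              # their decision
        "output_excerpt": output[:200],    # what they reviewed
    }
    # Append-only JSON Lines log: one record per review decision.
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return approved

# Only act when a human has explicitly approved the output.
if human_checkpoint("Draft wire transfer instruction ...",
                    reviewer="j.doe", approved=True):
    pass  # proceed with the action
```

The point of the design is the ordering: the log entry is written before any action is taken, so the record of who approved what survives regardless of how the downstream action turns out.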

For enterprise buyers who cannot wait, Khater identifies three aspects worth negotiating now:

  1. An accuracy warranty calibrated to the specific use case
  2. A partial indemnification provision for losses from material inaccuracies in high-stakes outputs
  3. A transparency obligation covering known problems, with ongoing disclosure as new ones emerge

Large financial institutions have the procurement power to demand all three. Most are not using it.


Terms of service that govern AI in the enterprise were written before autonomous agents existed at scale, before courts had weighed in and before buyers understood what they were signing. That moment has passed, but the terms have not changed. The question is what it will take to change them.


About the Author
David Barry

David is a Europe-based journalist with 35 years' experience who has spent the last 15 covering workplace technologies, from the early days of document management, enterprise content management and content services. With the rise of remote and hybrid work models, he now covers the technologies that enable collaboration, communication and work, and has recently spent a great deal of time exploring AI, generative AI and general AI.

Main image: Marten Newhall | unsplash