Editorial

The AI Heresy: Realizing Content Is the Critical Lever to Success

By Chris Tubb
When enterprise AI fails, it fails confidently. Part one of three on why your content estate is the biggest risk in your AI rollout.

Employees have always found it hard to get answers. How do I submit my expenses for the first time? Who's an expert on German taxation? Where's the best place to stay when visiting the Paris office? Most organizations' search tools never really worked. Employees learned to triangulate: checking three sources, asking a colleague, making an educated guess. The system was unreliable, but everyone knew it was unreliable, and everyone compensated for it.

AI changes that dynamic in a way that should concern anyone responsible for employee processes. When enterprise search fails, it fails visibly, showing no results or obviously wrong results, and employees revert to the next best option. When AI retrieval fails, it fails confidently, producing a fluent, plausible, well-structured answer that is wrong. And in high-impact cases, the employee has no way of knowing that the answer is wrong.

This problem is of a different magnitude from what came before, and our response will need to be correspondingly more comprehensive.

Not All Information Is Equal

To understand where AI works and where it doesn't, it helps to think about enterprise information in three layers.

Diagram: domain awareness across the personal, group and organizational domains.

The first is the personal domain. This is your email, your files, your notes. You are the domain expert here. If AI misreads your own content, you will catch it.

The second is the group domain. This is your team's projects, shared processes and working documents. Again, the people using it are the people who made it. Errors are spotted quickly because context is shared.

The third is the organizational domain. This is everything employees need from other departments, laid out like a series of internal services: HR policies, IT processes, procurement rules, travel guidelines, legal and regulatory requirements. Employees are not the experts here. They cannot evaluate results, because they cannot spot errors in fields outside their expertise. They take the answer on trust.

The third domain is where AI fails, and where failure carries the greatest risk of real-world impact, sunk cost and project collapse.

Why the Organizational Domain Is Genuinely Hard

Organizations are not clean, well-documented machines. They are complex, contested and constantly changing political environments determined by budgets and available resources. Policies vary by geography and business unit. Documents go out of date and are neither updated nor deleted. Multiple versions of the same guidance exist with no indication of which is current. Terminology differs between teams. Authority is often unclear and not recorded explicitly within information systems.

Employees navigate this complexity because it is, in large part, what they are employed to do: judgement, interpretation and knowing who to call when the documented process doesn't quite fit the situation. These aren't broken edge cases. They are the daily reality of organizational life, shaped by what is economically viable to document.

AI systems can only draw on what has been written down. They cannot absorb the informal knowledge, local variation or unresolved contradictions that employees quietly work around every day. When the underlying information is ambiguous or inconsistent, AI does not flag the uncertainty. It synthesizes an answer from whatever it finds and presents it with the same confidence it brings to everything else.

This means we must treat the organizational domain differently and with a lot more care.

The One Variable You Actually Control

Organizations have limited control over the AI model itself. They have limited control over the ingestion pipeline. They have very little visibility into how answers are constructed.

What they do control is the content:

  1. What content does the AI draw upon?
  2. In what form?
  3. With what structure?
  4. From what authoritative source?

Most organizations are dangerously underprepared here. Sprawling content estates, built for human browsing with a we'd-better-keep-it-just-in-case attitude, were never designed to be the information substrate for AI systems answering questions on behalf of employees.

PDFs of PowerPoint slides, policy documents with no clear owner, guidance that hasn't been reviewed in three years, multiple pages covering the same process with no indication of which is current. At first glance this might appear to be a ready corpus for grounding, but it will degrade the quality of every answer the system produces.

The good news is that this is fixable. Not quickly, and not without effort, but the levers are familiar: clear ownership, publishing standards and review cycles. The disciplines that well-run information programs have been practicing for years turn out to be the foundation that AI-ready information management requires. And this time around, we have a willing helper in the AI tools themselves.
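Those levers lend themselves to simple tooling. As a minimal sketch, assuming you can export your content estate as a list of records (the field names here are hypothetical, not from any particular CMS), an audit for the most common problems named above — missing owners, stale reviews, duplicate pages — is a short script:

```python
from datetime import date, timedelta

# Hypothetical content inventory export; field names are illustrative only.
inventory = [
    {"title": "Expenses Policy", "owner": "Finance", "last_reviewed": date(2025, 3, 1)},
    {"title": "Expenses Policy", "owner": None, "last_reviewed": date(2021, 6, 15)},
    {"title": "Travel Guidelines", "owner": "HR", "last_reviewed": date(2020, 1, 10)},
]

REVIEW_LIMIT = timedelta(days=3 * 365)  # flag anything unreviewed for ~3 years

def audit(records, today=None):
    """Return (title, problem) pairs for content unfit to ground an AI system."""
    today = today or date.today()
    issues = []
    seen_titles = set()
    for r in records:
        if r["owner"] is None:
            issues.append((r["title"], "no clear owner"))
        if today - r["last_reviewed"] > REVIEW_LIMIT:
            issues.append((r["title"], "stale: not reviewed in 3+ years"))
        if r["title"] in seen_titles:
            issues.append((r["title"], "duplicate: multiple versions exist"))
        seen_titles.add(r["title"])
    return issues

for title, problem in audit(inventory):
    print(f"{title}: {problem}")
```

Nothing here is sophisticated, and that is the point: the same review-cycle and ownership metadata that human governance has always needed is exactly what lets you decide which documents an AI system should be allowed to draw upon.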


In the next post, we will look at what that means in practice: Which content problems create the highest risk, how to identify where the danger zones are in your own organization, and what good looks like when you are designing information for both human and machine audiences.



About the Author
Chris Tubb

Chris Tubb is an independent digital workplace and intranet consultant based in Brighton, UK, and co-founder of Spark Trajectory, a specialist consultancy helping large organizations with intranet and digital workplace strategy, governance, content, product selection, and employee journey research.

Main image: Unsplash | Adrien Olichon