Editorial

The AI Epiphany: It’s a Content Problem Hiding in Plain Sight

4 minute read
By Chris Tubb
Workplace questions can be divided into two buckets: those with a set answer and those without. Knowing which is which helps enormously with AI efforts.

If your AI agent projects are failing, the problem often isn't the AI; it's the content you're feeding it. Digital workplace teams have the skills to fix this, but they need to reframe familiar disciplines as risk management, not housekeeping.

Two Kinds of Questions

A seemingly simple distinction has significant consequences once you take it seriously.

Some employee questions have a specific, documentable answer. "How do I book international travel?" has a policy, a booking tool and an approval chain. The answer exists. The task is anchored against something knowable. Whether AI can retrieve that answer reliably depends almost entirely on whether the content is any good.

Other questions have no documented answer and may require human judgement applied to the specific situation. "Who is accountable for data governance in our Greek operations?" is not a question a content library can resolve. We can't document every aspect of our businesses before going bust. In any event, the answer might only become apparent in the moment someone sits down and works it out. The task is unanchored: there is no fixed target to retrieve.

Diagram contrasting anchored vs. unanchored AI tasks: a defined request like booking a trip maps to a knowable, documented answer, while an open-ended question about responsibility has no documented target. AI responds with equal confidence to both.

AI cannot tell the difference. It approaches both with equal confidence and produces a fluent, well-structured response in either case. That is potentially useful for anchored tasks. For unanchored ones, it is hallucination-as-a-service, and employees reading the response have no way of knowing which kind of answer they have just received.

Unanchored tasks are far more common than organizations expect. Mapping which of your tasks are anchored and which are unanchored, and instructing the agent not to attempt the latter, is one of the more useful things you can do before expanding AI retrieval.
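That instruction can be enforced outside the model itself. The sketch below is a minimal, hypothetical illustration: the task registry and substring matching are stand-ins for whatever intent classification or routing your agent platform actually provides.

```python
# Sketch: only route a question to retrieval when it matches a curated,
# anchored task. Everything else escalates to a human rather than letting
# the model improvise an answer. The registry below is a made-up example.

ANCHORED_TASKS = {
    "book international travel": "travel-policy-v3",
    "submit an expense claim": "expense-process-guide",
}

def route(question: str) -> str:
    """Return the source document to retrieve from, or 'escalate_to_human'."""
    q = question.lower()
    for task, source in ANCHORED_TASKS.items():
        if task in q:
            return source
    # No documented answer exists for this question: do not answer it.
    return "escalate_to_human"
```

The point is the shape of the control, not the matching logic: the list of anchored tasks is an editorial artifact the digital workplace team owns, and the agent's scope is defined by it.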

What Makes Content Dangerous Under AI

Anchored tasks can still go badly wrong. The content they depend on is frequently unfit for the purpose at hand. 

Outdated content is the obvious failure mode, but not the most dangerous one. When rival documents cover the same process with no clear indication of which is authoritative, AI synthesizes a response that is neither clearly right nor clearly wrong — but is presented confidently. Geographic and business unit variation that has never been explicitly disambiguated produces the same problem at scale: a global organization with 14 different expense processes gets an answer that is correct for one business unit and wrong for everyone else, with nothing to signal the difference.

Formatting is easy to overlook, with PDFs of PowerPoint slides being the worst offender. The ingestion process that feeds these systems flattens content into plain text, destroying the spatial relationships that give slides their meaning. Boxes and arrows become a sequence of nonsensical fragments. The content was never designed to be read this way, and AI cannot reconstruct what it cannot see.
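The flattening problem can be made concrete with a toy example. The slide structure below is invented for illustration; real extractors vary, but most emit text in reading order and discard position and connectors entirely.

```python
# Sketch: a decision slide represented as positioned text boxes.
# The arrows linking the boxes exist only as drawn shapes, so they
# never appear in the extracted text at all.
slide = [
    {"text": "Travel request", "x": 50, "y": 10},
    {"text": "Approved: book via portal", "x": 10, "y": 60},
    {"text": "Rejected: escalate to manager", "x": 90, "y": 60},
]

# Typical extraction: sort by position, join the words, drop the layout.
flattened = " ".join(
    box["text"] for box in sorted(slide, key=lambda b: (b["y"], b["x"]))
)
```

The words survive, but the branching logic that lived in the boxes and arrows is gone, and no downstream model can reconstruct it.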

The One Lever You Control

Outside of agent-level prompts, organizations have limited control over the model, the pipeline or how answers are constructed. Those decisions sit with the tech giants, not the digital workplace team.

What does sit with that team is the content: what goes into the system, in what form, with what structure, from what authoritative source.

The difference between "posted," "saved" and "published" is a fundamental change of purpose: published content is crafted deliberately, for a specific audience, to a defined standard.

Content governance, content design, clear ownership, publishing standards, and publisher training and coaching have long been treated as nice-to-haves. AI reframes these disciplines as the primary controls for making retrieval trustworthy. The digital workplace field has been advocating for them for years. The AI deployment timeline is rarely calibrated to the content improvement timeline, but the lever is the same.

What Good Looks Like

The goal is not a catalogue of perfect information. It is an information estate that is trustworthy in the areas that matter most, with a clear and maintained boundary between what is in scope for AI retrieval and what is not.

In practice, this means identifying the tasks where a wrong answer has real operational, legal or safety consequences. For most organizations, that points to HR processes, compliance and regulatory guidance, and anything that varies across geographies or business units. Understanding this risk surface and placing it under active management is an essential control.

Good in this case looks like: a single authoritative source for each answer; clear ownership with a named review cycle; language that is explicit rather than hedged; structure that survives ingestion. It also means routing employees to a human when the question has no reliable documented answer. Escalation to a human doesn’t mean the AI implementation failed. It’s a sign that it has been designed around the ambiguity and complexity of how organizations really work.
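These standards can also be checked mechanically before content enters retrieval scope. A minimal sketch, assuming a hypothetical record schema (field names like `owner` and `next_review` are invented for illustration, not a real CMS API):

```python
# Sketch: a pre-ingestion check that a content record meets the
# publishing standards above before it is exposed to AI retrieval.
from datetime import date

def retrieval_problems(record: dict) -> list[str]:
    """Return the reasons a record is NOT fit for retrieval (empty = fit)."""
    problems = []
    if not record.get("owner"):
        problems.append("no named owner")
    if record.get("next_review", date.min) < date.today():
        problems.append("review cycle lapsed")
    if not record.get("authoritative"):
        problems.append("not marked as the single authoritative source")
    if record.get("format") in {"pdf_of_slides", "scanned_pdf"}:
        problems.append("format unlikely to survive ingestion")
    return problems
```

Anything that fails the check stays out of scope until a publisher fixes it, which is exactly the maintained boundary described above.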

What Comes Next

In the third post, I make the case for provenance as a named discipline that goes beyond per-system "AI governance," explain why no existing function can own this problem alone, and describe what a coordinated approach to the epistemic layer of the digital workplace looks like.


Editor's Note: Catch up on the first installment of this three-part series and other takes on readying your digital estate for AI:


About the Author
Chris Tubb

Chris Tubb is an independent digital workplace and intranet consultant based in Brighton, UK, and co-founder of Spark Trajectory, a specialist consultancy helping large organizations with intranet and digital workplace strategy, governance, content, product selection, and employee journey research.

Main image: Simona Sergi | Unsplash