The AI Revelation: With Provenance, We Need to Govern Meaning as Well as IT Tools

By Chris Tubb
AI exposes a dangerous gap: no one owns the layer of meaning across your organization's information. Provenance can act as a safeguard.

We've been arguing that getting content in order is the single most effective thing organizations can do to improve AI performance. That still holds, but it is not the whole story. Even organizations with genuinely good content governance face a structural problem that no amount of tidy SharePoint sites will solve: nobody owns the layer of meaning across the whole digital workplace. AI makes that gap dangerous in a way it never was before.

The Gap Nobody Owns

Ask who in your organization is accountable for the coherence of information across the digital workplace and you will get a long pause followed by several competing partial answers.

IT owns the infrastructure but not the meaning. Communications owns the senior management narrative but not whether the policy documents behind it are accurate or consistent. Legal owns compliance but not whether guidance is interpreted the same way across different parts of the business or different jurisdictions. HR owns its content but not how it interacts with Finance or Procurement content when an employee asks a question that crosses those boundaries. Business areas own their own domains but have no view of the interactions between them.

No single function is accountable for whether the information estate as a whole is coherent, current and safe.

This was manageable when employees were the filter. A member of staff who found three conflicting answers would use their experience, ask a colleague and apply their judgement. It was an unreliable system, but everyone knew that and compensated accordingly.

AI removes that filter. It synthesizes from whatever it finds and presents the result with the same confidence it brings to everything. The ownership gap that was previously an inconvenience is now a misinformation liability.

Why Agents Make It Worse

If an AI chatbot makes an error, it is recoverable: the employee can ask, "Are you sure? Can you double-check that?" The errors compound when an AI agent chains decisions and takes actions without human involvement. A policy interpreted incorrectly in step one shapes everything that follows. By the time an employee notices something has gone sideways, the agent may have completed several downstream steps on the basis of that first error.

Few organizations currently know which parts of their information estate are safe to ground an agent on and which are not. Those decisions are being made on the basis of technical convenience and the enthusiasm of process owners, not the coherence and authority of the information those agents will draw on.

What Provenance Is

We propose a function called "Provenance" to act as a safeguard that closes this gap. Provenance is a coordination and stewardship role focused on the layer of meaning. It ensures the content an organization's AI systems draw on is authoritative, coherent, correctly bounded and maintained.

Figure: A journey-led operating model spanning programme, plan, procurement and delivery layers, mapping employee journeys from trigger to fulfillment, with provenance as a cross-functional governance layer that keeps AI agents, content and processes authoritative, coherent and safely orchestrated across the organization.

We're not creating a new governance bureaucracy or a team that sits above existing functions and issues instructions. It's a coordinating role that brings the existing accountabilities in IT, HR, Legal, Communications and the business together in an AI context. It defines what content is in scope for AI retrieval, how agents are governed, where human judgement must be preserved, and how drift and contradiction are caught before they become incidents.
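
To make that remit concrete, here is a minimal sketch of what a provenance record for a single content source might capture, written in Python. Everything in it is an illustrative assumption: the field names, labels and example values are invented, not an established schema or product.

    # A minimal sketch of a provenance record for one content source.
    # Every field name and label here is an illustrative assumption.
    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class ProvenanceRecord:
        source: str           # e.g., a policy library or SharePoint site
        owner: str            # the function accountable for this content
        authoritative: bool   # is this the single source of truth?
        ai_retrieval: str     # "in_scope", "excluded" or "human_review"
        last_reviewed: date   # stale content is where drift creeps in
        known_conflicts: list = field(default_factory=list)  # overlaps awaiting arbitration

    record = ProvenanceRecord(
        source="hr-policies/annual-leave",
        owner="HR",
        authoritative=True,
        ai_retrieval="in_scope",
        last_reviewed=date(2024, 11, 1),
    )

Even a register this simple forces the cross-functional questions the role exists to ask: who owns this content, is it current, and should an AI system be allowed to ground on it at all.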

Provenance is distinct from what is currently called "AI governance." The latter typically means per-system controls: access permissions, acceptable use policies and data handling rules applied product by product. It is an IT-level response to an IT-level view of the problem. Governance says nothing about the accuracy of the content a system draws on, the consistency of replies to the same question across different agents, or whether the organization knows which parts of its information estate are safe to use. Provenance operates at a different level, governing the meaning AI tools consume and produce and balancing overlaps, conflicts and risks.

Provenance is based on established disciplines: content governance, information architecture, publishing standards and authority management. What is new is that they need to be exercised across the whole organizational domain simultaneously, by someone with the cross-functional remit to do so.

Who Should Be Accountable

This accountability does not currently exist in most organizations, and creating it requires a deliberate decision.

In practice, Provenance sits most naturally close to the digital workplace or IT function, as it is the only function that has sight of the whole information estate rather than a single domain. But it requires explicit cross-functional authority: the ability to define standards that HR, Legal, Finance and Communications are expected to meet, and the standing to resolve conflicts when they disagree.

For most organizations, the decision is not which team to give this to. It is whether to treat this as a serious organizational design question or to leave it unresolved and absorb the consequences.

Where We Would Start

Organizations don't have to address all of this in one go. Rather than tackling the whole content estate at the start, focus on the employee journeys that carry the highest risk.

Employee services information is the most exposed area in most organizations. HR processes, compliance guidance, travel and expenses, IT support: these tasks affect every employee, and the organization itself is the sole authority on the answers. When something goes wrong, employees cannot spot the error and so act on bad information. The risk is not theoretical.

The most important early question to ask within those journeys is which tasks are genuinely answerable from documented content and which are not. As described in the previous post, anchored tasks (where a specific, authoritative answer exists) can be made safe with the right content investment. Unanchored tasks, where the answer depends on human judgement or simply hasn't been written down, need to be identified and excluded from AI retrieval before they become a source of confident misinformation.

Figure: Anchored versus unanchored tasks. Defined, documented requests such as booking travel are grounded in reliable content, while open-ended questions lack clear ownership or documentation; AI responds confidently to both despite the differing levels of reliability.
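
As a rough illustration of how that exclusion might work in practice, the sketch below filters a retrieval corpus down to anchored tasks before anything is grounded on it. The corpus structure and labels are hypothetical; the point is that the anchored-or-not decision is recorded explicitly, in data, rather than left to the retrieval system to infer.

    # Hypothetical corpus: each task has been classified as anchored or
    # unanchored during a journey review. Tasks and labels are invented.
    corpus = [
        {"task": "book travel", "anchored": True},
        {"task": "claim expenses", "anchored": True},
        {"task": "negotiate a relocation package", "anchored": False},
    ]

    def grounding_set(items):
        """Expose only anchored tasks to AI retrieval; unanchored tasks
        should route to a human rather than be answered confidently."""
        return [item for item in items if item["anchored"]]

    safe = grounding_set(corpus)
    excluded = [item["task"] for item in corpus if not item["anchored"]]
    print(f"Grounding on {len(safe)} tasks; excluded: {excluded}")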

Large organizations also have unresolved variation across geographies and business units. Employees in each region have learned the local reality through experience and colleagues. AI does not learn that way. Surfacing and arbitrating that variation, and deciding what the authoritative answer is in each context, is unglamorous work, but it is foundational to safe AI retrieval.
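
One way to picture the arbitration work is a single authoritative answer per task-and-region pair, with an explicit organization-wide default, as in the sketch below. The policies, keys and amounts are invented purely for illustration.

    # Hypothetical arbitrated answers: one authoritative entry per
    # (task, region) pair, plus an explicit "*" default. All invented.
    ANSWERS = {
        ("expense-approval-limit", "UK"): "Claims over £500 need manager approval.",
        ("expense-approval-limit", "US"): "Claims over $750 need manager approval.",
        ("expense-approval-limit", "*"): "Check with your local finance partner.",
    }

    def resolve(task: str, region: str) -> str:
        """Return the arbitrated answer for a region, falling back to
        the organization-wide default rather than letting an AI guess."""
        return ANSWERS.get((task, region), ANSWERS[(task, "*")])

    print(resolve("expense-approval-limit", "UK"))
    print(resolve("expense-approval-limit", "DE"))  # no DE entry: default applies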

Figure: An employee journey map for travel and expenses, showing how tasks span systems such as the intranet and Concur, with decision points, system interactions and evaluation criteria (authority, consistency, AI readiness) used to assess whether tasks are safe for AI execution and where risk or ambiguity remains.

Don’t wait for perfection before proceeding. Take a clear-eyed view of where the risk surface is, and make a deliberate decision to bring it under management.

The Cost of Waiting

One technology governance principle applies directly here. Systems are easy to govern early on and difficult to govern once embedded. The decisions about what an AI estate is grounded on, how agents are scoped, and what content is treated as authoritative are far easier to make before those systems are in production across the business.

AI pilots are still in the window where the future is malleable. Once they’re embedded in daily workflows, reshaping the underlying layer of meaning becomes a significant and costly undertaking.

The regulatory dimension adds a further consideration. Without this governance in place, regulated organizations that cannot explain how an AI system reached a conclusion, or demonstrate the authority and currency of the underlying content, accumulate audit and liability exposure with every deployment.

Provenance is not a nice-to-have for organizations that care about employee experience, although it will benefit there too. It is the governance function that makes AI deployment defensible and ultimately more valuable.

Editor's Note: Catch up on part one and part two of this three-part series.

About the Author
Chris Tubb

Chris Tubb is an independent digital workplace and intranet consultant based in Brighton, UK, and co-founder of Spark Trajectory, a specialist consultancy helping large organizations with intranet and digital workplace strategy, governance, content, product selection, and employee journey research.

Main image: Jack Finnigan | Unsplash