Feature

AI Agents Are Only as Smart as Your Worst Process Documentation

5 minute read
By David Barry
Behind any AI agent deployment is an implicit question: can you describe, completely and without contradiction, how your processes work?

Every AI agent pitch has a hidden assumption: that organizations know how their processes work. Most don't.

Process documentation is notoriously poor in most enterprises: outdated, incomplete or found only in the heads of people who have since left. That is not a technology problem. It is an organizational one. An agent can only do work it can be taught to do. Teaching requires a clear process to follow. And in most enterprises, that description either doesn't exist, can't be agreed on or bears little resemblance to what actually happens.

Processes in the System of Record vs. in Practice 

ServiceNow believes it has already solved the problem: the agent observes and replicates the years of workflows that have run through the platform — no documentation required.

Practitioners quickly dismantle the premise. The flaw is configuration drift, argues Pavan Madduri, a senior platform engineer at Grainger and a CNCF Golden Kubestronaut whose peer-reviewed IEEE research focuses on governing agentic AI in enterprise IT. An engineer gets paged at 2 a.m., applies a hotfix to stop the bleeding and never updates the ticket.

The system of record shows a clean process. But the actual system runs on undocumented workarounds and institutional memory the platform never captured. "If an AI agent is trained purely by observing the official workflow in the ticketing platform, it's learning a fantasy," Madduri said.

What the system of record captures is the process as intended, not as it evolved, said Ribulo co-founder Mike Miner. Beside every entrenched platform, he continued, a shadow operation has grown — the Excel file built because the system couldn't handle an edge case, the Slack thread that became a de facto approval chain, the knowledge living in one person's head because they were there when the original implementation was botched. The agent has no access to any of it.

"Claiming the record is the process is optimistic," Miner said.

Process Knowledge Is Human, Contested and Contextual

The problem runs deeper than bad data. Trailhead Communications founder and principal Barbara Roos spent a year building a program and change management center of excellence with process documentation at its core. 

What she found was that even among expert practitioners, there was no single right way to do the work, just approaches shaped by experience and judgment that resisted being written down. "Process knowledge is deeply human, contested and contextual," she said, "in ways that don't compress neatly into a system."

If the people who do the work every day can't agree on how it works, the agent has nothing to learn from.

AI Agents Expose What Organizations Don't Know

What happens when an agent meets an undocumented process? Anthony Pinto, founder of business process consultancy Veteran Vectors, built his entire methodology around this reality. 

Pinto describes the first step as the boring method: strip away every tool and platform and write the process down end to end on paper, including the workarounds, the tribal knowledge and every step that exists only in someone's head.

The exercise surfaces what years of running the process never did. "When you have to write a process down well enough for a machine to follow it," Pinto said, "every 'and then we just kind of figure it out' moment surfaces immediately." Not as a failure. As a revelation.

The flip side of this is that what's written down rarely matches how work gets done, said Robbie Ruuskanen, marketing director at ET Group. Agents follow the documented path. Employees navigate by instinct, shortcuts and expectations that were never formalized.

"Processes that seemed fine start breaking when the system tries to run them consistently," Ruuskanen said. "That is usually the first sign that the process was never fully understood in the first place."

The agent is not the solution to the documentation problem. It is the diagnostic that makes it undeniable, Ruuskanen added. This is the gap that poor documentation creates: not just missing instructions, but missing awareness. Organizations often don't know what they don't know until a machine tries to do the work and can't.

Routing to a Human Works to a Point

Workday's approach with its Sana agent layer tries to mitigate the ambiguity. When the agent hits uncertainty, it's programmed to contact the person closest to that process for verification, learning from each answer over time. It is at least an acknowledgment of reality: documentation is poor, gaps will exist and humans will need to fill them. 

For Roos, that honesty matters. Human judgment isn't a fallback. It's a core input, especially when the institutional knowledge never made it into any system.

But honesty doesn't fix the architecture. Routing uncertainty to the person closest to a process assumes the person still works there, has bandwidth and is willing to become the QA layer for an automation project they didn't ask for. 

"At low volume it works," Miner said. "At scale, you've built an agent that generates a new category of interruption for your most knowledgeable people." The bottleneck doesn't disappear. It migrates to exactly the people least available to absorb it and most likely to be carrying undocumented process knowledge in their heads.
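The mechanics of that migrating bottleneck are easy to sketch. The snippet below is a minimal, hypothetical illustration (the process names, owner map and function are invented, not any vendor's implementation): an agent that routes every uncertainty to the person closest to the process ends up concentrating interruptions on the few people who own the most processes.

```python
from collections import Counter
from typing import Optional

# Hypothetical mapping from process to its most knowledgeable owner.
# In practice, the same expert often owns several processes.
PROCESS_OWNERS = {
    "invoice-approval": "alice",
    "vendor-onboarding": "alice",
    "expense-review": "bob",
}

# Tally of how many times each person has been interrupted.
interruptions: Counter = Counter()

def route_uncertainty(process: str) -> Optional[str]:
    """Send an unresolved question to the person closest to the process."""
    owner = PROCESS_OWNERS.get(process)
    if owner is None:
        return None  # No owner on record: the agent is simply stuck.
    interruptions[owner] += 1
    return owner
```

At low volume the counter stays small; at scale, the tally makes the uneven load on a handful of experts visible.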

In cloud infrastructure, where Grainger's Madduri operates, the consequences of that gap are immediate and severe. An agent that guesses how to handle an undocumented workaround doesn't produce a minor error. It can trigger a catastrophic automated outage. His answer is strict policy as code and formal verification: every unknown variable or proposed fix gets an instant evaluation against hardcoded safety boundaries. A violation means a hard block and a human handoff. No improvisation.

"You have to fence the AI in," Madduri said.
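A fence of that kind can be sketched in a few lines. This is a simplified, hypothetical example of the policy-as-code pattern Madduri describes, not his actual system; the operation names and target lists are invented. Every proposed action is evaluated against hardcoded boundaries before execution, and any violation produces a hard block that escalates to a human.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    target: str     # e.g. "prod-db-01"
    operation: str  # e.g. "restart", "delete", "read"

# Hardcoded safety boundaries (hypothetical policy rules).
BLOCKED_OPERATIONS = {"delete", "rotate-credentials"}
PROTECTED_TARGETS = {"prod-db-01", "payment-gateway"}

def evaluate(action: ProposedAction) -> str:
    """Return 'allow' or 'escalate'. Any violation is a hard block:
    the agent never improvises past a boundary."""
    if action.operation in BLOCKED_OPERATIONS:
        return "escalate"
    if action.target in PROTECTED_TARGETS and action.operation != "read":
        return "escalate"
    return "allow"
```

The design choice is that the policy lives outside the agent: the model proposes, but the hardcoded rules decide, so an agent trained on a "fantasy" workflow still cannot act outside the fence.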


Speed Is Different From Progress

A feedback loop can work. Narrow task, consistent exception flagging, human resolution the agent can observe and encode. "That's a feedback loop, and feedback loops work," Miner said.

But that is a long way from deploying an agent into a complex operational environment and trusting it to map the territory as it goes. "That's not learning," Miner continued. "That's hoping."
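The narrow feedback loop Miner endorses has a simple shape. The sketch below is an illustrative toy, not a real agent framework: unrecognized exceptions are flagged to a human exactly once, and the resolution is encoded so the same exception never interrupts anyone again. The function and key names are invented for illustration.

```python
from typing import Callable, Dict

def handle_exception(
    exception_key: str,
    resolutions: Dict[str, str],
    ask_human: Callable[[str], str],
) -> str:
    """Resolve an exception, interrupting a human only the first time.

    resolutions acts as the agent's encoded knowledge: once a human
    answers, the same exception is handled automatically thereafter.
    """
    if exception_key not in resolutions:
        # Flag to a human, then observe and encode the answer.
        resolutions[exception_key] = ask_human(exception_key)
    return resolutions[exception_key]
```

The loop only works because the task is narrow and the exception is identifiable; "hoping" is what happens when `exception_key` can never be matched to a prior resolution.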

The harder truth is that poor process documentation doesn't become less of a problem when you add AI. It becomes more of one. "If you have clean, structured, well-maintained processes, AI makes those faster and easier," Pinto said. 

"If you have chaos, undocumented workarounds, inconsistent data, AI compounds that too. Runs your broken process faster and at higher volume than you ever could manually." The agent doesn't resolve the documentation gap. It scales it.

There's an implicit question facing any organization considering an AI agent deployment, one the vendor will never ask: can you describe, completely and without contradiction, how this process works? Not how it was designed. Not how it appears in the system of record. How it works today, including the workarounds, the exceptions and the institutional knowledge stored in the heads of people who may no longer be there.

If the answer is no, and for most enterprises it is, the agent will find out. And when it does, the gap that was invisible will be running at scale.

Editor's Note: What other questions does the use of AI agents raise?  

About the Author
David Barry

David is a European-based journalist of 35 years who has spent the last 15 following the development of workplace technologies, from the early days of document management through enterprise content management and content services. Now, with the rise of remote and hybrid work models, he covers the evolution of technologies that enable collaboration, communication and work, and has recently spent a great deal of time exploring the far reaches of AI and generative AI.

Main image: Adobe Stock