Feature

Why Are Knowledge Workers Still Cleaning Data?

5 minute read

By David Barry
AI was meant to free workers. Instead, weak foundations force them to babysit algorithms and audit outputs.

Knowledge workers spend 30% to 50% of their time as "data janitors" — cleaning spreadsheets, reformatting information and hunting for data across disconnected systems. Steve Bevilacqua, principal consultant for Cella by Randstad Digital, puts it bluntly: we are overpaying experts to be organizers.

Organizations are pouring billions of dollars into AI augmentation tools with a simple promise: free employees from tedious tasks so they can focus on high-value work. However, the vision of AI-powered productivity isn't just falling short. In many cases, it's making things worse. The result is more than frustration: lost innovation, mounting burnout and competitive disadvantage as talent is spent on tasks that were supposed to disappear.

Building AI on Weak Foundations

AI can't compensate for fundamentally broken data infrastructure. "AI is not magic; it cannot interpret sloppy data," Bevilacqua said. "When disorganized, 'dirty' data is dumped into these systems, the AI fails to return quality results, often resulting in perceptual errors."

Without structured, relevant and current knowledge content, investment in copilot bots for human workers falls flat, said Jason Valdina, senior director of engagement channel strategy at Verint.

The data upkeep and content management practices of knowledge workers have become more important and strategic than ever. Most of the magic in AI augmentation happens downstream of well-groomed data: successful use of generative AI relies on sound knowledge content structure and governance. While AI can highlight topical trends and generate content from prompts, the priority for now is prepping data and content for front- and back-office workers, which leaves less time for making the knowledge workers themselves more efficient.

AI tools organize, classify and summarize data, but can't compensate for poor data foundations, said Jon Friskics, a senior technical author at Pluralsight. If the data being captured is incomplete or disorganized, workflows end up on shaky ground. AI tools run into the same problems that humans do.

Trading One Tedium for Another

Many AI tools create new categories of work rather than eliminating old ones, adding what Bevilacqua described as a new layer of verification labor.

"Because AI cannot turn garbage into gold, it will produce plausible-sounding inaccuracies when fed bad data," Bevilacqua said. "This forces the knowledge worker to double-check every output, cross-reference every citation and debug the logic."

The "human in the loop" has become a human babysitting a confused algorithm. The promised productivity gains evaporate as workers trade spreadsheet cleaning for algorithm auditing — a lateral move dressed up as progress.

Faisal Masud, president of HP Digital Services, has a name for this: the "AI productivity paradox." Technologies promise to make employees more productive but, given the state of existing workplace IT infrastructure, end up creating more friction and burnout. "AI can't deliver on its promise if that underlying IT infrastructure is broken," he explained.

The real value is in building solutions that use AI to manage the complexity of the digital ecosystem and fix the foundation first. But few organizations want to hear this. Foundation work is unglamorous. It doesn't generate headlines or impress boards. So, the cycle continues.

The distinction between helpful and harmful AI comes down to where it intervenes in the workflow. AI tools are most effective at eliminating low-value work when they sit at the earliest parts of a workflow: tools that ingest, map and validate data, or that continuously sync new information as it arrives, Friskics said. These upstream interventions prevent downstream chaos.
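As a concrete illustration of an upstream intervention, the sketch below validates records at the point of ingestion and quarantines anything malformed before it can pollute a downstream AI pipeline. This is a minimal, hypothetical Python example; the field names and rules are invented for illustration, not drawn from any vendor's tooling.

```python
# Minimal sketch of an upstream validation step: records are checked at
# ingestion, and malformed ones are quarantined before they reach any
# downstream AI workflow. Field names and rules here are hypothetical.
from datetime import datetime

REQUIRED_FIELDS = {"customer_id", "updated_at", "status"}
VALID_STATUSES = {"active", "churned", "prospect"}

def validate(record: dict) -> list:
    """Return a list of problems; an empty list means the record is clean."""
    problems = [f"missing field: {f}" for f in REQUIRED_FIELDS - record.keys()]
    if "status" in record and record["status"] not in VALID_STATUSES:
        problems.append(f"unknown status: {record['status']!r}")
    if "updated_at" in record:
        try:
            datetime.fromisoformat(str(record["updated_at"]))
        except ValueError:
            problems.append("updated_at is not an ISO-8601 timestamp")
    return problems

def ingest(records: list) -> tuple:
    """Split a batch into clean records and quarantined (record, problems) pairs."""
    clean, quarantined = [], []
    for record in records:
        problems = validate(record)
        if problems:
            quarantined.append((record, problems))
        else:
            clean.append(record)
    return clean, quarantined

clean, quarantined = ingest([
    {"customer_id": 1, "updated_at": "2024-05-01T09:30:00", "status": "active"},
    {"customer_id": 2, "status": "unknown"},  # missing timestamp, bad status
])
```

The point of a gate like this is that bad records never become the AI's problem, or the knowledge worker's.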

But AI that requires extensive oversight, manual correction or new review steps adds tasks. The question isn't whether to use AI. It's whether the foundation supports it.

Knowledge Work Needs a Redesign

The problem isn't primarily technical. It's how organizations are implementing AI.

Bevilacqua draws a parallel to offshoring decisions: Organizations implement AI without grasping its constraints. Vendor hype promoted general intelligence while failing to explain what the technology could actually do. Buyers treated AI like a magic wand rather than a complex machine.

"You cannot automate what you do not understand, and right now, there are teams that simply do not understand their own data or the limitations of AI," Bevilacqua warned.

The disconnect between vendor promises and workplace reality is widening. Early adopters who rushed to implement AI are now quietly wrestling with the consequences: bloated tech stacks, confused employees and productivity metrics that haven't budged, or have even declined. Some organizations report that AI initiatives have actually increased employee turnover, as frustrated knowledge workers leave for environments where they can do real work.

Masud frames the deeper problem: AI will never have skin in the game. Unlike a human, it provides insights without career risk or fear of failure. He advocates a RACI (Responsible, Accountable, Consulted, Informed) framework in which organizations strategically integrate AI alongside human oversight. "AI is best positioned in the Consulted role, providing deep analysis to enrich human judgment, while humans remain Responsible and Accountable for the final outcome," he said.

Without this structure, AI outputs risk being inaccurate, irrelevant or even misleading "slop," especially when dealing with ambiguous, unstructured or domain-specific data. The sweet spot is augmenting human judgment with AI, not replacing it entirely.
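To make that division of labor concrete, here is a minimal, hypothetical sketch of "AI as Consulted, humans as Responsible and Accountable" expressed in code. The workflow, names and placeholder model call are illustrative assumptions, not an implementation Masud describes.

```python
# Hypothetical sketch of a RACI-style review loop: the AI is Consulted
# (it drafts an analysis), while a named human remains Responsible for
# the review and Accountable for the final sign-off.
from dataclasses import dataclass

@dataclass
class Decision:
    question: str
    ai_analysis: str = ""     # Consulted: an input, never the final word
    reviewer_notes: str = ""  # Responsible: the human who checks the work
    approved_by: str = ""     # Accountable: a name attached to the outcome

def consult_ai(question: str) -> str:
    """Placeholder for a model call; in practice this would hit an LLM API."""
    return f"Draft analysis for: {question}"

def finalize(decision: Decision, approver: str, notes: str) -> Decision:
    """A decision only becomes final when a named human signs off on it."""
    if not notes:
        raise ValueError("human review notes are required before sign-off")
    decision.reviewer_notes = notes
    decision.approved_by = approver
    return decision

d = Decision(question="Should we renew vendor contract X?")
d.ai_analysis = consult_ai(d.question)  # the AI is Consulted, nothing more
d = finalize(d, approver="J. Kim",
             notes="Checked the AI's claims against the contract itself.")
```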

Even when AI removes grunt work, employees often stay busy because jobs, incentives and metrics aren't redesigned. The saved time gets filled with new volume or "make-work."

Knowledge workers reclaim deep focus only when managers are held accountable and incentives reward strategic outcomes instead of mere activity. Valdina takes a different view, suggesting the real shift will come from management trust in AI — allowing automated workflows to handle more tasks with less oversight. The tension between these approaches reveals an unresolved question about the path forward.

What Nobody Wants to Hear About AI

Most enterprises struggle with measurement, mistaking adoption for impact. "Augmentation is truly working only when the value delivered per human hour rises sharply, while quality and happiness stay the same or improve," Masud explained. Few organizations track this.

When measurement is done right, results can be striking. One prominent health insurance company implemented an AI-powered tool in its contact center to speed transfers between agents, Valdina said. A 38-second reduction in average handling time across 30,000 agents created $22 million in agent capacity.
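For a sense of how a 38-second saving compounds at that scale, here is a rough back-of-envelope calculation. The call volume and hourly cost are pure assumptions chosen for illustration; the $22 million figure was reported by Valdina, not derived this way.

```python
# Back-of-envelope: how a 38-second cut in average handling time scales.
# Call volume and hourly cost are ASSUMED values, for illustration only.
SECONDS_SAVED_PER_CALL = 38
AGENTS = 30_000
CALLS_PER_AGENT_PER_DAY = 10   # assumption
WORKDAYS_PER_YEAR = 250        # assumption
LOADED_COST_PER_HOUR = 30.0    # assumption, USD

hours_saved = (SECONDS_SAVED_PER_CALL * CALLS_PER_AGENT_PER_DAY
               * WORKDAYS_PER_YEAR * AGENTS) / 3600
capacity_value = hours_saved * LOADED_COST_PER_HOUR
print(f"{hours_saved:,.0f} agent-hours/year, roughly ${capacity_value:,.0f} in capacity")
# With these assumptions: ~791,667 agent-hours/year, roughly $23.8M,
# in the same ballpark as the reported $22 million.
```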

But measurement alone isn't enough. Establishing metrics and adding observability before deployment are important for understanding how AI tools affect data, workflows and outcomes. Augmentation works only when team members evaluate AI outputs against agreed-upon criteria.

The path forward requires unglamorous foundational work. "Establishing data standards, improving governance, setting quality thresholds and aligning on how data will support decisions will greatly reduce prep time," Friskics said.

When teams know which data must be perfect and which can be merely directional, AI and automation can handle routine steps with more confidence. Stronger foundations and clearer decision rules free knowledge workers to focus on strategic thinking.
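One lightweight way to encode "which data must be perfect and which can be merely directional" is an explicit per-field quality policy that gates automation. The sketch below is hypothetical; the field names and policy labels are invented for illustration.

```python
# Hypothetical per-field quality policy: "strict" fields block automation
# when validation fails; "directional" fields only log a warning.
QUALITY_POLICY = {
    "invoice_amount": "strict",         # must be perfect before automation runs
    "customer_segment": "directional",  # good enough for trend analysis
}

def may_proceed(failed_fields: set) -> bool:
    """Return True if automation may continue despite the failed checks."""
    proceed = True
    for field in failed_fields:
        policy = QUALITY_POLICY.get(field, "strict")  # unknown fields: strict
        if policy == "strict":
            print(f"blocking: strict field {field!r} failed validation")
            proceed = False
        else:
            print(f"warning: directional field {field!r} failed validation")
    return proceed

may_proceed({"customer_segment"})  # True: warn and continue
may_proceed({"invoice_amount"})    # False: stop the automated run
```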

This is the work that doesn't get funded. It doesn't generate enthusiasm in all-hands meetings. It requires admitting that years of accumulated technical debt and poor data practices must be unwound before the AI investment pays off. Most leadership teams would rather buy another tool than face this reality.

The flip from 80% data prep to 80% strategic thinking will happen only when AI owns data prep and organizations redesign jobs, incentives and metrics to reward strategic thinking instead of activity, Masud said.

Until leadership accepts that AI requires fixing what's broken underneath, not papering over it, knowledge workers will remain stuck cleaning data for tools that were supposed to set them free. The question isn't whether AI delivers on its promise. It's whether organizations are willing to do the hard, expensive, unglamorous work required to let it.

About the Author
David Barry

David is a European-based journalist of 35 years who has spent the last 15 following the development of workplace technologies, from the early days of document management, enterprise content management and content services. Now, with the rise of new remote and hybrid work models, he covers the evolution of technologies that enable collaboration, communications and work, and has recently spent a great deal of time exploring the far reaches of AI, generative AI and general AI.

Main image: Mennie | Unsplash