A pig looking at an empty trough. Or is it the trough of disillusionment for AI? I'll show myself out.
Feature

Workslop Marks a Learning Curve, Not the Limits of AI

5 minute read
By Lance Haun
AI’s awkward tween years are here: polished outputs, messy reality. “Workslop” is spreading, but history shows the dip comes before the real gains.

Every breakthrough technology has its awkward tween stage. Typewriters once slowed offices as clerks learned the keys. PCs sat on desks for years before productivity numbers caught up. 

Today, AI is in that same messy middle.

Generative AI was pitched as an easy productivity booster. But for many organizations, the reality is more friction than results. The deliverables look great, yet managers and peers are spending time cleaning up the mess. 

Researchers at Stanford’s Social Media Lab and BetterUp call this “workslop.” On the surface, the work appears complete. In reality, it lacks depth and nuance and has to be redone. According to the research, roughly 40% of U.S. workers say they’ve received it in the last month, with each instance taking nearly two hours to fix.

If that sounds bad, that’s because it is. But is it really a crisis?

Any new technology comes with a predictable dip in productivity before the gains arrive. We shouldn’t expect AI to be any different. The inflated expectations of “AI everywhere” had to hit reality eventually. The costs we’re paying now are less about broken technology and more about learning how to use AI in a way that actually gets real work done.

Workslop and the Mirage of Good Work

Unlike a typo-riddled email or a clunky spreadsheet, AI output carries a surface credibility that makes its flaws harder to detect. 

The biggest risk is misplaced confidence in these outputs. When a person hands workslop off to colleagues, it carries three costs:

  • Time: Colleagues must wade through confident but shallow output, diagnose the flaws and redo the work.
  • Scale: AI produces low-value content at a speed no human could match, multiplying the cleanup load.
  • Trust: Repeated encounters with workslop erode confidence in both the sender and the tool itself.

The pattern is clear: AI creates output that feels complete, but the hidden deficiencies transfer effort downstream. Even when recipients expect AI-generated output, they find it harder to fix and harder to see any positive value in it. 

Why did we expect AI adoption to be painless in the first place? An MIT study this year found that 95% of corporate AI pilots delivered no ROI, a reflection of how costly early adoption can be.

History Rhymes, but AI Has Quirks

Every major workplace technology has followed a familiar arc: disruption first, productivity later. 

PCs in the 1980s sparked the famous “Solow Paradox.” Economist Robert Solow said that you could “see the computer age everywhere but in the productivity statistics.” Even the internet went through its own dot-com bubble, where hype outpaced real value before settling into business fundamentals.

Sound familiar? 

Economists describe this as the productivity J-curve. Adoption brings a dip, but eventually, the right path forward is found. A study of U.S. manufacturers adopting industrial AI found an initial 1.3 percentage point drop in productivity, with some estimates of even sharper early losses. The rebound came later, after firms invested in new skills, data infrastructure and redesigned processes. Over time, those same firms outperformed peers in both productivity and market share.

Generative AI is now entering its own J-curve moment. Unlike past technologies, though, adoption is happening at breakneck speed. Surveys show that more than 75% of knowledge workers have already tried generative AI, many in just the last year. Rapid AI adoption without new skills, infrastructure and processes seemed destined to create issues. 

That speed compresses the pain, too. The gap between capability and maturity is wider than it was for typewriters or PCs, and the result is workslop. 

Addressing Workslop Without Overreacting

Workslop is frustrating, but it isn’t fatal. The bigger risk is how companies respond. Too many leaders default to extremes, either banning AI outright or mandating its use in every corner of the business.

Both reactions miss the nuance and, ironically, the human component of the equation. The real challenge is guiding people through adoption in ways that make work better, not just faster.

Lead With Purpose, Not Mandates

AI policies often sound like bumper stickers: “Use it everywhere,” or worse, “show me the AI in this project.” Approaches like that do more to create workslop than any deficiency in the technology itself.

Leaders can do better by clarifying when AI adds value and when it doesn’t. 

Guardrails around data privacy and fact-checking matter, but so do incentives. If performance reviews reward visible activity over meaningful results, workslop is inevitable.

Encourage Piloting

Most employees aren’t trying to game the system, but they are unsure how to use these tools well. The researchers describe the difference between “passengers,” who let AI do the work for them, and “pilots,” who stay in control and use the tool as an accelerant.


It’s a shift in both perception and operation. 

Real training should be designed to help people make that shift. Beyond prompt mechanics, employees need practice in spotting weak outputs and knowing when to step away from the tool altogether. 

That’s what keeps AI from becoming a crutch or an accelerator of substandard work.

Redesign the Workflow, Not Just the Toolset

The hardest fix is also the most essential: redesigning how work gets done. Generative AI doesn’t fit neatly into legacy processes, and bolting it onto existing workflows just multiplies the flaws.

McKinsey’s research shows that real productivity gains come only when companies re-engineer tasks for human–machine collaboration. That means mapping where AI should draft, where humans refine and how teams coordinate in between.

It’s unglamorous, boring work. But without it, AI can’t scale beyond experiments.

Fix With Intention and Attention

Workslop makes the costs of inattention visible. The time lost to bad drafts and shallow reports is the price of rushing in without purpose.

The upside is that the problem is fixable. With guardrails, literacy and workflow redesign, AI can move from nuisance to net gain. Organizations that take that path will turn today’s messy middle into tomorrow’s advantage.

The Real Test Is for Leaders

Workslop feels like a crisis because rapid adoption and marketing hype made generative AI seem like it should be plug-and-play. Leaders ask why the results are so bad when ChatGPT, Gemini or Claude can converse at a near-human level. 

History suggests those gains will come. Technology has always had an awkward stretch before becoming indispensable, and generative AI should be no different.

The test for leaders is how they navigate the dip. Pushing through AI's awkward tween stage means setting clear expectations, training people to use AI tools well, and redesigning work so humans and machines complement each other. None of that is quick, but all of it is necessary.


About the Author
Lance Haun

Lance Haun is a leadership and technology columnist for Reworked. He has spent nearly 20 years researching and writing about HR, work and technology.

Main image: Elaine Alex | Unsplash