
Why Mass AI Deployments Are Productivity Theater

By David Barry
Enterprise AI adoption is exploding, but most deployments remain add-ons. Why AI isn’t transforming work, and what it takes to make it infrastructure.

When Cognizant rolled out Claude AI to 350,000 employees, one of the largest enterprise AI deployments to date, it was more than just another vendor win. The move came amid a broader wave of massive enterprise AI contracts: OpenAI said it now serves more than 7 million workplace seats, with ChatGPT Enterprise growing ninefold year-over-year, while Microsoft claims 70% of Fortune 500 companies have "adopted" Copilot across its 430 million commercial seats. 

Across consulting, finance and enterprise services, these deployments raise a bigger question: Is generative AI now a permanent layer of the workplace, or simply the latest tool bolted onto already complex digital environments?

The answer is unambiguous: AI remains firmly in add-on territory. Enterprises are distributing tools without rethinking how work gets done, leaving AI stranded outside the flow of work.

The gap between deployment numbers and transformation is wider than most organizations recognize.

AI Distribution Is Not AI Transformation

The problem starts with a misunderstanding about what makes technology core to an organization. "Scaling access is just a distribution play; it's not the same thing as integration," said Mridul Nagpal, CFO at Krazimo. Hand AI tools to thousands of employees and you still see minimal adoption, inconsistent usage or worse, according to Elaine Palome, regional head of human resources, Americas at Axis Communications. People work around tools that disrupt rather than enhance their work.

Microsoft's Copilot figures illustrate this gap. Despite the claimed 70% Fortune 500 adoption, most organizations are running pilots rather than enterprise-wide deployments, staging rollouts while they work through governance challenges that must be resolved before broader use. Having seats available and having AI integrated are not the same thing.

Scale compounds the problem. Deploying AI widely does not automatically make it core, said Christoph Fleischmann, founder and CEO of Arthur Technologies. If anything, it deepens silos.

AI's biggest weakness is missing context. It operates in its own bubble, disconnected from the information and interaction patterns that define how work happens. Giving it to more people doesn't resolve this disconnect.

What It Takes for AI to Become Infrastructure

The threshold for becoming infrastructure is clear. AI becomes core only when the path of least resistance includes it inside the systems where work already happens: IDEs, CRMs, helpdesks and wikis, said Yuriy Zaremba, CEO and co-founder at AiSDR. If people have to leave their workflow and "go to the bot," it stays a side tool, used sporadically, not structurally. 

Transformation requires two shifts, according to Nagpal: Individual "hacks" must converge into repeatable, shared habits across teams, and those habits must be incorporated into the tools people use. Anything less is a side activity.

Zaremba frames the distinction more starkly. AI becomes infrastructure when it stops being a destination and becomes the default substrate inside the work itself: where tickets are written, code is reviewed, knowledge is retrieved and decisions are documented. If it stays separate, it just exacerbates existing chaos: faster drafts, faster misunderstandings, more tools and more context switching.

This requires process redesign, not deployment. Organizations need to rethink how work gets done, Palome said. What steps can be eliminated? What decisions can be automated? What handoffs can be streamlined? Adding AI to existing processes without changing them creates overhead, not integration. Most enterprises haven't done this work.

The Chat Island Problem

The technical architecture of most enterprise AI deployments shows why transformation claims ring hollow. Separate chat destinations create what Zaremba calls "chat islands": useful answers that don't translate into completed work. Fleischmann sees this pattern everywhere. AI remains layered on top, or even off to the side, disconnected from collaboration, decision-making and the formats where work happens. Individual chat interfaces are a poor fit for collective work.

Real embedded AI drafts tickets in the helpdesk, updates fields in the CRM and generates pull request descriptions in the IDE. It turns AI from a conversational layer into an execution layer.

The reality falls short. Early deployments default to chat interfaces and side-car AI copilots, which Nagpal acknowledged are useful. They lower the barrier to entry and let people move fast without heavy IT integration. But they're not sustainable at scale. Jumping between windows is too cognitively demanding for long-term use.
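To make the "execution layer" idea concrete, consider a minimal sketch, purely illustrative and not drawn from any vendor named in this article, of a helpdesk webhook that asks a model to draft a ticket and writes the result straight into the system of record. Every endpoint, field and function name here is a hypothetical placeholder.

    import requests

    # Hypothetical internal endpoints; a real deployment would use its
    # helpdesk's and model provider's actual APIs and authentication.
    LLM_ENDPOINT = "https://llm.internal.example/v1/complete"
    HELPDESK_API = "https://helpdesk.example/api/tickets"

    def handle_inbound_email(subject: str, body: str) -> dict:
        # Ask the model to draft the ticket, rather than answer in a
        # separate chat window the agent would have to visit.
        draft = requests.post(LLM_ENDPOINT, json={
            "prompt": f"Summarize this support email as a ticket.\n"
                      f"Subject: {subject}\nBody: {body}",
        }, timeout=30).json()

        # Create the ticket where agents already work, flagged for human
        # review. The output lands in the system of record, not on a chat
        # island.
        return requests.post(HELPDESK_API, json={
            "title": subject,
            "description": draft.get("text", body),
            "source": "ai-draft",
        }, timeout=30).json()

The point isn't the few lines of glue code; it's that the AI call fires inside an existing workflow event, so nobody has to leave the helpdesk to benefit from it.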

The difference between experimentation and operation comes down to expectations. "Experiment" is fine early, but at scale ambiguity creates uneven norms, Zaremba said. Some teams over-trust AI, while others ignore it. Mature rollouts define where AI is expected, where it's optional and where it's prohibited, so quality and accountability don't drift. Without that clarity, AI remains experimental. That's the definition of an add-on.

Nobody Owns AI Outcomes

The governance vacuum reveals how far enterprises remain from treating AI as infrastructure. "This is THE unresolved governance question organizations struggle with," Palome said. Ownership is fragmented. 

Shared ownership sounds reasonable until you examine what it means in practice. IT should own reliability and governance, the business should own outcomes and value creation, and individual employees should own how AI is applied in their specific contexts, Fleischmann agreed.

But that's theory. In practice, the owner changes as the technology matures, according to Nagpal. Individuals own it at the start, figuring out how to save an hour on specific tasks. Business units take over in the middle phase, owning KPIs such as cycle times or customer satisfaction because AI becomes a formal part of their process. At scale, IT and platform teams own reliability, cost and safety.

The problem is that IT shouldn't be responsible for business outcomes unless it has built a product to solve a business problem. Otherwise, IT owns the "plumbing" and the business owns the "value." Without explicit ownership structures, responsibility evaporates; when any piece is missing, you get risk, stagnation or blame shifting.

Successful organizations adopt a shared ownership model with clear delineation and executive leadership accountability across all three layers, Palome said. Most enterprises haven't reached this level of maturity. Infrastructure has clear ownership. Add-ons don't.

Faster AI Deployment Doesn't Mean Better

The metrics enterprises use to declare AI success suggest they don't understand what they've deployed. "Speeding up a broken process just creates more junk, faster," Nagpal warned. Speed masks damage.

The right signals are end-to-end performance metrics: time-to-completion, rework rates, defect and incident rates, customer outcomes and knowledge reuse, according to Zaremba. If output volume rises but rework and escalation rise too, AI didn't transform work; it amplified inefficiency already in the system.
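As a purely illustrative sketch of what those end-to-end signals might look like in practice (the record fields and function are hypothetical, not from any source quoted here), a team could compute rework rate and time-to-completion from a log of closed tickets:

    from dataclasses import dataclass
    from datetime import datetime
    from statistics import median

    @dataclass
    class Ticket:
        opened: datetime
        closed: datetime
        reworked: bool  # reopened or corrected after being marked done

    def end_to_end_signals(tickets: list[Ticket]) -> dict:
        # Raw throughput can rise while the quality signals worsen.
        hours = [(t.closed - t.opened).total_seconds() / 3600
                 for t in tickets]
        return {
            "throughput": len(tickets),
            "median_hours_to_completion": median(hours),
            "rework_rate": sum(t.reworked for t in tickets) / len(tickets),
        }

If throughput climbs quarter over quarter while the rework rate climbs with it, the deployment is accelerating a broken process, which is exactly the failure mode Nagpal describes.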

Improvement shows up as better quality and consistency, not velocity. Companies should measure the quality of data and decisions generated by AI, indicators of innovation, skills development and strategic value creation, Palome said. The key is effectiveness, not efficiency. 

Fleischmann set the bar higher still: AI improves work when it improves decision quality, reduces rework and leads to clearer ownership and accountability, not just faster output. If AI increases confidence, coherence and follow-through across teams, it's improving work. But the bar for this is high, and most deployments don't clear it.

Ironically, tools meant to reduce cognitive overload create it, resulting in productivity declines, Palome said. She recommends an AI "tool audit" that examines time spent learning vs. doing, the cognitive burden of verification and error correction, and the degree of workflow integration vs. disruption. Few organizations do anything to stop AI from adding to cognitive and tool overload.

The situation is likely to get noisier before it gets better, Fleischmann added. The core issue isn't just overload, but that distinguishing insightful contributions from low-effort AI-generated content is becoming harder. "If we aren't careful, we're just giving people five different 'assistants' to manage," Nagpal said. 

The solution is consolidating entry points. Instead of generic chat for everything, organizations should aim for one primary interface per persona. If AI isn't making work feel "lighter," it's adding to the noise.

The spread of what Fleischmann calls "work slop" compounds the problem: half-baked ideas that sound convincing but lack substance, because AI doesn't understand business context. Employees are being pushed to use AI without guidance on its limitations or its second-order effects on productivity. Organizations should deploy fewer, better-integrated tools rather than a new AI tool for each process, Palome said, but most are doing the opposite.

The path from add-on to infrastructure requires spotting successful "scattered experiments" and turning them into standardized playbooks, according to Nagpal. That transformation requires embedded AI that disappears into existing tools, explicit ownership models spanning IT and business functions, and measurement frameworks that prioritize quality over speed. None of these conditions exist at scale today.

Enterprises celebrating seven-figure seat deployments as transformation have simply distributed an add-on more widely. They've confused access with adoption, adoption with integration and integration with infrastructure. What they have is chaos at scale.

About the Author
David Barry

David is a European-based journalist of 35 years who has spent the last 15 covering workplace technologies, from the early days of document management through enterprise content management and content services. With the rise of remote and hybrid work models, he now covers the evolution of technologies that enable collaboration, communication and work, and has recently spent a great deal of time exploring the far reaches of AI, generative AI and artificial general intelligence.

Main image: İsmail Efe Top | Unsplash