When you finally become a boss, your subordinates may be AI employees. They’ll work as hard as you want them to, without asking for time off, getting sick or checking out after lunch.
Of course, that’s probably not what you hoped for.
Still, some experts argue that as soon as 2030, most employers will command more AI agents than actual employees. Speaking at CES earlier this month, McKinsey & Company Global Managing Partner Bob Sternfels said that 25,000 of his company’s 60,000 “employees” are already AI agents ready to be deployed at will — up from just 4,000 in 2024.
That’s a big shift. Yet beneath the hype is an uncomfortable reality: management is changing faster than most managers are prepared for.
AI Employees Don’t Wait for Instructions
Until now, AI hasn’t done much without prompting. A human asks a question. A system responds.
For AI agents to succeed, that relationship has to deepen. Managers must train AI employees on goals, constraints, priorities and what “good” actually looks like. Once that foundation is in place, depending on the function, these systems can act on their own.
In other words, they stop being software that helps with work and start becoming software that does work.
Powered by generative AI, machine learning and natural language processing, agents can plan, reason, react and take action autonomously. Researchers describe this as a shift from static tools to agentic systems — software capable of executing multi-step workflows with minimal human intervention.
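The plan-act-observe cycle researchers describe can be sketched in a few lines of Python. This is an illustrative toy, not any vendor's API: `Tool`, `plan_next_step` and the sample tools are assumed names, and the planner here is a hard-coded stand-in for what would actually be an LLM call.

```python
# Minimal sketch of an agentic loop: plan the next step, run a tool,
# observe the result, repeat until done. All names are illustrative
# assumptions, not a real framework's API.

from dataclasses import dataclass
from typing import Callable


@dataclass
class Tool:
    name: str
    run: Callable[[str], str]


def plan_next_step(goal: str, history: list[str]) -> str:
    """Stand-in for an LLM call that picks the next tool.
    A real agent would prompt a model with the goal and history."""
    if not history:
        return "lookup"          # first, gather information
    if "found" in history[-1]:
        return "summarize"       # then act on what was found
    return "done"                # otherwise, stop


def run_agent(goal: str, tools: dict[str, Tool], max_steps: int = 5) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):   # cap steps so the loop always terminates
        step = plan_next_step(goal, history)
        if step == "done" or step not in tools:
            break
        history.append(tools[step].run(goal))
    return history


tools = {
    "lookup": Tool("lookup", lambda g: f"found records for: {g}"),
    "summarize": Tool("summarize", lambda g: f"summary written for: {g}"),
}

log = run_agent("qualify inbound lead", tools)
```

The point of the sketch is the shape of the loop, not the planner: swap the hard-coded `plan_next_step` for a model call and the same structure executes a multi-step workflow with no human prompting between steps.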
Vin Vashishta, founder and CEO of V Squared AI and author of "From Data to Profit," offered this analogy: “A large language model is a brain in a jar. It knows facts, patterns and language. An agent is that same brain, with hands and a plan.”
AI Employees Already at Work
Pretty much anything that happens on a computer is a candidate for AI agent intervention.
“Almost every valuable task can be broken down into three distinct phases: asking the right question, execution and evaluation. For most of human history, human workers have had to do all three. But the defining characteristic of this era is that AI is getting astonishingly good at Part 2: Execution,” wrote Stanford professor Erik Brynjolfsson in his TIME magazine article, “AI Changed Work Forever in 2025.”
Companies are already testing the limits of AI employees. They're deploying agents across customer support, sales operations, internal workflows, legal review and research.
At Mercari, Japan’s largest online marketplace, AI sales development reps answer product questions around the clock, qualify leads, handle objections and move deals forward.
At Allianz, seven integrated agents — Planner, Cyber, Coverage, Weather, Fraud, Payout and Audit — handle entire job functions, reducing claim processing and settlement time by as much as 80%.
At Dun & Bradstreet, five agents support credit risk, supplier evaluation, compliance, sales and marketing workflows.
At Croud, AI agents embedded in Google Workspace conduct deep research and analysis once spread across multiple teams.
At One New Zealand Group, agents help answer customer questions, upgrade plans, create service tickets, monitor power failures, forecast demand and recommend actions during weather-related disruptions.
The pattern is consistent. These systems work independently until coordination is required or bottlenecks emerge. That’s when the human in the loop — often called the manager or orchestrator — steps in.
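That escalation pattern can be reduced to a simple rule: the agent acts alone while it is confident and self-contained, and hands off otherwise. A minimal sketch, with an assumed confidence threshold and illustrative field names:

```python
# Illustrative human-in-the-loop gate: the agent resolves routine
# requests on its own and escalates when confidence drops or the task
# needs cross-team coordination. Threshold and field names are
# assumptions for illustration, not any company's implementation.

CONFIDENCE_THRESHOLD = 0.8


def handle_request(request: dict) -> str:
    confidence = request.get("confidence", 0.0)
    needs_coordination = request.get("involves_other_teams", False)
    if confidence >= CONFIDENCE_THRESHOLD and not needs_coordination:
        return f"agent resolved: {request['task']}"
    return f"escalated to orchestrator: {request['task']}"


routine = handle_request({"task": "reset password", "confidence": 0.95})
tricky = handle_request({"task": "refund dispute", "confidence": 0.4})
```

In practice the gate is where the manager's judgment lives: setting the threshold, deciding which request types count as routine and owning whatever the agent gets wrong.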
AI Agent Adoption Is Broad. Production Is Not.
Nearly nine in 10 companies report using AI in at least one business function. According to a PwC survey of 300 senior executives, 79% say they’re experimenting with agentic AI. Thousands of businesses have launched projects on platforms from Salesforce, Microsoft, Oracle and Google.
Virgin Voyages now uses more than 50 AI agents to generate thousands of hyper-personalized ads and emails simultaneously.
And yet, the adoption numbers tell a more cautious story.
As of mid-2025, only 8.6% of companies had AI agents running in full production. Nearly two-thirds were still stuck in pilot mode.
In other words, companies love the idea of AI employees. They struggle with reality.
If Brynjolfsson's article describes where work is headed, John Thompson, author of "The Path to AI: Artificial General Intelligence: Past, Present and Future," explains why it’s taking longer than expected.
“Agents won’t get too complex until later 2026 or 2027,” Thompson said. “Control and governance are still missing.”
The University of Michigan professor also questioned whether most organizations have documented their workflows in enough detail to build reliable agents in the first place. Even then, companies still face a more fundamental decision: is an agent actually the best way to do this work?
Yes, AI Agents Fail – Sometimes Spectacularly
Many AI systems perform well in controlled environments, then falter in real business conditions — messy data, outages, edge cases and unpredictable users.
Air Canada was ordered to compensate a passenger after its chatbot provided incorrect refund information. Replit’s AI coding agent deleted a production database despite explicit instructions not to do so. McDonald’s pulled automated order-taking from more than 100 drive-thrus after viral clips showed the system repeatedly adding “hundreds of dollars of McNuggets” to orders.
Media organizations and law firms weren’t spared. The Chicago Sun-Times published a summer reading list featuring books that didn't exist. A lawyer representing Mike Lindell admitted to filing a brief generated by AI containing nearly 30 defective citations, including fictional cases.
Experts increasingly agree these failures aren’t about model quality. They’re about poor context management.
As of April 2025, even the most advanced AI agents could complete only about 24% of assigned tasks. They drown in irrelevant information, misunderstand priorities or misuse tools — not because of technological deficiency, but because no one taught them how the work actually gets done.
When humans who understand workflows aren’t involved, systems fail. Employees either sabotage the tools — or abandon them.
When Companies Reverse Course
The backtracking has been awkward.
Forrester found that 55% of employers regret laying off workers in anticipation of AI-driven efficiencies. Gartner reports that half of executives who planned major customer service reductions have since abandoned those plans.
Klarna cut its workforce by 22% in 2024, then reversed course after service quality declined. IBM quietly rehired staff after automation-driven layoffs.
When AI bets fail, companies face a choice: rehire at prior salaries or quietly fill gaps with lower-cost offshore labor. Most choose the latter.
Meanwhile, the layoffs continue. Nearly 55,000 U.S. job cuts in 2025 were attributed to AI restructuring. Amazon eliminated 14,000 corporate roles. Microsoft cut roughly 15,000 jobs. Salesforce CEO Marc Benioff said AI agents helped reduce customer support headcount from 9,000 to 5,000.
Yet only 16% of workers showed high AI readiness in 2025, a figure expected to reach just 25% in 2026.
Companies are deploying AI employees faster than they’re learning how to work with them.
Spectacle Vs. Substance
This isn’t only about cutting headcount, though that’s part of it. The real value emerges when routine execution disappears.
As PwC and others note, the bottleneck isn’t technology. It’s skills, data foundations and clarity about where AI should — and should not — be trusted.
Brynjolfsson calls the emerging human role the “Chief Question Officer,” responsible for orchestrating AI. Microsoft CEO Satya Nadella strikes the same note:
“We have moved past the initial phase of discovery and are entering a phase of widespread diffusion. We are beginning to distinguish between ‘spectacle’ and ‘substance.’ … What matters is not the power of any given model, but how people choose to apply it to achieve their goals.”
The Manager Becomes the Orchestrator
As AI employees become part of daily work, the role of management changes fundamentally.
Workers will need to spot agent mistakes, connect agents into teams and continually redefine what those agents should do next. The job isn’t supervision; it’s orchestration.
That means deciding which problems are worth solving, framing them clearly, evaluating results and taking responsibility when things go wrong.
When execution becomes commoditized, value shifts to judgment.
So, when that long-awaited management role finally arrives, it won’t be about overseeing people doing routine tasks. It will be about managing systems that do the routine work and the people who do everything else.
The AI workforce is already here. The only open question is whether managers will be ready to manage it.
Editor's Note: Catch up on other thoughts on AI agents below:
- The Fake Startup That Exposed the Real Limits of Autonomous Workers — The Carnegie Mellon study confirmed what many suspected: despite promises of world-changing results, agentic AI isn’t ready to run the ship.
- Rethink Your HR Strategy to Include AI Agents — As AI agents make their way into our workforce, HR is called to adapt its workforce strategy.
- What Real AI Agents Are – and Aren't — The rise of agent washing shows why AI fluency matters at the leadership level.