For years, the pitch on generative AI in the workplace came with a promise: it would absorb the tedious tasks so people could focus on work that matters.
Most of us filed that alongside other technology promises that sounded good in theory but failed in reality. Collaboration tools created more work and, sometimes, more meetings. Automation platforms required a full-time administrator (plus training and onboarding) just to function. Knowledge management systems became graveyards of outdated documents nobody maintained.
New research suggests this particular promise might be different. Not universally, not without cost and not in the ways most companies are currently acting on it. But the core claim — that AI frees people to spend more time on the work they were hired to do — is showing up in real data.
The more important finding isn't that the promise is real, though. It's who benefits most when it is. That's the part executives should pay closer attention to, because it goes against the workforce decisions many of them are already making.
How AI Changes the Nature of Work
A study from MIT researchers tracked nearly 187,000 software developers on GitHub for a year before and two years after the June 2022 launch of GitHub Copilot, a generative AI coding assistant. The sample size and method matter. These are working developers, on a real platform, making real decisions about how to spend their time. The research design allowed the team to compare the behavior of developers who received free access to Copilot with that of those who didn't, before and after the tool launched.
The headline result is enticing: developers with access to Copilot increased the share of their time spent on core coding by 12.4% and reduced the time they spent on project management tasks by nearly 25%. The work people were hired and paid to do expanded. The overhead that accumulates around technical work — reviewing support requests, managing issues, coordinating across contributors — contracted.
That's the AI promise, more or less, delivered.
There's a secondary story here that's more surprising: the workers who benefited most from AI assistance were not the senior engineers. They were the junior ones.
Less experienced developers saw the largest increases in time devoted to core coding work — consistent with earlier research on call-center workers, where employees with less tenure gained the most from AI tools compared with their more experienced colleagues. Copilot users also increased their cumulative exposure to new programming languages by nearly 22% relative to those without access. The study showed that Copilot helped the people who were still in the process of becoming experienced.
The Collaboration Decline Deserves More Attention
But there are two catches embedded in this research. The first is inside the data itself.
While developers gained time on core work, peer collaboration dropped by nearly 80%. Developers simply stopped asking colleagues for help. The informal exchanges that typically accompany debugging, architectural decisions and figuring out how an unfamiliar codebase works largely disappeared along with the project management overhead.
Developers got faster, but they also got more isolated.
If no one has to ask anyone for advice anymore, is that good or bad? There's a case on each side. Avoiding unnecessary back-and-forth has genuine value. Time spent resolving issues that better code would have prevented is time that could have gone elsewhere.
The research found that Copilot helped developers produce more accurate code, which reduced the need for peer review and issue resolution. Some of the collaboration decline reflects fewer problems, not just fewer conversations.
But human interaction at work is not just a mechanism for fixing code mistakes. It's how institutional knowledge moves between people. It's how newer employees learn not just what to do but how to think about what to do. It's how teams develop shared judgment that makes them more than the sum of individual contributors.
When AI handles enough of the execution that nobody needs to ask anyone for help anymore, organizations erode the connective tissue that makes teams functional over time. Most organizations haven't seriously reckoned with what that costs, or what it demands in terms of deliberate design to offset.
How Companies Respond to AI's Potential, for Better and Worse
The second catch sits outside the research. It's how executives are translating AI's potential into workforce decisions.
The strategic logic for pulling back on entry-level hiring looks smart from a distance. If AI produces first drafts, generates and reviews code, summarizes documents and handles the research and coordination tasks that junior employees have traditionally performed, why staff those functions with people? That logic is already showing up in practice: hiring freezes, shrinking intern programs, requisitions that closed and never reopened.
Research tracking U.S. payroll data found that workers between 22 and 25 years old in AI-exposed occupations have already seen meaningful employment declines, while senior employment remained stable.
The MIT research suggests this response gets the lesson backwards, with a significant long-term cost.
Junior employees are not simply a cost center for routine tasks that AI now absorbs. They are the people building the organizational knowledge that companies will depend on five and 10 years from now.
They are learning which problems are worth solving, when to push through a challenge versus escalate it, how to navigate the gap between what a tool produces and what a situation requires and how to communicate technical constraints to non-technical stakeholders. None of that arrives by accident. It accumulates through repetition, proximity to more experienced colleagues and low-stakes mistakes made inside a real organization over time.
What the Copilot research shows is that AI tools, used well, compress that development curve. They don't replace it. An organization that cuts entry-level roles doesn't capture the productivity benefit of AI-accelerated junior talent. It eliminates the pipeline and books the cost savings. Those are different outcomes. MIT's Frank Nagle stated last year that treating the replacement of junior employees with AI as a cost-cutting strategy is "short-term thinking at the expense of investing in the future."
The timing compounds the problem. Companies are restructuring their workforces around AI's anticipated productivity gains before those gains have materialized in any systematic way. A 2025 OECD report observed that despite years of generative AI adoption, the impact remains essentially invisible in economy-wide productivity statistics. Realizing those gains requires skilled people, appropriate applications and the kind of complementary organizational investment that most companies are still working through.
The decisions being made now about entry-level hiring will shape organizational capability for years. The evidence base for those decisions is considerably thinner than the confidence behind them.
AI's Promise Doesn't Change Everything
So what should workforce leaders take from this new data?
Developers are spending more time on the work they are paid to do and less time on what gets in the way. That's a good thing. But the version of AI adoption that produces that outcome doesn't involve eliminating the roles where the research shows AI delivers the most developmental value.
The collaboration loss inside the MIT findings points toward a need for deliberate organizational design. It means preserving the human interaction that builds judgment and institutional knowledge even as AI handles more of the execution work. Left unaddressed, the efficiency gains in individual output may come at the cost of the team-level capacity that makes those individuals effective in the first place.
The entry-level picture points toward a different kind of discipline: resisting the pressure to optimize today's org chart against a future that hasn't arrived yet, and recognizing that the pipeline of talent being built right now is not a line item that shows up in next quarter's results. Pushing back on shareholders who demand AI-driven workforce cuts won't be easy.
The question for executives is whether the organizational decisions being made around this progress — about who gets hired, who gets access to tools and what gets cut in the name of near-term efficiency — build toward the future the research points to, or dismantle the conditions that make it possible.
Editor's Note: How else is AI reshaping our workplaces?
- Should We Call AI a Coworker? — Calling AI a "coworker" builds trust it can't earn. It won't push back, explain itself or take the blame. It's a tool — treat it like one.
- Context Is the New AI Infrastructure — AI agents can access data, but not your decision-making context. Context graphs capture how changes happen. Here's how to get started building your own.
- How Managers Weigh Employee AI Use in Performance Reviews — AI is reshaping performance reviews — but measuring usage alone isn't enough. Experts say the real measure is judgment, not frequency.