Somewhere in your organization right now, a budget conversation is happening. A person is making the case for a new AI platform, a new AI workflow or a new way of automating a task that used to require people. The ROI slide seems compelling, and the efficiency gains look real.
Is anyone asking what happens to the people whose work is about to change? Probably not. That question sounds like a values or morals question, a philosophical debate that slows things down.
That framing, worker protection as a brake on productivity, dominates AI conversations in many organizations. It shows up in how investments get approved, how implementations get scoped and how the people most affected by these decisions end up finding out about them.
Taking care of your workers because it's the right thing to do has long been framed as a human-first stance that might not always make business sense.
The business case was always there. It just wasn't coming from sources that the people running the numbers would take seriously. Labor unions said it. Worker advocates said it. Progressive policy researchers said it.
But what happens when a think tank with big corporate sponsors raises a red flag? People should probably pay attention.
An Unlikely Institution as Pro-Worker Advocate
The Brookings Institution occupies a specific lane in American policy research. It advocates for broad regulation, which reads as center-left (at least in the U.S.). But spend time reading its labor research, and a different picture emerges.
When cities pushed for a $15 minimum wage, Brookings warned it would hurt the workers it was meant to help. That may have been a legitimate economic argument, but it gave cover to people who opposed any minimum wage increase because they didn't get beyond the headline.
When researchers looked at unionization, Brookings highlighted a single study showing unions reduced employment and earnings, without giving comparable weight to the much larger body of research showing union workers earn more.
When states moved to reclassify gig workers as employees, Brookings called it problematic and pointed toward portable benefits instead, which happened to align with what platform companies were lobbying for.
None of those pieces was fabricated or factually wrong. They were defensible economic arguments that consistently landed on the side of Brookings' biggest corporate sponsors. Amazon, Microsoft, Google, BlackRock, Goldman Sachs, JPMorgan Chase and Meta numbered among its leading corporate contributors in 2024.
Which is why it is remarkable that Brookings published a paper calling for a more pro-worker approach to AI rollout and implementation.
Something changed. Part of it might be who is getting hurt.
Previous automation waves hit factory workers, logistics staff and low-wage service roles. But a 2024 Brookings report found that more than 30% of workers now face significant disruption to at least half their tasks. The workers most at risk are administrative coordinators, mid-level finance analysts, HR generalists, legal support staff and knowledge workers across professional services. Many of them are women. Many of them work inside the kinds of organizations that fund think tanks and fill the conference rooms where AI strategy gets set.
When the people getting hurt are visible to the people doing the research, the economic case for protecting them gets easier to make and easier to fund. That's just how institutions work.
Why Your Organization Is Probably Automating for the Wrong Reasons
The same dynamic operates inside companies. Incentive structures shape decisions, and most organizations making AI investment calls right now are operating inside an incentive structure they haven't examined.
Think about how your company accounts for buying an AI platform vs. hiring someone. The software gets expensed or depreciated; the employee comes with payroll taxes and benefits on top of wages. That asymmetry is baked into the U.S. tax code, and it means every time your finance team runs a build-vs.-hire comparison, the math already favors the technology before anyone has asked which choice produces more value over time.
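The tilt is easy to see in a toy comparison. The sketch below uses invented figures and a deliberately simplified tax model (a $100,000 platform license vs. a $100,000 salary, a flat 21% corporate rate, a 7.65% employer payroll tax); the `after_tax_cost` helper and every number in it are assumptions for illustration, not a model of any real company's books.

```python
# Hypothetical build-vs.-hire comparison. All rates and figures are
# invented for this sketch; real tax treatment is far more complicated.

CORPORATE_RATE = 0.21   # flat federal corporate rate (assumed)
PAYROLL_RATE = 0.0765   # employer-side payroll tax (assumed)


def after_tax_cost(pre_tax_cost: float, corporate_rate: float) -> float:
    """Cost after the deduction's tax shield, assuming it is fully deductible."""
    tax_shield = pre_tax_cost * corporate_rate
    return pre_tax_cost - tax_shield


# AI platform: a $100k license, fully expensed in year one.
software = after_tax_cost(100_000, CORPORATE_RATE)

# Employee: a $100k salary plus employer payroll tax. Both are deductible,
# but the payroll tax is an extra cost the software never carries.
payroll_tax = 100_000 * PAYROLL_RATE
employee = after_tax_cost(100_000 + payroll_tax, CORPORATE_RATE)

print(f"after-tax software cost: ${software:,.0f}")
print(f"after-tax employee cost: ${employee:,.0f}")
```

Even at an identical sticker price, the employee line comes out thousands of dollars higher every year, before anyone has modeled the value the two choices produce. That gap, not a judgment about productivity, is what a naive finance comparison picks up.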
Three MIT economists, Daron Acemoglu, David Autor and Simon Johnson, dug into how much that imbalance affects where AI investment is going. Their February 2026 paper, Building Pro-worker Artificial Intelligence, finds that the current push toward automation has less to do with automation being the most productive path and more to do with it being the easiest one to justify on paper.
In short, the incentive structure is doing a lot of the deciding.
The policy solutions these economists propose are worth knowing, not because your organization controls the tax code, but because they illustrate how deep the distortion runs.
- Rebalance the tax treatment of labor and capital so the build-vs.-hire math reflects actual long-run value rather than accounting rules written before generative AI existed.
- Create legal frameworks to prevent expertise theft, where AI systems get trained on the knowledge of workers who then get displaced by the tools built from their expertise.
- Mandate worker voice in AI governance, meaning the people whose work is changing have a seat in decisions about how it changes, not just a town hall after the rollout.
None of that is happening inside most organizations, which means the distortion the economists are trying to correct at the policy level is one you can start correcting at the organizational level, if you're willing to ask harder questions before the approval goes through.
That context matters when you look back at the last AI investment your organization approved. The ROI case probably had real numbers in it. But those numbers were generated inside a system with a thumb on the scale.
A Framework for Figuring Out What Your AI Investment Does
The paper also provides something more practical: a way to categorize what any AI investment does to the work and the people doing it.
The economists identify five types, but three matter most for how you evaluate what's sitting in your AI implementation plan.
- Labor-augmenting technology makes workers more capable and builds on itself over time. A tool that helps a financial analyst model scenarios faster doesn't replace the analyst's judgment, but frees them to develop more of it. The analyst gets sharper. The organization gets more out of them next year than it did this year. The value compounds because humans using the technology keep getting better at using it.
- Automating technology replaces a task, delivers a one-time efficiency gain and removes the task from human hands. That sounds good until you account for what else disappears. The person who did that task also knew the exceptions, understood the context and carried institutional knowledge that surrounded the work. Automation delivers the efficiency gain upfront and books the knowledge loss later, usually when something breaks and nobody can explain why.
- Task-creating technology is the rarest category and the most valuable. These tools generate new kinds of work for humans to do, expanding demand for skill rather than cutting into existing skill. Spreadsheets didn't replace bookkeepers. They created categories of financial analysis. That's the category most associated with long-run economic growth, and it's the category current AI investment is least focused on.
Most AI spending right now falls into the automating category, driven by tax incentives and by short-term efficiency logic that looks good in theory.
For any AI investment, the question worth asking is: Does this make our people more capable over time, or does it replace what they do? If your team can't answer that, the business case hasn't been made.
The People Who Know Where This Goes Wrong Aren’t in the Meeting
Research on AI adoption also shows that the biggest productivity gains land at large organizations with proprietary data, deep technical infrastructure and the resources to customize implementation. If you're a mid-market company without those capabilities, the ROI projections in the pitch deck were built on somebody else's numbers.
That gap shows up eventually, typically six to 12 months after launch, when adoption is lower than projected and nobody knows why.
The people who could have explained it in advance are the ones whose jobs changed. The administrative coordinator who knows which exceptions the automated process can't handle. The finance analyst who understands why the data looks the way it does. The HR generalist who knows which managers will resist and why. They weren't consulted when the investment got approved, and in most organizations, they aren't consulted during implementation either.
Getting those people involved during planning, not just rollout, is where the business case and the human case for pro-worker AI land on the same answer.
You get better adoption, fewer surprises and a workforce that understands and trusts what got built. Organizations that treat worker involvement as a box to check during change management will spend the next few years fixing problems that didn't have to exist.
Why Is Pro-Worker AI a Business Strategy?
Because the alternative has costs that don't show up in the original plan.
Automating without understanding what you're removing costs you institutional knowledge. Deploying without involving the workers closest to the work costs you implementation quality. Approving investments inside an incentive structure without knowing its tax distortions costs you the ability to evaluate whether you're building long-run capability or booking short-run efficiency.
When you don't acknowledge those costs, you get short-sighted decisions like Block laying off nearly half of its employees while raking in billions in profit off their work. It's easy to tell employees to use AI and raise the performance bar. It's another thing to make that a sustainable business practice.
Organizations that treat worker protection as a constraint on their AI strategy, or an either/or proposition, will make expensive decisions that look good in strategy meetings or for short-term shareholders but underperform in practice.
The ones that ask what this does to their people's capabilities, who needs to be in the room, and whether they're building or just cutting will end up with better tools, better adoption and people who actually use what got built.
Editor's Note: How should leaders be weighing the human-AI balance?
- Designing Human and Technical Architectures for AI-Powered Collaboration — When your data can talk, so can your people.
- Waking Up to Our Power: Digital + Human Capabilities for a Future-Ready Workforce — Beyond technical know-how, the future-ready worker needs a new blend of human and digital skills — anchored in awareness.
- From Tool to Teammate: How AI Is Rewiring People Strategy and What HR Can Do to Adjust — HR leaders see AI transforming work beyond automation — reshaping teams, culture and people strategy. The future is “human-engaged” work, not human-replaced.