Nearly every leader and organization is trying to figure out the same thing: how to generate returns on their AI investments.
Unlike previous technology shifts, AI is being deployed across nearly every role and function before most organizations have learned how to create value with it, which means leaders and teams are relying on trial and error to generate ROI.
Worldwide AI spending is projected to hit $2.5 trillion in 2026, up 44% year over year.
According to Deloitte’s 2026 Global Human Capital Trends research, 59% of organizations are layering AI onto existing systems and processes without changing how humans work. These organizations are 1.6 times more likely to fail to exceed their AI ROI expectations.
While 66% of leaders say that intentional design of human-AI interactions is important to organizational success, only 6% have made the shift to redesigning how humans and AI interact and the roles each play. These firms are 2.5 times more likely to report better financial results and twice as likely to exceed their AI ROI expectations.
Redesigning Work for ROI
Redesigning work for AI ROI means addressing three things:
- What decisions, judgment calls and tradeoffs must humans make, and which can AI handle effectively?
- Which goals, roles and skills streamline the new division of labor to produce better, faster results?
- What is most valuable for humans to do with the time AI frees up?
A European telecom company added an AI assistant to customer service without changing how agents worked, and productivity improved 5%. When it spent 90% of the rollout budget redesigning the work, including new workflows, clear guidance on when an AI answer should be escalated to a human, when to trust the AI and when to override it, and real training on how to make that call, productivity improved 30%. That's six times the improvement from the same AI.
A 2026 field experiment by INSEAD and Harvard Business School gave 515 startups the same AI tools. One group also received case studies on how companies reorganize their workflows, teams and business models around AI. The other got general entrepreneurship workshops. Ten weeks later, the companies that had rethought how work was structured were 18% more likely to have paying customers and generated nearly twice the revenue of their identically equipped peers.
The 6 Human Leadership Skills Required for AI ROI
As AI handles more routine work, human judgment, strategic thinking, communication, trust-building, adaptability and creativity become more important. People will still have to ask better questions, interpret results, guide machines, decide what can be trusted and determine what good work requires.
Every few weeks, AI gets better at producing outputs that sound convincingly correct. And as AI multiplies the volume of communication moving through an organization, human clarity becomes more valuable. Leaders and teams have to spot hallucinated facts, weak reasoning, missing context, generic recommendations and outputs that sound polished but are strategically wrong. Humans must decide what matters, what tradeoffs are acceptable and what standard the work has to meet.
When people can't evaluate what AI gives them, they stop using the skills that matter most. Curiosity fades because an answer is already provided, independent thinking fades because options are already generated, and hard conversations stop happening because AI has already produced a convincing-sounding recommendation. These skills atrophy because AI convinces people they aren't necessary.
AI must be used in ways that improve human thinking instead of replacing it.
The highest-value use case isn't "let AI do the work." It's using AI to accelerate research, compare options, pressure-test assumptions, navigate complexity, generate first drafts and expose blind spots while humans remain responsible for judgment, tradeoffs, ethics, quality and final decisions.
The rubric below maps the six skills required for AI ROI across three performance levels. Use it to assess where your teams are now and where targeted development would produce a higher return.
| Skill | Under-Performers | Most Leaders | High-Performers |
|---|---|---|---|
| Judgment (and knowing when AI is wrong) | Accept AI output as the answer and paste it into deliverables without a critical read. Send AI recommendations without weighing what’s at stake or whether it’s the right call. Avoid asking where the data came from, what the model’s source material is or whether the answer matches what’s true. | Spot-check facts, but most accept what AI gives. They can prompt effectively but can’t tell when the output is confidently wrong. They treat AI output as good enough. | Consistently catch the difference between AI output that’s right and output that just sounds right. They flag hallucinated facts, weak reasoning, missing context and answers that sound polished but are strategically wrong. They separate the calls AI can handle from the calls that require human discretion, and weigh the consequences before acting. |
| Strategic thinking | Ask AI questions rather than thinking for themselves. Build plans based on AI summaries. | Some independent reframing happens, but teams default to the framing AI or the loudest voice produced. People build on what’s there rather than questioning whether the frame is right. | Ask “what isn’t here?” before they accept what is. They connect AI output to business goals, customer signals and the political context AI can’t see. The best decisions usually come from challenging the question, not answering it. |
| Communication | AI drafts go out essentially unedited. The message hits the inbox but the recipient doesn’t act on it because it doesn’t sound like a human wrote it. | People rewrite the AI draft for tone but don’t reshape it for audience, decision or context. The message is technically clear but doesn’t inspire action. | Leaders treat AI drafts as raw material. They cut what doesn’t matter, name the tradeoffs, and tell people what to do with the message. People understand it, trust it and act on it. |
| Trust-building | Teams use AI quietly because they’re afraid of being measured against it, replaced by it or blamed when it gets something wrong. People hide what they’re using AI for. | Leaders encourage AI use but haven’t said what’s safe to fail at, what gets reviewed or what happens when AI produces a problem. People experiment cautiously and don’t surface the errors. | Leaders make the rules of AI use explicit: what to try, what to flag and what is high-stakes enough to require human sign-off. They create the conditions where people surface mistakes early. AI adoption happens in the open. |
| Adaptability | When AI changes a workflow or makes a role redundant, teams wait for someone above them to figure out what to do. Adaptation becomes slower and more difficult. | Teams adjust when the change is overwhelming, but the pivot usually comes after the damage is visible. Leaders manage change in announcements, not in capability. | Leaders help teams keep learning as the tools and tasks change. They reassess plans when new information arrives, communicate the change clearly and shift resources without burning out the people doing the adjusting. |
| Creativity | Teams use AI to remix what they already do. Innovation drops because the easiest path is to ask AI for an answer. | Some creative work continues, but AI’s option set tends to anchor the discussion. People build on what AI generated rather than imagining what AI didn’t. | Leaders use AI to widen the option set, then bring human judgment to which ideas are worth building. They make time for the kind of thinking AI can’t do. They ask better questions and seek what is missing. |
These six skills help your teams become more capable of using AI effectively and producing a return on your AI investments without lowering their standard of thinking, decision-making, communication, or accountability.
Coaching question: How much of your AI budget goes to tools and technical training, and how much goes to redesigning work and developing the human capabilities that determine whether those tools produce value or waste?
Editor's Note: How else can leaders guide employees to make the most of their capacity and skills?
- In AI We Trust. Or Not. The Next Frontier of Work — Don't trust the AI — trust the systems you build around it. Like pilots and doctors, confidence comes from training, checklists and knowing when to stop.
- AI Can Give HR the Answer. Can HR Tell if It's the Right One? — AI can generate HR insights in seconds, but without data literacy, teams won't know when to challenge the output. Upskilling and AI adoption must grow together.
- The Difference Between Sophisticated and Routine AI Users — Heavy AI users aren't the best AI users. KPMG research found what separates the top 5% from everyone else clicking "generate."