
AI Prompt Training Is a Start. Now Make It Strategic

By Lance Haun
Companies investing in AI training should be commended. But if that training focuses on the mechanics without strategy, it delivers limited long-term value.

Imagine handing every employee a high-performance drill, spending 30 minutes teaching them how to change the bit, and then wondering why nothing is getting built.

That's roughly where we are with enterprise AI training.

Citi recently announced mandatory prompt engineering training for its workforce: 30 minutes of basics aimed at beginners, with little new for employees already using AI with a moderate level of success.

While this may sound belittling, it’s not. This is a genuine investment because of the scale of Citi’s business, and it’s in line with the companies investing the most in AI training. JPMorgan Chase has rolled out similar programs. So has Mastercard. The American Bankers Association runs workshops on prompt engineering for banking professionals. Outside of the financial services sector, the federal government has trained tens of thousands of employees through the General Services Administration.

These programs matter. They signal commitment, create baseline AI literacy and normalize AI use. Early evidence suggests training delivers measurable results when done well. 

But here's the problem: Most of these programs just teach mechanics without strategy. They answer "How do I write a better prompt?" but skip the harder questions. Which tasks should AI handle? Where do humans need to stay in control? What does success look like, and how will we measure it?

That's not just a Citi problem. It's an industry-wide gap, and it explains why billions in AI spending still hasn't moved the productivity needle.

The AI Spending Problem No One Wants to Name

Companies are pouring money into AI at near arms-race levels. A 2024 BCG report found that enterprise AI spending continues to accelerate even as organizations struggle to demonstrate clear return on investment. The gap between investment and measurable outcomes keeps widening. Adoption stays low. Productivity gains remain elusive.

According to industry research from late 2024, nearly 40% of enterprises report they don't have broad enough AI expertise to meet organizational goals. That's a strategy problem, not a prompting problem.

Employees struggle because they don't know when to use AI, where it fits in their actual workflows or why it matters to the work they're measured on. Prompt training without that context is teaching someone to use a drill without telling them what they're building.

This is why getting the strategy piece right matters. Without it, hype won't turn into reality. But how do organizations get started?

A Framework or Just Better Questions?

If you're ready to think structurally about where AI fits, the Human Agency Scale, which I wrote about this summer, offers one research-backed approach to AI strategy.

Developed through Stanford research in 2024, it maps five levels of human-AI collaboration. At one end is full automation where AI handles repetitive work like data transcription. At the other are human-essential tasks such as negotiations, where AI stays minimal. In the middle are equal partnerships for tasks such as strategic planning where both human and AI bring something the other can't.

The insight from Stanford's research is that organizations and employees need the right balance for each task type. The framework forces the conversation away from "should we use AI?" and toward "where does AI make sense, and where does it create more problems than it solves?" 

Some organizations are ready to adopt something like this. If you are, do it. If you're not there yet, start with these diagnostic questions before you train anyone:

  • Ask your teams: "Name three tasks where AI would measurably improve your work." If they can't answer, you haven't given them clarity.
  • Ask managers: "Which bottlenecks could AI address, and which ones need human judgment?" Managers are your adoption layer. If they can't articulate the use case, employees won't discover it on their own.
  • Ask leadership: "What does success look like, and how will we measure it?" Completion rates for training modules don't count. Real success looks like faster cycle times, measurable quality improvements, documented time savings or demonstrated cost reductions in specific areas.

If people across these levels can't answer clearly, the best prompt training in the world won't help. You need clarity first.

Why Some AI Training Programs Succeed While Others Fail

Research shows what happens when organizations roll out AI training without this foundation. Programs are often too narrow, focused on specific tools rather than adaptable skills. They're delivered in short bursts, resulting in poor retention and limited long-term value.

Programs that succeed tie AI to real work. The American Bankers Association's multi-week workshops work because they embed AI into banking practices such as fraud prevention and credit risk assessment. Government programs that show measurable impact do so because they start with the work, not the tool.

Programs that fail share common patterns: no clarity on which workflows AI should improve, no governance around acceptable use, no measurement framework for productivity, no incentives for adoption and cultures that punish time spent learning, failing and innovating.

Once you understand where AI fits, next steps become obvious. Here's what separates organizations that get value from their AI investments from those that just generate press releases:

  • Start with problems, not tools. Identify three to five high-value use cases tied to measurable business outcomes, then train people to execute them. Don't pick use cases because they're trendy. Pick them because they solve a documented problem your team already understands.
  • Integrate, don't bolt on. Embed AI into existing workflows instead of treating it as a separate tool. If people have to leave their normal work environment to use AI, they won't. Integration beats novelty every time.
  • Train the coaches, not just the players. Train managers to coach adoption, not just employees to prompt. Managers determine whether people have permission to spend time learning, whether experiments are celebrated or punished and whether new approaches get reinforced or abandoned.
  • Build feedback loops. Measure what's working and eliminate what isn't. Track actual usage and outcomes. Be willing to cut tools that aren't earning their cost.
  • Make it continuous. AI capabilities are evolving too fast for static training. Effective upskilling should be tailored, regularly updated and connected to real work, not delivered in one-off modules.

None of these steps requires a massive budget or a complete organizational overhaul. What they require is intentionality about turning training investment into strategic advantage.

The Real Test of AI Productivity

Citi is doing what most companies aren't: putting real money and organizational muscle behind AI readiness. That deserves recognition. In an environment where most organizations are still debating whether to invest in training at all or just throw people into AI tools, committing resources and making it mandatory is a meaningful step.


The real opportunity, though, lies in connecting that AI training investment to strategic clarity. Teaching people to write better prompts matters when they know which problems to solve, which workflows to target and how success gets measured. Without that context, you're teaching navigation without providing a destination.

The training is a foundation. What matters is what you build on it. Organizations that use training as a starting point to ask harder questions about where AI fits, where it doesn't and how to measure the difference are the ones that turn hype into reality. Those that think prompt training alone gets them to a productive, AI-enabled workforce are going to be disappointed.

Editor's Note: What other considerations should influence AI training?

About the Author
Lance Haun

Lance Haun is a leadership and technology columnist for Reworked. He has spent nearly 20 years researching and writing about HR, work and technology.

Main image: sigmund | unsplash