Editorial

AI Literacy Isn't the Problem. Exclusion Is.

By Sue Duris
One big reason lies behind the failures of many AI initiatives: organizations make assumptions about AI adoption without creating the conditions for it.

I want to start with a question that tends to make leaders uncomfortable: when you rolled out your last AI tool, who decided?

If the answer is some version of "IT, procurement and senior leadership" — with employees looped in somewhere around the training session — you're not alone. But that sequence is precisely why so many AI implementations stall, underperform or quietly die after the pilot phase. And the reason organizations don't see it coming is that they're diagnosing the wrong problem. They call it resistance. It’s actually exclusion.

That distinction matters as organizations work to close the gap between digital literacy — knowing how to use digital tools — and AI literacy, which goes further. AI literacy means understanding how AI systems work, what they can and can't do, how to question their outputs and how to recognize when something has gone wrong. You can't build that kind of literacy as an afterthought. It must be baked in from the start.

Two Deployment Scenarios — And Why Both Get It Wrong

Not all AI lands in the workplace the same way, and the path it takes shapes the ownership problems that follow. Two scenarios have become dominant — and they create opposite but equally serious challenges.

The first is the embedded tool. Microsoft Copilot is the clearest example. When an organization activates Copilot across Microsoft 365, it shows up inside the apps employees already use every day. On the surface, this looks frictionless. Adoption metrics look healthy because the tool is already in the workflow.

But embedded doesn't mean understood. Governance decisions — what data Copilot can access, what it should and shouldn't be used for, how outputs should be verified — are typically made at the IT and procurement level, well above the people who will use the system. The employee who’s being asked to use it didn't shape those decisions. They may not even know they exist. Usage happens. Genuine ownership does not.

According to a 2025 Cornerstone survey, only 44% of U.S. employees have received AI tools and training, and one-third of employees who are actively encouraged to use AI receive no AI training at all. That's not a training gap. That's a strategy gap.

The second scenario is the rogue tool. Some organizations take a more informal approach — encouraging individuals to explore tools like Claude, ChatGPT or Gemini on their own, without any formal integration or governance. The intention is usually to build grassroots familiarity. In practice, it creates fragmentation.

Employees develop their own relationships with AI tools in isolation. There's no shared standard for how prompts are constructed, what data can be shared with external models, or how outputs should be treated. People are feeding sensitive client and business information into external large language models without understanding what that means for data security or confidentiality. What looks like individual empowerment is a governance gap waiting to become a liability.

A recent study by BlackFog showed the growing risks of “Shadow AI” — employees using unapproved AI tools in the workplace. Over half (58%) of survey respondents admitted to bringing their own tools to work — unsanctioned, untracked, outside any governance framework. That's not enthusiasm. That's a vacuum left by leadership.

The common thread in both scenarios? The organization made assumptions about adoption without creating the conditions for it. The tool arrived before the understanding did.

Employees Aren't Just Feedback Providers — They're Strategic Inputs

The most significant mindset shift organizations need to make is around what employee involvement means. The standard model treats it as a change management problem: deploy first, communicate second, train third, gather feedback eventually. Maybe.

What gets lost in that model is that frontline employees and functional teams often know things that procurement teams and technology leaders don't. They know which tasks are genuinely automatable and which just look that way from the outside. They know where the edge cases are. They know which tools would solve a real problem versus a theoretical one. And here's the part that gets consistently overlooked: they may already be using tools that the tech team hasn't considered — and those tools might be exactly the right ones.

Bringing employees in early — not as a box-ticking exercise, but as genuine strategic input — changes the quality of the decisions made upstream. Tech teams that work closely with HR, operations and frontline functions before tool selection is finalized are far more likely to choose tools that map onto actual workflows rather than idealized versions of them. They identify governance risks before they become incidents. They build the internal credibility that makes rollout smoother. And they understand that training will determine whether adoption sticks.

This is not a soft benefit. It's a structural advantage.

A January 2025 McKinsey survey of employees found that nearly half say formal training is the single best way to boost AI adoption. Not communications. Not executive sponsorship. Training. And yet it remains the last thing most organizations invest in.

What Explicit Training Actually Changes

For organizations running Copilot or similar embedded tools, the temptation is to assume the training burden is low because the interface is familiar. This mistake consistently undermines realized value.

Copilot in Word is still Word. But prompting it effectively, knowing when to trust its output, understanding what data it draws from and what that means for confidentiality — those are AI literacy skills. So is knowing when not to use AI. An employee who is digitally fluent but AI-illiterate will underuse the tool, misuse it or avoid it entirely. The familiar interface masks an unfamiliar capability.

Explicit, structured training for embedded tools needs to cover at minimum: what the tool can and cannot do, how to build effective prompts for your specific role, the governance rules that apply and how to critically evaluate what comes back. This is not a half-day onboarding session. It's a foundation that needs reinforcement as the tool evolves.

The organizations seeing the strongest Copilot adoption are not the ones with the fastest deployment timelines. They're the ones that invested in role-specific AI training before and after launch — and that built feedback loops so employees could flag where the tool wasn't working as expected. That loop, in which employee experience shapes ongoing governance, is what genuine ownership looks like.

Building AI Literacy That Sticks

Effective AI upskilling isn't a single event. It’s a structural commitment, and it works best when it's explicitly tied to the governance frameworks guiding AI use.

In practice, that means a few specific things.


Audit before you train. Understand where AI literacy sits across your organization — not just who has used AI tools, but who understands how to evaluate outputs, flag risks and work within governance boundaries. A skills audit reveals readiness and prioritizes where to focus.

Make training role-specific. A finance team member using Copilot to summarize reports needs different AI literacy than a customer service rep using an AI routing tool. Generic training rarely changes behavior.

Connect training to governance explicitly. Employees need to understand not just how to use a tool, but why the guardrails around it exist. Governance that's understood is governance that's followed.

Create feedback channels before launch, not after. According to Salesforce's global research across 14,000 workers, nearly seven in 10 employees have never received any training on how to use generative AI. When organizations couple that training deficit with no formal channel to surface concerns, they should not be surprised when adoption stalls.

And treat AI literacy as ongoing, not one-and-done. These tools are evolving faster than any fixed curriculum can keep pace with. Building habits of critical evaluation matters more than any single training module.

The Bottom Line

The organizations struggling most with AI adoption aren't the ones that chose the wrong tools. They're the ones that chose tools in the wrong order — selecting and deploying before the people who would use them had any meaningful say.

Closing the gap between digital literacy and AI literacy isn't primarily a technical challenge. It's an organizational one. It requires bringing employees into strategy conversations early enough that their input shapes decisions rather than just reacting to them. It requires treating training as a precondition for deployment, not a supplement to it. And it requires recognizing that what gets labeled resistance is often something simpler: people who were never given a reason to care.

Give them one early enough — and a real seat at the table when it matters — and the adoption problem largely solves itself.



About the Author
Sue Duris

Sue Duris, MBA, CCXP, is a strategic customer experience and business transformation leader with more than 15 years of expertise driving growth through customer-centric frameworks. As Principal Consultant at M4 Communications, she specializes in building CX programs from the ground up, transforming how organizations engage with customers while driving retention, advocacy, and revenue growth.
