Editorial

Digital Literacy vs. AI Literacy: Why Your Organization Needs to Know the Difference

By Sharon O'Dea
Digital literacy and AI literacy aren't the same thing. Yet organizations are building one and calling it the other.

Ninety percent of workers now use AI tools at work. Most of them don't trust what those tools produce.

And more than half of them aren't telling their employers they're using them at all.

According to MIT's 2025 Project NANDA report, only 40% of organizations have purchased official AI subscriptions — yet employees at over 90% of companies are using personal AI tools for work anyway. They're not waiting for the Copilot rollout. They're using their own accounts, on their own devices, quietly getting on with it.

The organizational response to this, naturally, has been: more training.

Book the Copilot workshops. Roll out the prompt engineering course. Tick the completion boxes and declare the workforce AI-ready. It's tidy, it's measurable and it almost entirely misses the point.

Because what most organizations are building is digital literacy — the ability to operate a new set of tools — and calling it AI literacy. They are not the same thing. And in that gap is the real risk.

Digital Literacy: Necessary, But Insufficient

Digital literacy is the foundation. It's the ability to navigate a technology-mediated workplace — to know which tool does what, to communicate across channels with some degree of competence, to find information without having to ask someone who sits three desks away. Most organizations have been building it for years, even if they haven't been calling it that.

It matters. But it's table stakes. And here's the awkward irony: just as organizations are waking up to its importance, the tools themselves are getting easier to use. Interfaces are more intuitive. You don't need to understand how something works to make it work. The technical bar is dropping — even as the number of tools keeps rising.

Which means digital literacy is increasingly about breadth rather than depth. Managing the sprawl. Useful, certainly. But it has almost nothing to do with what organizations actually need from their people right now.

AI Literacy: A Different Kind of Capability

AI can help all of us cut through that complexity — if you know what you're doing.

And that's the problem. Because knowing what you're doing with AI has very little to do with knowing which buttons to press. The tools are, by design, easy to use. What's hard — what requires genuine capability — is knowing when to trust them, when to push back, and when to close the tab and do it yourself.

That's AI literacy. Not tool proficiency, but judgement.

It means understanding what AI can't see: context, consequence, the thing that wasn't in the prompt. It means being able to interrogate an output rather than just accept it. It means retaining accountability for decisions that AI has had a hand in shaping — which, in an era of one-click content generation and automated comms, is increasingly all of them.

Remember the finding this article opened with, that workers don't trust AI's output? It turns out that mistrust doesn't translate into action. Researchers call the phenomenon automation bias: a growing body of research shows that people defer more readily to advice when they know it comes from an algorithm than from a human, in spite of their stated skepticism about AI.

Ninety-two percent of people don't check their AI outputs for accuracy. It's not carelessness — it's that AI presents information with a confidence that makes checking feel redundant. That's a problem at the best of times. When those outputs are driving decisions, it's a serious one.

When Digital Literacy Masquerades as AI Literacy

The conflation isn't just a semantic problem. It has consequences.

When organizations treat AI literacy as a tool-proficiency challenge, they design the wrong interventions. They run adoption campaigns instead of building critical capability. They measure usage — logins, prompts submitted, time saved — and conclude that the job is done. Meanwhile their people are generating content they can't evaluate, automating decisions they don't fully understand and moving faster than their judgement can keep up with.

For internal communicators, the stakes are particularly high. IC teams are increasingly being asked to lead on AI adoption across their organizations — to model good practice, set the tone, develop guidance for others. That's a significant responsibility to hand to people who've been trained to use a tool, not to think with one.

And the irony is that the skills AI literacy actually demands — critical thinking, contextual judgement, knowing when not to automate — are precisely the skills that unreflective AI use tends to erode. The more you accept outputs unchecked, the less practiced you become at questioning them.

What AI Literacy Looks Like in Practice

The good news is that AI literacy is buildable. But before reaching for a capability framework, it's worth being realistic about where most people are starting from.

Consider Excel. It's been a workplace staple for four decades. And yet research suggests that around 90% of usage involves only basic functions — sums, sorts, simple formulas. The advanced features exist. Most people never touch them. Not because they're incapable, but because nobody gave them a good reason to, or a path to get there.

AI is going to follow the same pattern — unless organizations are deliberate about it. The goal shouldn't be to make everyone an expert. It should be to move people forward from wherever they are, and to make sure the basics are genuinely understood rather than just assumed.


A simple four-level framework from our forthcoming book, "Digital Communications at Work," helps. At the foundation level, that means establishing comfort and skepticism in equal measure — understanding what AI can do, using it with oversight and knowing it can be confidently, fluently wrong. This is where most people need to start, and where most organizations need to spend more time.

From there, practitioners develop the habit of evaluating outputs before acting on them. They make deliberate choices about when AI earns its place in a workflow, and when it really doesn't.

Advanced users go further — designing AI-assisted processes, keeping human judgement in the loop by design rather than accident and helping others build the same muscle.

Expert level — shaping organizational AI strategy, governing risk, setting standards — matters enormously. But not everyone needs to get there, and that's fine. Most people don't need to be advanced Excel users either; they do need to do more than reach for =SUM. The same logic applies here. The question isn't whether your whole organization can reach Expert — it's whether your foundation level is actually solid. In most organizations, it isn’t even close.

Start With the Question, Not the Tool

The adoption/trust paradox isn't going away on its own. Organizations will keep rolling out tools, completion rates will keep getting reported upwards, and the gap between using AI and thinking with it will only widen.

The fix isn't complicated. It just requires asking a different question. Not "how do we get our people using AI?" — most of them already are. But: "do they know what to do when it gets it wrong?" And has anyone told them that's part of their job?

AI literacy isn't a more advanced form of digital literacy. It's a different capability entirely — built on judgement, skepticism and the confidence to push back on a tool that sounds far more certain than it has any right to be.

In an age of one-click content generation, the difference between a good output and a convincing one matters enormously. Right now, most people can't tell them apart. That's not a skills gap. That's a judgement gap. And you can't close it with a Copilot workshop.

Editor's Note: What other capabilities must we protect as we continue to adopt AI? 


About the Author
Sharon O'Dea

Sharon O’Dea is an award-winning expert on the digital workplace and the future of work, founder of Lithos Partners, and one of the brains behind the Digital Workplace Experience Study (DWXS). Organizations Sharon has collaborated with include the University of Cambridge, HSBC, SEFE Energy, the University of Oxford, A&O Shearman, Standard Chartered Bank, Shell, Barnardo’s, the UK Houses of Parliament and the UK government.

Main image: Andrew S. | unsplash