Nearly Half of Employees Hide Their AI Use. Here's What to Do About It

By Karl Chan

When employees are uncertain about AI policies, they take the safer route rather than crossing an invisible line: They keep quiet.

When generative AI first arrived in the workplace, business leaders invested billions and expectations skyrocketed. Fast forward to today, and the conversation has shifted. Despite record spending, uneven productivity returns have led Gartner to place generative AI in the “trough of disillusionment.”

Our own research adds another dimension to this workplace challenge. In a survey of 1,000 Americans, nearly half admitted to hiding their use of AI at work. Even more striking, one in seven actively avoid telling their managers, often out of fear of being seen as lazy, unoriginal or non-compliant. 

This culture of secrecy helps explain why adoption feels stalled and why the expected productivity boost hasn’t materialized. If employees are keeping their use of AI tools underground, leaders can’t measure impact, share best practices, or provide the guardrails necessary to ensure safe and effective use. For executives looking to unlock the potential of AI, this presents a profound cultural challenge.

Why Employees Stay Silent About AI Use

The instinct to hide AI use is rooted in workplace psychology. Many employees worry they’ll be seen as cutting corners if they admit to using ChatGPT or Gemini to draft emails, brainstorm ideas or summarize meeting notes. Others hesitate because policies are vague or inconsistent: Our survey found that only one in three employees believe AI use in their workplace is well-regulated. Meanwhile, one in 10 even go so far as to describe their workplace as “the Wild West,” with no rules at all around AI use.

This uncertainty breeds fear. Rather than risk crossing an invisible line, employees take the safer route: They keep quiet. But silence comes at a cost. When half of the workforce is experimenting in isolation, organizations miss opportunities to harness collective learning, strengthen compliance and ensure the technology is being applied responsibly.

Overcoming the Cultural Barrier to AI ROI

Technology doesn’t create value in a vacuum; it requires adoption at scale, integration into workflows, and trust and transparency along the way. This mirrors what we’ve seen with past technology waves. In the early days of cloud computing, shadow IT became a major problem, with employees adopting unapproved apps to get work done faster and exposing companies to security and compliance risks. Today, we’re seeing the same behavior with AI: shadow AI use that bypasses policies and prevents organizations from getting the full benefit of their investments.

If employees are hiding AI use, how can leaders bring it into the open? Here are three critical steps:

1. Normalize the Conversation

Managers set the tone for whether AI feels like a sanctioned tool or a workplace taboo. If employees believe that using AI will be held against them, they’ll keep quiet. Leaders must actively normalize conversations about AI by asking teams how they’re using it, encouraging knowledge-sharing and framing responsible use as an asset. Treating AI like any other professional capability (something to practice, refine and share) helps transform it from a hidden habit into a source of collective strength.

2. Close the Policy Gap

Policies shouldn’t live in HR portals that people rarely read. They need to be accessible, actionable and consistently reinforced. Clear guardrails help employees understand what’s encouraged, what’s off-limits and where they can experiment.

Our survey shows that younger employees are far more likely to believe their workplace has clear AI policies: 42% of Gen Z and 41% of millennials say so, compared with just 13% of Baby Boomers. That generational divide underscores the importance of communicating policies in ways that reach everyone, regardless of age or familiarity with the technology.

3. Turn Secrecy Into Advocacy

Every workplace has early adopters who are already using AI to work faster and smarter. Instead of allowing that experimentation to remain hidden, leaders should identify these individuals and recognize them as AI champions. By bringing experimentation into the open, organizations can capture best practices, spread them across teams, and build confidence in responsible use.

At the same time, providing training and support for those who feel anxious or resistant helps reduce fear. Not everyone needs to become an AI power user overnight, but everyone should feel they have permission to learn and try.

Building a Culture of Responsible AI

AI is already transforming how work gets done. Organizations that fail to address secrecy and cultural resistance risk falling behind, while those that build trust and clarity will capture the true benefits. Leaders must focus less on the technology itself and more on the culture surrounding it. That means making AI a safe subject to talk about, closing policy gaps, and empowering employees to learn openly.

The future of AI in the workplace won’t be determined by technical horsepower alone. It will be decided by whether leaders create the trust and alignment needed for employees to bring their AI use out of the shadows. When secrecy turns into transparency, AI can fully become the productivity engine leaders have been waiting for.

About the Author
Karl Chan

Karl Chan is CEO of Laserfiche and an expert in aligning technology with business goals. Under his leadership, Laserfiche software evolved from a document management system to a full suite of content management and business process automation solutions.

Main image: A deer, half hidden behind a boulder in the forest. Leonardo Giannetti | Unsplash