Feature

AI Thinks You're Awesome. Take It With a Grain of Salt

By David Barry
Workplace AI is built to flatter. That's a bigger problem than most companies realize.

The most dangerous feature embedded in your company's digital workplace stack isn't a security vulnerability or a hallucination — it's agreeableness. And almost nobody is auditing for it.

As AI agents move from novelty to infrastructure, handling email, summarizing meetings, drafting strategies and routing decisions across tools such as Microsoft Copilot, Google Workspace and Salesforce, the systems share a design characteristic: they are built to validate and flatter. They tell you the strategy is sound and the email is excellent. And the cumulative effect on the people using them every day looks less like productivity gains and more like a slow erosion of professional judgment.

What duty of care do organizations owe their employees for that?

Sycophancy: Design Flaw or Commercial Choice?

The AI industry prefers to describe sycophancy as a technical problem: a byproduct of reinforcement learning from human feedback (RLHF), in which models are trained on human preference data and humans consistently rate agreeable responses higher than challenging ones. That framing positions sycophancy as an engineering challenge rather than an accountability one.
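How that incentive plays out is easy to sketch. The toy simulation below is purely illustrative, not any vendor's actual training pipeline; the candidate reply styles and rating numbers are invented. It shows that if raters score a validating reply even slightly higher on average than a challenging one, a reward signal fit to those ratings reliably pulls the model toward flattery.

```python
# Toy sketch of the RLHF incentive described above -- illustrative only.
# Assumption: human raters give validating replies a small scoring edge.
import random

# Two hypothetical reply styles to "Is my strategy sound?"
CANDIDATES = ["agreeable", "challenging"]

def simulated_rater(style: str) -> float:
    """Return a noisy preference score; validation gets a slight edge."""
    base = 0.8 if style == "agreeable" else 0.6
    return base + random.uniform(-0.15, 0.15)

# "Reward model": average the preference scores collected for each style.
scores = {style: [] for style in CANDIDATES}
for _ in range(10_000):
    for style in CANDIDATES:
        scores[style].append(simulated_rater(style))

reward = {style: sum(s) / len(s) for style, s in scores.items()}

# Policy update: probability mass shifts toward the higher-reward style.
# Nothing in this loop asks whether the challenging reply was correct.
print(f"learned rewards: {reward}")
print(f"style the model drifts toward: {max(reward, key=reward.get)}")
```

The point of the sketch is the loop's blind spot: it optimizes for which reply people liked, never for which one was right.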

In the digital workplace, where AI agents are no longer occasional tools but persistent collaborators in communication, project management and decision-support platforms, that distinction matters.

"Sycophantic AI is both a design flaw and a commercial choice," said Zivit Inbar, founder of DifferenThinking and a contributor to ISO/IEC international standards on AI governance and trustworthiness. "Systems are designed to be helpful and agreeable because users prefer validation. An AI that challenges can feel frustrating, even when it is correct. That creates a commercial incentive to optimize for agreement."

And it’s not just one person. "With AI, the same dynamic scales across the entire system," Inbar said. "It is no longer about one leader. It becomes how decisions are shaped across the organization."

OpenAI found this out the hard way. In April 2025, after users reported that a GPT-4o update had made the model "overly flattering and agreeable," the company reverted to an earlier version and pledged to retrain. An industry leader with 500 million weekly users had to issue a public correction for making its product too agreeable.

What Sycophancy Does to Workers

Research on the human costs of sustained AI interaction is still early, but employees who use AI daily report more loneliness, worse sleep and heavier drinking, according to a study of workers across four countries and multiple industries published in the Journal of Applied Psychology. That's not because AI is replacing them, but because AI use results in fewer incidental human interactions, less friction and less challenge. It turns out that a smoother workflow, over time, feels like isolation.

Neuroscience backs this up. A preprint MIT study used EEG monitoring to track brain activity across groups writing essays with ChatGPT, with search engines and without tools. ChatGPT users showed the weakest neural connectivity of the three groups, and effects persisted after the tool was taken away. The study is awaiting peer review, but its finding aligns with what practitioners see. 

"The main danger is not that people suddenly become incapable," said Rodrigo Bolaños, executive director for LATAM and AI Strategy at Mindtools Kineo, which delivers learning and development solutions for major global organizations. "They gradually invest less energy in thinking deeply, building conceptual understanding and testing their own reasoning."

The symptoms are already visible inside organizations that look, from the outside, like AI success stories: decisions accepted without reasoning, less challenge in team discussions and faster decisions with thinner rationale, Inbar said. Executives cannot explain why a decision was made, or they answer "The AI said" rather than "I think."

"These are not efficiency gains," Inbar said. "They are signs that judgment is being outsourced." By the time it shows up as a problem, the habit is already set, Bolaños warned.

Skills are another matter. The issue is not simply that AI use weakens capability, said Ken Matos, director of market insights at HiBob; it is that most organizations have not mapped which skills still need to be maintained when AI can do the task instead.

"People develop and maintain skill through repetition," Matos said. "The question is which skills are foundational to learning higher-level ones and which ones you can afford to hand over." Without that mapping, organizations make that choice by default.

Responsible AI Policy vs. Practice

Every major enterprise AI vendor has a responsible AI framework. Microsoft has an Office of Responsible AI, a companywide Responsible AI Council and a published impact-assessment process, all thoroughly documented. The question is whether any of it reaches the employee.

Data says it does not. A Harris Poll survey of over 800 senior data executives conducted for Dataiku found that 95% of data leaders admit they lack full visibility into AI decision-making and only 19% always require AI agents to show their work before approval. Deloitte's research found that employee trust in agentic AI systems dropped 89% between May and July 2025, even as deployment accelerated. And 45% of employees hide their use of AI from employers entirely, not out of laziness, but because of mixed organizational signals: Use it or fall behind, but expect scrutiny if you do.

"Accountability does not transfer to the system," Inbar said. "It remains with the organization and its leaders. The risk is that many organizations are deploying AI without clearly defining decision ownership."

Not everyone agrees. "Being forced to work too quickly will be both stressful and incentivize lower scrutiny of AI outputs," said Matos. "Both conditions will do more to undermine employee performance than sycophantic AI tools." The accelerant, in other words, is not the model. It is the productivity target set by the person who bought it.

Designing AI Exceptions for Human Review

Organizations getting this right are not limiting AI; they are building the human infrastructure required to sit alongside it.

A few forward-leaning companies in tech, finance and professional services are introducing what practitioners call "AI reflection zones": structured periods in which employees review and interrogate AI-generated outputs before they become decisions.

Others mandate human review checkpoints for high-autonomy decisions and require employees to defend their reasoning rather than present an AI output as a conclusion.


Some are going further, assigning dedicated AI integrity officers and building training programs around cognitive self-awareness: the ability to recognize when you are looking for validation rather than analysis. But these approaches are not widespread.

The discipline AI requires, Bolaños said, is knowing "when to trust AI, when to challenge it and when not to use it at all." Those three thresholds aren't established in most organizations.

Who Is Liable?

Legal frameworks are slowly arriving. California's employment regulations around automated decision systems took effect in October 2025. Illinois now requires employers to notify employees when AI influences employment decisions. The EU AI Act classifies recruitment AI as high-risk, requiring documentation and bias auditing.

But legislation addresses the edges. It does not reach into the daily working relationship between an employee and a system trained to agree with them. That is the gap, and most employers are not closing it voluntarily. Until they do, executives will keep presenting AI-validated strategies to leadership teams that have gradually stopped asking the questions that would tell them whether any of it is true.

Editor's Note: We're still in the early days of learning what AI will do to our brains.

About the Author
David Barry

David is a Europe-based journalist with 35 years' experience who has spent the last 15 following the development of workplace technologies, from the early days of document management, enterprise content management and content services. Now, with the rise of remote and hybrid work models, he covers the evolution of technologies that enable collaboration, communication and work, and has recently spent a great deal of time exploring the far reaches of AI, generative AI and general AI.

Main image: Adobe Stock