Feature

The Trust Deficit in AI-Powered Performance Tracking

By Mark Feffer
AI tools promise sharper performance insights — but when monitoring goes too far, it erodes trust, morale and culture faster than any metric can measure.

Managers love metrics. So it’s no surprise they’re turning to software, much of it powered by AI, to observe and analyze employee behavior and suggest ways to improve performance. However, these programs risk harming morale, culture and trust.

Managers who use them should proceed carefully.

Employee Monitoring Isn't New, AI Just Adds a Twist 

Employers have long been interested in tracking their workforce’s behavior. But employees believe AI has made that monitoring more intrusive. 

Research from Cornell found that AI — with its ability to analyze facial expressions, tone of voice and written communications in addition to tracking physical activity — makes workers feel as if they have less control over their work than when people monitor them.

“When artificial intelligence and other advanced technologies are implemented for developmental purposes, people like that they can learn from it and improve their performance,” said Emily Zitek, associate professor of organizational behavior at Cornell’s School of Industrial and Labor Relations. "The problem occurs when they feel like an evaluation is happening automatically, straight from the data and they’re not able to contextualize it in any way."

The Negative Effects of Using AI to Watch Employees

Employees are understandably resistant to being constantly monitored. Nearly one-third of workers (32%) monitored by their employers describe their mental health as “fair or poor,” according to the American Psychological Association (APA), compared with 24% of those who are not being monitored. Meanwhile, Pew Research found that 39% of Americans would oppose the use of AI to evaluate how well they do their jobs, 51% would oppose monitoring of computer work and 61% would oppose using AI to track workers’ movements.

Monitoring can also leave employees feeling like little more than output machines. Monitoring tools may measure factors such as active screen time, app usage and keystroke frequency to report on productivity, which is fine to a point. But such data says nothing about collaboration, creativity, planning or just plain thinking. When AI is used to monitor behavior, physical activity, vocal tone or communication, it reduces workers’ performance, especially when they believe their autonomy is being threatened, according to the Cornell research.

Who can blame them? With some systems, employees who take a moment to check the news, handle a personal issue or even stretch get tagged as off-task or inefficient.

Trust is also a concern — or should be. Monitoring that isn’t transparent or that relies on intrusive methods such as webcams, biometric signals or constant location tracking sends the message that management doesn’t trust its employees. That, in turn, undermines openness and fosters resentment.

Employers need to focus on transparency, detailing what they're monitoring, why and how they'll use the information. Clear boundaries are a must. Get these wrong, and you risk tilting your culture toward fear, with real repercussions. One in six employees has considered quitting because of monitoring they considered invasive, according to a 2025 ExpressVPN survey.

Understanding Human Behavior

Remote work further complicates the issue. Applications that track computer use, app activity or idle time may not take the idea of “work hours” into account. They might read a lunch break as a sign of inefficiency, for example. Treating rest or brief, informal interruptions as a problem puts pressure on workplace culture and the employee experience. More than half of employees being monitored feel tense or stressed at work, the APA reported. Others say they’re uncomfortable or worried about “spying.”

Then there’s bias and discrimination. Behavioral tracking, voice analysis or emotional cues may unintentionally reveal, or be read as signals of, disability, health status, pregnancy or other protected personal traits. For instance, the U.S. Equal Employment Opportunity Commission has warned that making wearable devices mandatory runs afoul of laws such as the Americans with Disabilities Act. Using such data sources in performance or disciplinary decisions may lead to discrimination charges.

The tools themselves may exhibit bias or blind spots. On a video call, they may misread background noise as unprofessional behavior, for example. They may penalize employees who spend less time typing because they do more thinking while they work. Unlike humans, algorithms don’t understand nuance, context and variation. That can lead managers to treat workers unfairly, causing damage that’s hard to repair.

Clarity, Clear Guidelines and Methods of Appeal 

Excessive monitoring undermines the very factors that help teams succeed: trust, psychological safety, creativity and human judgment. If a monitoring product’s algorithms and metrics don’t align with a job’s actual demands, employees may hide their behaviors or try to game the system. They may stop collaborating openly, admitting mistakes or taking risks. Performance in areas that data captures poorly, such as mentoring others, could suffer.

Regulators and lawmakers are keeping an eye on all of this. Measures such as Europe’s GDPR impose requirements around consent, necessity and transparency, and monitoring programs that rely on biometric surveillance or persistent tracking may be illegal.

For managers considering or already using AI to monitor their teams, the balance is delicate. Monitoring is a tool, but too much of it changes the dynamics of the workplace. The relationship between manager and employee shifts from one of shared purpose and trust to one of suspicion and an outsized focus on compliance.

All this means employers must be clear about what will be monitored and why, how monitoring relates to meaningful outcomes, when employees are “off-duty” or outside monitored zones, what degree of human oversight exists and how review, appeal or correction takes place. Only when boundaries are respected, culture is preserved and people feel valued for more than metrics can organizations use AI to monitor workers without damaging human connections, creativity and trust.

About the Author
Mark Feffer

Mark Feffer is the editor of WorkforceAI and an award-winning HR journalist. He has been writing about human resources and technology since 2011 for outlets including TechTarget, HR Magazine, SHRM, Dice Insights, TLNT.com and TalentCulture, as well as Dow Jones, Bloomberg and Staffing Industry Analysts. He likes schnauzers, sailing and Kentucky-distilled beverages.

Main image: Chris Yang | Unsplash