Feature

Your Company's AI Is Watching How You Think

By David Barry
Companies deployed AI agents without thinking about what they recorded. Now they own a log of how every employee thinks — with no plan for it.

Your employer knows more about how you think than you realize. Every prompt you sent to the company's AI agent this week. Every revision you made. Every idea you explored and abandoned. It is all sitting in a log somewhere — and in most organizations deploying AI agents today, nobody has decided what to do with it.

The promise sold to business leaders was productivity and competitive advantage. What is accumulating alongside that promise is a detailed, durable record of how their employees think, and the people being recorded have no idea.

The Office Has a New Kind of Memory

For most of the history of workplace monitoring, employers saw what was visible: hours logged, tickets closed or email messages sent. They watched what people produced, not how they produced it. That boundary offered employees something meaningful: the cognitive privacy of the working process itself.

AI agents have eliminated that boundary as a side effect of how they function. "Email monitoring records what you sent," said Collin Hogue-Spears, senior director and distinguished technical expert at Black Duck Software. "Agent telemetry records what you considered, what you rejected and why you changed your mind. With machine identities outnumbering humans at many organizations, agent logging now holds a frame-by-frame replay of how their people think, and most acquired that footage without deciding they wanted it."

Traditional monitoring looks at outcomes, said Roi Carmel, chief executive officer and co-founder of Spotlight.ai. With the introduction of agent logging, managers can now see how people reach a decision — the prompts they tried, the options that were rejected, the hesitation in between. The problem is that most organizations never decided what they wanted to do with this data trail, nor told their employees it existed.

When the Log Becomes a Dossier

The shift from operational logging to workplace surveillance happens gradually, through a series of individually defensible choices, until an organization finds itself owning something it never set out to build: a detailed behavioral record of its workforce.

The legal and ethical lines are unambiguous, according to Alia Luria, a legal privacy expert and former software engineer. Operational logging becomes surveillance when its purpose shifts from system reliability to employee evaluation, discipline or productivity scoring, and the pertinent variable is not the volume of data captured but the intent behind accessing it.

"If an employer's logs could be used to track employee performance, the employer should have assumed employment privacy and labor law were in play and handled it like a formal monitoring program from day one," Luria said.

Mundane examples are often the most revealing. Luria described an employee casually asking a workplace AI where to take their spouse for a wedding anniversary dinner, mentioning dietary allergies in passing. That detail is now part of the record, sitting alongside everything else that person asked the system that day.

Multiply that across an organization of thousands, across months and years, and the accumulated data becomes difficult to dismiss, Luria added.

That includes courtrooms. Recorded reasoning trails are discoverable evidence in litigation because they prove what the organization knew and chose not to act on. Every agent's reasoning trace is a deposition waiting to happen: When a regulator asks, "Did you know?" and the logs hold a timestamped record of every alternative a team considered and rejected, the answer is always yes.

Damage is rarely discovered in a single dramatic moment, said Faizel Khan, lead AI engineer at Landing Point. Instead, trust erodes gradually. When employees feel as if every action is being monitored, the consequences for workplace culture are severe and rarely recoverable. Organizations consistently underestimate the problem, because in its initial stages, everything looks fine on the surface.

How Observation Changes Behavior

When employees know that every prompt and revision is being logged, they optimize for clean-looking process rather than honest exploration, Carmel explained. 

"The biggest risk wasn't privacy violation," Carmel said. "It was that surveillance of thinking killed the thinking, and the innovation lost to that chilling effect was invisible by definition. You never saw the idea that was never tried." 

The first casualty is the bad first draft: the creative, inefficient attempt that produces breakthrough solutions, and the one least likely to survive in an environment where every revision is on the record, Hogue-Spears said.

"Once cognition feels exposed, people don't just work differently, they think differently," Khan said. "That shift shows up in innovation, engagement and retention faster than leadership expects. And the employees most likely to leave are the ones most capable of independent thought." 

Governance Before Deployment, Not After

This brings the problem back to where it started: not with the technology, but with the decisions organizations made before they switched it on, and the ones they never made at all. Most organizations deployed first and thought later.

Luria explained what closing that gap requires: Every deployment needs a business purpose, data minimization rules, defined retention limits and access controls that separate operational telemetry from anything that could be used in employee evaluation. Vendor contracts must be specific: no training on company data, transparent subprocessor arrangements and tight incident notification obligations. If an employer's logs could track performance, even theoretically, the system should be treated as a formal monitoring program from the moment it goes live.

Where Luria focuses on policy architecture, Hogue-Spears focuses on what sits beneath it. Governance built after deployment is damage control, not governance. CISOs and governance officers need deterministic guardrails at the infrastructure layer, such as redacting individual identifiers from operational telemetry, mandatory aggregation thresholds and time-bound retention with cryptographic deletion.

Governing agent telemetry with policy documents instead of technical controls amounts to building a surveillance system and asking people to promise not to use it.
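To make those three guardrails concrete, here is a minimal sketch of what enforcement at the telemetry layer might look like. The field names, the 90-day window and the group-size threshold of five are illustrative assumptions, not figures from any of the experts quoted, and real deployments would pair deletion with destruction of the encryption keys holding the data.

```python
import hashlib

K_THRESHOLD = 5  # assumed minimum group size before an aggregate may be reported
RETENTION_SECONDS = 90 * 24 * 3600  # assumed 90-day retention window

def redact(record: dict, salt: bytes) -> dict:
    """Replace the employee identifier with a salted one-way hash so
    operational debugging still works without naming an individual."""
    token = hashlib.sha256(salt + record["user_id"].encode()).hexdigest()[:16]
    return {**record, "user_id": token}

def enforce_retention(records: list[dict], now: float) -> list[dict]:
    """Drop any telemetry record older than the retention window."""
    return [r for r in records if now - r["ts"] < RETENTION_SECONDS]

def aggregate_prompt_counts(records: list[dict]) -> dict:
    """Report per-team prompt counts only when the team is large enough
    that no individual's activity can be singled out."""
    members: dict[str, set] = {}
    counts: dict[str, int] = {}
    for r in records:
        members.setdefault(r["team"], set()).add(r["user_id"])
        counts[r["team"]] = counts.get(r["team"], 0) + 1
    return {t: n for t, n in counts.items() if len(members[t]) >= K_THRESHOLD}
```

The point of the sketch is that each control is deterministic: a manager querying a one-person team gets nothing back, not a policy reminder.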

On disclosure, those closest to these deployments agree: a line buried in an onboarding document is not disclosure, but legal cover. Disclosure means signals visible throughout the working day, not a clause the employee scrolls past on their first morning. Telling an employee an AI agent assists them while recording every prompt revision and reasoning fork is, as Hogue-Spears put it, like posting a camera notice in a room wired for sound: the visible system is disclosed while the consequential one stays buried.

Organizations running agentic tools must publish a telemetry transparency charter, itemizing what agent logs capture, who has access, their retention duration and a list of prohibited uses, reviewed quarterly by an authority outside the direct management chain. Build AI agents that log how people work without telling them, and you have built a trap, Carmel warned. 
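One way to make such a charter auditable rather than aspirational is to keep it machine-readable and gate every log read against it. The following is a hypothetical sketch under that assumption; the class, field names and example values are illustrative, not a published standard.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TelemetryCharter:
    """Hypothetical machine-readable transparency charter."""
    captured: tuple          # what the agent logs record
    access_roles: tuple      # who may read them
    retention_days: int      # how long raw logs are kept
    prohibited_uses: tuple   # uses ruled out in advance
    review_cadence_days: int = 90  # quarterly review, per the recommendation above

CHARTER = TelemetryCharter(
    captured=("prompts", "revisions", "tool calls"),
    access_roles=("platform-ops", "security-incident-response"),
    retention_days=90,
    prohibited_uses=("performance review", "productivity scoring", "discipline"),
)

def access_allowed(role: str, purpose: str) -> bool:
    """Permit a log read only for a listed role and a non-prohibited purpose."""
    return role in CHARTER.access_roles and purpose not in CHARTER.prohibited_uses
```

Publishing the same artifact the access check consumes means employees and the quarterly reviewer see exactly the rules the system enforces, not a summary of them.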


Editor's Note: What other unexpected side effects are we learning about with AI agents?

About the Author
David Barry

David is a European-based journalist of 35 years who has spent the last 15 following the development of workplace technologies, from the early days of document management, enterprise content management and content services. Now, with the rise of remote and hybrid work models, he covers the evolution of technologies that enable collaboration, communications and work, and has recently spent a great deal of time exploring the far reaches of AI, generative AI and artificial general intelligence.

Main image: Adobe Stock