Feature

Should We Call AI a Coworker?

By David Barry
Calling AI a "coworker" builds trust it can't earn. It won't push back, explain itself or take the blame. It's a tool — treat it like one.

Somewhere in your organization, an AI agent is already in the room. It drafted the speech for the last all-hands. It summarized the pipeline review. It may have flagged something about your team's performance. Nobody on the team hired it, nobody can fire it and few people know exactly what it's reporting back or to whom. It might have a name, a friendly one, and a job title designed to make it feel like part of the crew. The question of whose crew it belongs to is one most organizations have not answered.

The Problem With Calling AI a 'Team Player'

"AI agents aren't colleagues. They're instrumentation," said Diptamay Sanyal, principal engineer at CrowdStrike, who has built these platforms at both HubSpot and CrowdStrike. "The 'coworker' framing is convenient marketing that obscures a fundamental reality. These systems were deployed by someone with objectives that may have nothing to do with the team they're operating in."

That someone could be IT, HR, a line manager or a vendor whose business model depends on engagement data. What stays constant across all those scenarios is the directionality, Sanyal said. "The team is the subject, not the beneficiary." The language of collaboration is dressing up something more one-sided.

The problem is more basic, said Kathryn Greenizen, director of strategic innovation and systems design at FLEX Partner. "The moment we start asking who an AI is 'loyal' to, we've already designed the system incorrectly," she said. AI is a tool, and treating it as anything else creates misplaced trust that leads organizations to remove humans from the loop in the name of efficiency.

The Problem With Trust

The coworker metaphor implies reciprocity: the ability to push back, challenge outputs or escalate a disagreement. None of that is available with an agent. "What most people are calling trust right now is just familiarity with the output, and those are very different things," said Anusha Kovi, a business intelligence engineer at Amazon. Trust accumulates through watching someone handle something hard, through conflict and through repair. A system whose behavior was encoded before it arrived cannot do any of that, and language that suggests otherwise makes employees less likely to question what these systems are doing.

None of that is inevitable. It's a choice. That's where governance needs to step in, though so far it mostly hasn't. Organizations need to stop treating agent trust as a social question and start treating it as an operational one: confidence tied to specific workflows, defined permissions and clear mechanisms for correction and rollback, rather than a blanket belief in an agent's general competence, said Mine Bayrak Ozmen, co-founder at Rierino. The difference isn't semantic: it determines who is responsible when something goes wrong.
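
What that operational framing could look like in practice: the sketch below, in Python, models trust as workflow-scoped grants with a named reviewer and an explicit rollback hook. It is a hypothetical illustration of the idea, not anything Rierino ships, and every name in it is invented.

    from dataclasses import dataclass, field
    from typing import Callable

    # Hypothetical sketch: "trust" as workflow-scoped permissions with a
    # defined rollback path, not blanket belief in an agent's competence.
    @dataclass
    class WorkflowGrant:
        workflow: str                    # e.g. "pipeline-review-summary"
        allowed_actions: set             # what the agent may do here
        reviewer: str                    # named human who signs off
        rollback: Callable[[str], None]  # how to undo a bad run

    @dataclass
    class AgentPolicy:
        agent_id: str
        grants: dict = field(default_factory=dict)

        def can(self, workflow: str, action: str) -> bool:
            grant = self.grants.get(workflow)
            return grant is not None and action in grant.allowed_actions

    policy = AgentPolicy("summarizer-01")
    policy.grants["all-hands-notes"] = WorkflowGrant(
        workflow="all-hands-notes",
        allowed_actions={"draft", "summarize"},
        reviewer="comms-lead",
        rollback=lambda run_id: print(f"reverting run {run_id}"),
    )

    assert policy.can("all-hands-notes", "summarize")
    assert not policy.can("all-hands-notes", "flag-performance")

The point of the exercise is the final denial: anything not explicitly granted for a specific workflow is out of bounds, and every grant names a human and an undo path.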

AI Workplace Surveillance: What Employees Aren't Being Told

When AI agents move into observational roles, that question becomes more urgent. An agent that sits in on meetings, generates summaries and flags behavioral patterns changes workplace power dynamics in ways that are rarely disclosed to the people most affected. Most employees have no idea what is being logged or where it goes. Deployments are framed as productivity tooling rather than monitoring infrastructure, and the distinction rarely gets questioned.

Best practice requires explicitly informing employees what is monitored, why and how that data influences decisions about them, Sanyal said. Most organizations are not doing this. Having audit logs is different from employees knowing those logs exist, and "the observational role is the most serious question organizations are ignoring," he said.

Every employee should be able to answer, at a minimum, five questions about any agent operating in their environment, Ozmen said (a sketch of how those answers might be recorded follows the list):

  1. What is captured?
  2. Where is it stored?
  3. Who has access to it?
  4. How long does the record persist?
  5. What decisions may it influence?
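
One way to force those answers to exist is a machine-readable disclosure record per agent, required before deployment. Here is a minimal Python sketch with one field per question; the record format is hypothetical, not something Ozmen prescribes.

    from dataclasses import dataclass

    # Hypothetical disclosure record: one field per question above.
    @dataclass(frozen=True)
    class AgentDisclosure:
        agent_id: str
        captured: tuple        # 1. what is captured
        storage: str           # 2. where it is stored
        access: tuple          # 3. who has access to it
        retention_days: int    # 4. how long the record persists
        influences: tuple      # 5. what decisions it may influence

    meeting_bot = AgentDisclosure(
        agent_id="meeting-summarizer",
        captured=("transcript", "action items"),
        storage="eu-west-1/meetings-bucket",
        access=("attendees", "team-lead"),
        retention_days=90,
        influences=("project planning",),  # not performance reviews
    )

Any field that cannot be filled in is itself a finding: the organization has deployed something it cannot describe.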

Once AI outputs influence performance reviews or resourcing decisions, the accountability question becomes more important. The answer to who owns the outcome, in most organizations, is nobody: IT deployed the tool, HR owns the process, the line manager made the call, and accountability gets distributed so broadly it disappears, Sanyal said.

Sanyal's proposed fix is unglamorous: role-based access control tied to clear policy, with defined rules governing who sees what, who can act on it and what the appeal mechanism looks like when AI-generated insights inform decisions about people.
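
A hypothetical rendering of that fix in Python follows; the roles and insight types are invented for illustration, but the shape is the point: visibility comes from the role, and every decision carries a named human and an appeal path.

    from dataclasses import dataclass

    # Hypothetical RBAC sketch: each role maps to the AI-generated
    # insight types it may see.
    ROLE_VISIBILITY = {
        "employee":     {"own_summaries"},
        "line_manager": {"own_summaries", "team_output_metrics"},
        "hr":           {"own_summaries", "aggregated_signals"},
    }

    def can_view(role: str, insight_type: str) -> bool:
        return insight_type in ROLE_VISIBILITY.get(role, set())

    @dataclass
    class InsightDecision:
        subject: str         # the employee the insight is about
        insight_type: str
        decided_by: str      # a named human, so accountability can't disperse
        appeal_contact: str  # the appeal mechanism Sanyal describes

    assert can_view("line_manager", "team_output_metrics")
    assert not can_view("employee", "aggregated_signals")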

The problem gets worse when multiple agents, deployed by different functions, operate around the same employees simultaneously. An IT security agent looks for threats. An HR agent looks for productivity signals. A line manager's agent looks for output metrics. All three are generating intelligence about the same individual, with no reconciliation layer between them. The result is contradictory assessments of the same employee from systems that don’t talk to each other, Ozmen warned.
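
At minimum, a reconciliation layer would collect every agent's assessments of the same person and refuse to let disagreements pass silently. A hypothetical sketch of that check:

    from collections import defaultdict

    # Hypothetical reconciliation layer: group assessments of the same
    # person from independent agents and surface disagreements to a human.
    def find_conflicts(assessments: list) -> dict:
        by_subject = defaultdict(list)
        for a in assessments:
            by_subject[a["subject"]].append(a)
        return {
            subject: items
            for subject, items in by_subject.items()
            if len({a["verdict"] for a in items}) > 1  # agents disagree
        }

    conflicts = find_conflicts([
        {"source": "security-agent", "subject": "j.doe", "verdict": "normal"},
        {"source": "hr-agent",       "subject": "j.doe", "verdict": "low-engagement"},
        {"source": "manager-agent",  "subject": "j.doe", "verdict": "high-output"},
    ])
    assert "j.doe" in conflicts  # three contradictory reads of one employee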

The Unglamorous Work of Getting It Right

If nobody has been assigned to resolve that, are organizations ready to deploy these systems? After deploying 20 digital workers across his organization, Peter-Paul Schreuder, chief cloud officer and VP of support at Ultimo, said the focus on closing the skills gap is the wrong priority. Teaching someone to use an AI tool takes about an hour. Teaching an organization to rethink how work gets done takes four to five months of hard, unglamorous process work, but delivers more value.

At Ultimo, no agent was deployed until four months of workflow mapping had identified where information got stuck, which tasks took the longest and where knowledge lived only in someone's head. The payoff in January 2026 alone was more than 3,700 hours reclaimed across the organization: not from better prompts, but from better processes. If you don't understand your workflows, you have no basis for defining what an agent should and shouldn't do inside them.

The real risk isn't that AI agents become uncontrollable, but that organizations anthropomorphize them and remove human judgment from the loop, one workflow at a time, before anyone has asked whose interests these deployments serve, Greenizen said.

When you look past the “coworker” branding and examine how these systems are built, who sets the objectives, who owns the outputs and who carries the accountability, the answer is rarely the team.

About the Author
David Barry

David is a European-based journalist of 35 years who has spent the last 15 following the development of workplace technologies, from the early days of document management through enterprise content management and content services. Now, with the rise of remote and hybrid work models, he covers the evolution of technologies that enable collaboration, communication and work, and he has recently spent a great deal of time exploring the far reaches of AI, generative AI and general AI.

Main image: Adobe Stock