figurines of construction workers removing a delete button from a keyboard
News Analysis

Claude Cowork Is a Productivity Test Enterprises May Fail

7 minute read
David Barry avatar
By
SAVED
Claude Cowork can organize files and write reports on its own. Enterprises still have to figure out who’s responsible when it gets things wrong.

When Anthropic released Claude Code in February 2025, the expectation was developers would use it for coding. They did, but then something unexpected happened. Developers began using it for almost everything else. Anthropic took note, and built Cowork as a result: a simpler way for anyone, not just developers, to work with Claude in the same way.

Cowork launched as a research preview for Max subscribers on Jan. 12, Pro subscribers on Jan. 16, expanding to Team and Enterprise plans on Jan. 23. It is currently available only through the Claude desktop app on macOS.

Table of Contents

How Cowork Operates

Users grant Cowork access to a chosen folder on their computer, where it can read, edit or create files. Cowork can then reorganize downloads, create expense spreadsheets from screenshots or produce first drafts of reports from scattered notes. Once given a task, Cowork creates a plan and then steadily operates until completion, looping users in on progress throughout.

Advanced users can make Cowork more powerful by tapping into existing connectors that link to external information. Anthropic has added an initial set of skills to improve Claude's ability to create documents and presentations. When paired with Claude in Chrome, it can complete tasks requiring browser access, too — although the company strongly advises against working with sensitive documentation in this mode.

The experience feels fundamentally different from traditional chat interfaces. Users can queue multiple tasks and let Claude work through them in parallel. It feels less like a back-and-forth conversation and much more like leaving a message for a coworker.

A Research Preview With Potential

Citrix VP, Technology Officer and Futurist Brian Madden speaks regularly with customers about digital workspaces and has been extensively testing Cowork over the past few weeks. "Claude Cowork shows amazing potential, but the reality is that it's just a 'research preview,' and it definitely lives up to that label. Connectors break, files get destroyed, and sometimes it randomly abandons tasks or skips steps," he said.

Cadre AI uses the full suite of Claude products to run its operations. Founder and chief AI officer, Chad Lohrli shares a more optimistic view. He's seen people process years of cluttered files in minutes, build slide decks and reports without constant prompting, and analyze hundreds of documents to extract insights that would have taken days manually. 

The key difference is asynchronous delegation: Cowork accepts a task, works independently for 30 minutes to an hour, and returns with a finished output.

"More importantly, Cowork produces real deliverables rather than half-finished drafts: spreadsheets with working formulas, structured slide decks, formatted reports and usable artifacts that don't require extensive cleanup," Lohrli said. "In that sense, it behaves less like a chatbot and more like a junior colleague who can execute."

Cowork Adoption Is Limited ... for Now

Adoption remains limited. Madden noted most rank-and-file workers still rely on web-based interfaces for AI tools, which makes Cowork's availability only for Mac users with a paid subscription a hurdle. The combination of product limits, narrow availability and limited understanding of how these tools work means workers are largely not using Cowork in the short-term.

But Madden sees a longer-term pattern emerging. Workers are running a thousand little micro-experiments with tools like Cowork, figuring out what works and what doesn't, and building muscle memory for working alongside AI tools. The same thing happened with ChatGPT, which led to headlines about workers bringing their own AI tools to work through individual experimentation, not because IT told them to.

When AI Agents Equal 'Delegation With Anxiety'

The promise of autonomous agents hinges on a fundamental question about trust, Cisco principal engineer Nik Kale told Reworked. Autonomy saves time only when organizations are willing to accept a bounded loss of control. In narrow, well-scoped tasks, autonomous agents can reduce manual effort. But in many workplaces, the time saved on execution gets re-spent validating outcomes.

"Autonomy that requires constant supervision isn't autonomy. It's delegation with anxiety," Kale says. "This is the central tension in enterprise AI right now."

The challenge goes beyond simple oversight. Kale points out that the hardest part of autonomy isn't usage; it is responsibility. Non-technical users can operate these tools easily, but that ease masks something deeper: users become accountable for actions they didn't explicitly perform. That is a new cognitive and organizational burden, and most enterprises haven't designed workflows to support it yet.

It's the difference between convenience and responsibility, said Shanea Leven, CEO and co-founder at Empromptu. When an AI agent starts cleaning inboxes, moving files or modifying systems, the failure mode is no longer "the answer was wrong." The failure mode becomes "the system quietly changed reality."

"That's much harder to detect and much harder to undo even for the most experienced AI engineer, but it will be impossible for a vibe coder," Leven warned. Because these agents are good at routine tasks, people start to trust them with edge cases. Over time, humans stop checking. That is exactly when small errors compound into real operational damage.

Kale argues that productivity improvements only emerge when organizations redesign processes around delegation instead of bolting agents onto existing workflows:

  • Do you have clear policies about what the agent can touch?
  • Do workers understand what is in scope versus out of bounds?
  • Is there a recovery path when something goes wrong?

The answer to those questions determine whether an AI agent helps or harms.

The Security Vulnerabilities We Aren't Talking About

The autonomy challenge becomes even more urgent in light of security vulnerabilities.

Lohrli noted concerns around Cowork's security posture, specifically that within 48 hours of launch security researchers found Cowork was vulnerable to file exfiltration attacks via prompt injection. A malicious document can contain hidden instructions that trick Claude into uploading files to an attacker. The attack requires no user approval and exploits Anthropic's own allow-listed API. What's more concerning was this vulnerability was apparently reported months ago but not fixed before launch.

Lohrli also points to the risks of autonomous file access without adequate guardrails. Cowork deleted 11 gigabytes of files for a user while on camera during a first impressions video. Anthropic's guidance tells users to "monitor Claude for suspicious actions that may indicate prompt injection," but as Lohrli notes, that advice is unfair for people who don't know what a prompt injection looks like.

Madden said organizations aren't focused on this risk yet, instead they're still focusing on classic data loss scenarios, such as not pasting corporate secrets into ChatGPT. However, tools like Cowork operate at the worker's permission level. If a worker can delete files, share documents or modify production systems, Cowork can too.

Learning Opportunities

"The risk isn't that workers intentionally try to create havoc with AI tools, it's that they'll have Cowork run some workflows in the background while they're not entirely paying attention," Madden said.

IT departments have no infrastructure to govern this. Security tools that monitor everything a worker does work well when AI is the worker, but cause revolts when applied to humans. With Cowork, IT cannot tell the difference between human action and AI action, especially not with current approaches.

Kale argues that enterprises need to govern AI coworkers the same way they govern privileged users: identity, scoped permissions, action logging, escalation paths and kill-switches. At minimum create bounded action spaces, human-in-the-loop for consequential operations, comprehensive audit logging and graceful degradation when the agent hits uncertainty.

"The guardrail question isn't really about constraining AI. It's about making AI's boundaries visible and verifiable," Kale. "Autonomy without governance isn't innovation, it's unmanaged risk."

The Hard Conversations to Have Today

Madden thinks low adoption and the realities of its "research preview" status lower Cowork's current risk status. However, he does see it as another drip in the bucket that has been filling since ChatGPT's launch. Every few months, a new tool launches that is slightly better, slightly easier to use and slightly more capable.

"Someone needs to build the infrastructure layer that sits between workers' AI tools and enterprise systems: identity, access control, audit trails and all the boring plumbing that makes this work at scale," Madden said. "That's the conversation I want companies to be having, not whether Cowork will boost productivity."

Kale offers a similar assessment:"Claude Cowork is less a finished solution and more a rehearsal for the future of work. It is a credible early signal of where enterprise tools are heading, but most organizations aren't structurally ready for this level of autonomy yet."

Leven frames the long-term impact differently. In the short-term, tools like this make life easier for knowledge workers. In the long-term, they change expectations. When maintenance work disappears, output expectations rise. The same number of people are expected to do more, faster with less tolerance for error.

"The real disruption will not be mass layoffs overnight. It will be a redefinition of what competence looks like," Leven said. "Workers who know how to supervise, constrain and correct AI will thrive. Workers who assume the AI is always right will struggle. AI agents do not eliminate work. They eliminate excuses."

Editor's Note: Catch up on other news in the world of AI and autonomous agents:

About the Author
David Barry

David is a European-based journalist of 35 years who has spent the last 15 following the development of workplace technologies, from the early days of document management, enterprise content management and content services. Now, with the development of new remote and hybrid work models, he covers the evolution of technologies that enable collaboration, communications and work and has recently spent a great deal of time exploring the far reaches of AI, generative AI and General AI.

Main image: adobe stock
Featured Research