News Analysis

Can Anthropic Redefine Workplace Productivity Without Microsoft Teams?

By David Barry
Anthropic launched MCP Apps with nine partners and big promises, but the enterprises it needs to convince still run on Microsoft infrastructure.

Shahram Anver spent three years deploying self-learning agents in live production systems, where mistakes cause downtime rather than just failed demos. That experience has helped him see the gap between what enterprise AI promises and what it delivers.  

When Anthropic launched MCP Apps on January 26, embedding interactive workplace applications inside Claude's chat interface, Anver's skepticism was not about the technology but about whether the organizations being asked to adopt it were ready.

MCP Apps lets users draft a Slack message and post it without leaving the conversation, reassign an Asana task, update a timeline or build a Figma diagram from a text prompt. The pitch is that Claude eliminates the productivity tax of switching between AI and the tools your team uses.

Anthropic is not just selling a productivity tool, but arguing that the future of work runs through applications such as Slack, Asana and Figma rather than through Microsoft Teams and SharePoint, and that the companies doing the most valuable work have already moved there.

Nine launch partners span the core categories of knowledge work: Asana and monday.com for project management, Box for file access, Figma and Canva for design, Amplitude and Hex for analytics, Clay for sales intelligence and Slack for internal communications, with Salesforce coming soon. Microsoft Teams is not on the list.

Why Anthropic Is Skipping Teams 

That absence is not incidental. Teams is the collaboration infrastructure of mainstream enterprise, used in procurement agreements, compliance frameworks and IT architectures across most large organizations. Launching a workplace AI integration strategy without it is either a calculated decision or a blind spot, and people who build production AI systems for a living aren’t sure which this is. 

As chief executive of Cleric, Anver reads it as strategy. He watches engineering and product teams use Slack, Asana and Figma to move fast, and sees Teams used primarily to push information downward rather than to get work done. "Anthropic has identified the future of work as the space that creates the most value, not where the most individuals exist," he said.

Gidi Adlersberg isn't convinced. He manages the Voca CIC business line at AudioCodes, a product built around Teams integration. While Adlersberg doesn't dispute Claude's technical momentum, he questions whether integration lists move enterprise procurement. "OpenAI has a structural advantage through its deep alignment with the Microsoft ecosystem, particularly Azure OpenAI and Microsoft Teams, which is where mainstream enterprises already live," he said.

What Adlersberg finds telling is that Microsoft offers Anthropic models through Azure, suggesting that the competitive picture is more fluid than either company's positioning implies. He believes the integration gap is temporary. The harder question is whether Anthropic builds the ecosystem alignment with enterprise procurement and cloud frameworks that OpenAI spent years constructing inside Microsoft's infrastructure.

Governance Is the Deployment Blocker

A bigger problem is the substantial gap between what MCP Apps can do and what most organizations are ready to allow them to do, and this is a governance gap rather than a technical one.

"The technical integration is not the barrier," said Nik Kale, principal engineer at Cisco Systems. "Connecting AI to Slack or Asana is a solved problem. The governance integration is what stops deployment cold."

Most enterprises cannot yet answer the questions that compliance and legal teams will ask before any of this reaches production. When an AI agent drafts a Slack message on your behalf, who owns that communication? When it modifies an Asana task, does the change satisfy your SOX audit obligations? When it pulls a file from Box, does your data classification framework extend to AI-mediated access, or does it only account for humans?

Every connected application is a new trust boundary that somebody has to design, not just configure. Most organizations have not started that work.

Finance and compliance teams recognize the risk, Kale said. Launching integrations before governance frameworks exist creates what he calls governance debt: the same structural problem as technical debt, except the interest accumulates in compliance exposure rather than in code. "The interest payments come due during your next audit or your next incident, whichever arrives first."

Organizations that move from pilot to production are not the ones with the most integrations active, but the ones that established trust boundaries, audit frameworks and human approval checkpoints before they connected the first tool, Kale said.

Three practical barriers will slow deployment in the near term regardless of how good the technology is, Adlersberg said. 

  1. Data governance: Touching data held in Slack, Asana or Box means navigating ownership across multiple teams, each requiring signoff from security and compliance, and that process does not move quickly.
  2. Cost unpredictability: AI integrations are not deterministic, and finance teams cannot approve budgets for something they cannot forecast.
  3. Desynced organizational AI motion: Multiple teams run parallel AI experiments, sometimes on identical problems, without coordination, producing duplicated effort and internal friction rather than progress.

Who Is Responsible for Agent Activity?

An agent with access to Box, Slack and Asana reads your files, composes and sends your messages and modifies your project timelines. Each of those capabilities is bounded within its own platform. Combined, they create something no one platform's permission model was designed to handle.

Three conditions are required before this goes near production, Kale said.

  1. Identity binding: Every action the agent takes must be traceable to a specific human principal, operating under the same access controls that person would have. If an engineer doesn’t have access to a particular Box folder, the agent must not have access to it through a different permission path.
  2. Action auditability: Modifications across every connected tool must land in one unified log that security teams can query, rather than being scattered across the native logging of each platform.
  3. Scope containment: Agents must operate within a defined working context, not hold broad permissions across a tool. "An agent that can read your Box files, draft your Slack messages and modify your Asana tasks has created a cross-tool blast radius that your security architecture probably was not designed to contain,” Kale said.
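Kale's three conditions can be expressed as a single enforcement layer that sits between the agent and every connected tool. The sketch below is a hypothetical illustration only, not any shipping product's API: the `AgentGateway` class, its fields and the `(tool, action)` permission shape are all invented for this example.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentGateway:
    """Mediates every agent action: identity binding, action auditability, scope containment."""
    user_permissions: dict            # human principal -> set of allowed (tool, action) pairs
    session_scope: set                # tools this agent session is permitted to touch at all
    audit_log: list = field(default_factory=list)

    def execute(self, principal: str, tool: str, action: str, payload: dict) -> bool:
        entry = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "principal": principal,   # identity binding: every action traces to a human
            "tool": tool,
            "action": action,
        }
        # Scope containment: the session may only touch tools it was opened for.
        if tool not in self.session_scope:
            entry["result"] = "denied: out of scope"
            self.audit_log.append(entry)    # denials land in the same unified log
            return False
        # Identity binding: the agent inherits the human's permissions, nothing more.
        if (tool, action) not in self.user_permissions.get(principal, set()):
            entry["result"] = "denied: principal lacks permission"
            self.audit_log.append(entry)
            return False
        entry["result"] = "executed"
        self.audit_log.append(entry)        # one queryable log across every connected tool
        return True
```

The point of the sketch is structural: an agent asking for a Box folder its human principal cannot see is refused through the same code path, and logged in the same place, as every approved action, so there is no second permission path for the agent to slip through.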

Where Kale focuses on architecture, Anver approaches the same problem from the user trust angle, and his conditions are also specific. 

  • Every agent action must be staged for human review before it executes, with the system showing not just what it wants to do but why, in plain language. 
  • Logging must capture the full reasoning behind each decision, not merely the API call.
  • The audit trail must belong to the person who initiated the interaction, not to IT or to management.

"Real trust comes when engineers can see exactly what an agent saw, decided, and tried to do, and can fix it right away," Anver said.  "Without this, you're not really deploying AI. You're just adding a black box you can erase."
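Anver's conditions translate into a staging pattern: nothing executes until a human has seen the proposed action and its plain-language rationale, and the full record stays with the person who started the interaction. The sketch below is a made-up illustration of that pattern; `ReviewQueue`, `ProposedAction` and their fields are hypothetical names, not a real product's interface.

```python
from dataclasses import dataclass, field

@dataclass
class ProposedAction:
    tool: str
    action: str
    rationale: str          # the plain-language "why", not merely the API call
    status: str = "pending"

@dataclass
class ReviewQueue:
    """Stages agent actions for human review; the audit trail belongs to the initiating user."""
    owner: str                                   # the person who started the interaction
    proposals: list = field(default_factory=list)

    def propose(self, tool: str, action: str, rationale: str) -> ProposedAction:
        # Nothing executes at proposal time; the action is only staged for review.
        p = ProposedAction(tool, action, rationale)
        self.proposals.append(p)
        return p

    def decide(self, proposal: ProposedAction, approved: bool) -> str:
        proposal.status = "approved" if approved else "rejected"
        return proposal.status

    def audit_trail(self) -> list:
        # What the agent saw, decided and tried to do, queryable by the owner.
        return [(p.tool, p.action, p.rationale, p.status) for p in self.proposals]
```

The design choice this illustrates is ownership: because the queue is keyed to the initiating user rather than to IT or management, the engineer who delegated the work is the one who can inspect, approve or correct it, which is the trust loop Anver describes.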

The ROI Question Enterprises Cannot Yet Answer

"Don't fall into the ROI trap," Adlersberg cautioned. "We've seen the largest technology companies in the world make bold ROI claims around AI that later turned out to be nearly impossible to back up."

Evidence that builds credibility comes not from projected efficiency gains or board deck forecasts but from picking one team and one use case, deploying it, measuring what happened and doing it again until a pattern emerges.


That is a cautious, methodical approach. It is also the opposite of what a nine-partner launch with a Salesforce announcement in the pipeline looks like.

Anthropic is moving fast and making large claims, which is its prerogative. But the organizations it needs to convince are the ones still untangling last year's AI experiments, still waiting for their compliance teams to write the first policy on AI-mediated actions and still running their businesses on the Microsoft infrastructure Anthropic has chosen not to prioritize. The gap between where Anthropic thinks work is going and where most enterprises are is the real story here, and MCP Apps is not currently closing it.


About the Author
David Barry

David is a European-based journalist with 35 years of experience who has spent the last 15 following the development of workplace technologies, from the early days of document management, enterprise content management and content services. With the rise of remote and hybrid work models, he now covers the evolution of technologies that enable collaboration, communications and work, and has recently spent a great deal of time exploring the far reaches of AI, generative AI and general AI.

Main image: Adobe Stock