Microsoft dropped hundreds of new features this spring across Dynamics 365 and Power Platform. Calling it a leap into agentic AI, the company added Copilot agents to sales, finance, HR and supply chain and scrapped its twice-yearly release schedule in favor of continuous updates.
That's a lot of verbs for one release wave, but the centerpiece is an architectural change: systems that don't assist workers so much as act on their behalf by completing tasks, triggering approvals and making decisions autonomously across enterprise data.
New additions include:
- Agentic Center of Enablement, a governance layer designed to surface agent action plans for administrator review before execution.
- Copilot agent for Dynamics 365 Sales, which autonomously researches prospects, drafts outreach and updates CRM records without human prompting.
- Expanded low-code tools that Microsoft said put agent-building in the hands of everyday business users.
But there’s a gap between what Microsoft is shipping and what most organizations are ready to run.
Tech Vendors Can’t Sell AI Infrastructure
Every major enterprise technology wave arrives with the same implicit assumption: Organizations are prepared to absorb it. They rarely are.
With Wave 1, the constraint isn't the feature set, but the organizational infrastructure underneath it. Microsoft can't ship that.
"The bottleneck is not the availability of AI features," said Mahmoud Ramin, research director at Info-Tech Research Group and a specialist in enterprise AI governance. "It is organizational readiness in terms of governance maturity, data quality, infrastructure, process standardization and internal skillsets."
Microsoft's Agentic Center of Enablement exists, in part, to slow down agents before they cause damage. But an AI governance feature and a governance culture are different things.
"No organization is ready to absorb everything at once, nor should it try," said Daniel Burrus, a technology futurist and business strategist who advises organizations on anticipatory leadership. "The winners will separate what is certain and has the highest impact, and then phase adoption in a way that fits their readiness."
For most enterprises, that selective restraint is harder to maintain than it sounds. "The conversation is no longer about digesting release notes,” Ramin said. “Organizations should decide which agents are worth investing in, which should stay in the sandbox and which legal requirements need review." That's a product management function most digital workplace teams aren't staffed or structured to perform at this pace.
When the AI Agent Gets It Wrong
When an agent rejects a supplier invoice, flags a hiring candidate or triggers a financial approval autonomously, who owns that outcome?
It sounds like a philosophical question, but it's an operational one, and most enterprises have no answer ready.
"Most frameworks today track who accessed what," said Brian Behe, CTO of RIIG Technology, who built machine learning systems for the NSA and U.S. Cyber Command as a former director of AI at CyberPoint. "They do not track who decided what. When an agent makes a consequential decision inside a business process, that gap shows up quickly." The governance infrastructure most organizations rely on was designed for humans making decisions through software, not software making decisions instead of humans.
The consequences of that mismatch are predictable. "We have built systems that act faster than any human can intervene, then act surprised when no one knows who is responsible," Behe said. "The answer is not to slow the technology; it is to build accountability into the architecture from day one."
Behe identified three prerequisites before any organization deploys Wave 1 agents at scale:
- Decision traceability
- Ownership
- Risk-based oversight
Without all three, there is no meaningful way to understand what an agent decided, why it decided it or what to do when it was wrong.
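What might those three prerequisites look like as a data structure? The sketch below is purely illustrative: the class, field names and risk tiers are assumptions for this article, not part of any Microsoft or RIIG product. It records who decided what and why (traceability), names an accountable human (ownership) and routes high-risk decisions to review (risk-based oversight).

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical decision record; all names here are illustrative assumptions.
@dataclass
class AgentDecision:
    agent_id: str    # which agent acted (decision traceability)
    owner: str       # the named human accountable for this agent (ownership)
    risk_tier: str   # "low" | "medium" | "high" (risk-based oversight)
    action: str      # what the agent decided to do
    rationale: str   # why, captured at decision time, not reconstructed later
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def requires_human_review(decision: AgentDecision) -> bool:
    """Route high-risk decisions to the owner before execution."""
    return decision.risk_tier == "high"
```

The point of the sketch is that all three prerequisites are fields on the same record: if any one is missing, there is no one to ask, no trail to follow or no rule for when a person steps in.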
"Autonomous agents can accelerate action, but humans must define the rules, the oversight and the consequences before automation takes over,” Burrus said. Most organizations have not done that work.
AI Amplifies the Problems
Wave 1 is architecturally dependent on a unified Dataverse foundation. The premise is that enterprise data is sufficiently clean, connected and consistent to support autonomous decision-making across functions. In most organizations, that isn’t true.
"A unified data foundation sounds clean in a product announcement," Behe said. "In reality, most organizations are dealing with fragmented and inconsistent data across multiple systems. Wave 1 does not solve that problem. It exposes it."
Agentic AI doesn't improve bad data; it acts on it, at scale, faster than humans can intervene.
"AI amplifies whatever foundation you give it," Behe said. "If the data is strong, you get leverage; if it is not, you get confusion at scale."
"Bad data does not become smart because AI touches it,” Burrus agreed. “The inputs must be fixed first."
What does that look like in practice? "Across our production deployments, the realized gain typically clusters around one narrow workflow per company," said Victor Smushkevich, founder of Call Setter AI, who has run production AI deployments for enterprise clients across industries. "Companies that try to fan out agents across many workflows before proving one tend to see worse outcomes, not better."
That is likely to hold for Wave 1 as well: Early wins will concentrate in a handful of teams. Broad transformation, if it arrives, comes later.
Transformation Is an Operating Expense
Fixing fragmented data is expensive. Building governance infrastructure costs time, expertise and internal capacity that most teams don't have to spare. And almost everything Microsoft is promising in Wave 1 runs through a Copilot license priced at $30 per user per month, layered on top of existing Microsoft 365 subscriptions. At a 1,000-person organization, that's a list price of $360,000 a year before the governance work begins.
"The moment it becomes unaffordable is when the cost of licenses rises faster than the value of the output,” Burrus said. “If AI is not measurably freeing people to do higher-value work, the transformation is not scaling; it is just getting more expensive." CFOs are already running that calculation, and the numbers are rarely as clean as the product announcements suggest.
Moreover, the organizational work required to evaluate, sequence and govern agents carries its own price in time, expertise and internal capacity. It's all incurred before a single workflow is functioning, and it rarely makes it into budget conversations, Ramin said.
"The governance lift is the part nobody budgets for,” Smushkevich agreed. “Every agent that acts on a worker's behalf needs an explicit rollback, a confirmation threshold and a scope limit per action. That work is not in the feature list, and it is what separates what gets announced from what actually runs in production a year later."
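The guardrails Smushkevich describes can be sketched in a few lines. This is a minimal illustration under assumed names and thresholds, not a real Copilot or Power Platform API: each agent gets an explicit action scope, a threshold above which a human must confirm, and a rollback hook for when the agent gets it wrong.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical per-agent guardrail; names and thresholds are illustrative.
@dataclass
class ActionGuardrail:
    allowed_scope: set[str]         # actions this agent may take at all
    confirmation_threshold: float   # value above which a human must confirm
    rollback: Callable[[str], None] # how to undo the action after the fact

def execute(guardrail: ActionGuardrail, action: str,
            amount: float, confirmed: bool) -> str:
    """Gate a single agent action against its guardrail."""
    if action not in guardrail.allowed_scope:
        return "blocked: out of scope"
    if amount > guardrail.confirmation_threshold and not confirmed:
        return "held: awaiting human confirmation"
    return "executed"
```

None of this logic ships in the feature list; it is the kind of wrapper each deploying organization would have to build and budget for per agent, per action.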
Microsoft's 2026 Release Wave 1 is a consequential change in enterprise software. The agentic direction is right, and the features, by themselves, are impressive.
But features don't transform organizations. Organizations transform organizations. Wave 1 delivers the conditions under which transformation becomes theoretically possible, provided the data is clean, the governance is built, the accountability is defined and the budget holds.
Those are not small conditions, and they are the customer's problem to solve.
Editor's Note: The AI bill is quickly adding up. Read on for other perspectives:
- The AI Bill Is Coming Due: Why Enterprise Pricing Will Never Be the Same — Microsoft’s Microsoft 365 price increases mark the end of subsidized AI and the beginning of software pricing driven by real, recurring infrastructure costs.
- Google Starts Charging for Previously Free AI Features — Google calls it democratization. The fine print tells a different story — as previously free AI features move behind a paywall, the access gap widens.
- When the AI Agent Runs Wild, Who Pays the Bill? — AI agents are making spending decisions your finance team never approved. The answer isn't better dashboards.