Feature

Enterprises Are Handing Over Authority and Calling It a Software Purchase

By David Barry
Enterprises are vetting AI agents like software vendors. But an agent that can act across your systems isn't software. It's a delegation of authority.

Businesses have used the same SaaS procurement model for the last 10 to 15 years. Its vetting centers on a core assumption: the software is built to respond to human inputs.

The SaaS procurement playbook was never designed to handle AI agents. Yet enterprises are still applying it to a technology that doesn't behave like software. The difference between the two is where risk lives.

The Difference Between Software and AI Agent Procurement

Organizations are treating agent procurement as a software purchasing decision when it is, in practice, an authority design decision, said Insynergy founder and AI governance consultant Ryoji Morii.

An AI agent connected to email, HR, CRM, finance and file systems does not behave like an application, Morii said. It becomes an operational participant that can retrieve, trigger, modify and execute across systems that most employees are never permitted to touch simultaneously.

"Most organizations vet agent vendors too narrowly, checking certifications and contractual protections while paying insufficient attention to how authority is being distributed through the agent's access model," Morii continued.

Morii frames the problem in three layers:

  1. Security due diligence — The baseline checks most organizations attempt.  
  2. Authority layer — Defines what decisions the agent can make, when human approval is needed and what triggers escalation.
  3. Accountability layer — The ability to reconstruct after the fact who authorized access, what scope was granted and under whose authority actions were taken.
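Morii's three layers can be made concrete. The sketch below is a minimal, hypothetical illustration (the names `AgentPolicy`, `AuditEntry` and `perform` are invented for this example, not taken from any product): the authority layer becomes an explicit decision function, and the accountability layer becomes an audit record that captures who authorized the action, so the question "under whose authority?" can be answered after the fact.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AgentPolicy:
    """Authority layer: what the agent may do on its own."""
    agent_id: str
    allowed_actions: set[str]    # actions the agent may take unassisted
    approval_required: set[str]  # actions that trigger human escalation

    def decide(self, action: str) -> str:
        if action in self.allowed_actions:
            return "allow"
        if action in self.approval_required:
            return "escalate"    # hand off to a human approver
        return "deny"

@dataclass
class AuditEntry:
    """Accountability layer: who authorized what, and when."""
    agent_id: str
    action: str
    decision: str
    authorized_by: str
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

audit_log: list[AuditEntry] = []

def perform(policy: AgentPolicy, action: str, authorized_by: str) -> str:
    """Every action passes the authority check and leaves an audit trail."""
    decision = policy.decide(action)
    audit_log.append(AuditEntry(policy.agent_id, action, decision, authorized_by))
    return decision
```

The point of the sketch is the third layer: because every decision, allowed or not, lands in `audit_log` with an authorizer attached, the post-incident reconstruction Morii describes becomes a query rather than an investigation.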

The third layer is the one most procurement processes overlook, Morii said.

The Problem With AI Agents' Demand for Access

For an AI agent to be useful enough to deploy, it must be connected to multiple enterprise systems simultaneously, and each of those connections is a credential, a permission, an access point that tends to persist, explained Abstract COO and co-founder Chris Camacho, who co-authored research into the authority problem in multi-agent systems.

Organizations are handing out OAuth tokens and API keys to agent vendors the way they once handed VPN credentials to contractors, except this time there's no human on the other end making judgment calls.

The agent operates continuously, across multiple systems, with permissions scoped for convenience during onboarding and never revisited. "When you terminate a vendor relationship," Camacho said, "do those tokens get revoked across every connected system? In most organizations, the answer is 'we think so' rather than 'we verified it.' That gap between assumption and verification is where breaches live."
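Camacho's "we think so" versus "we verified it" gap can be closed mechanically. The sketch below is a hypothetical offboarding check (the data and function names are illustrative; in practice each probe would call the real system's API with the retired token and expect an authentication failure): revocation counts as done only when every connected system actively rejects the credential.

```python
# Stand-in for each connected system's live-credential state.
# In a real check, these would be API calls made with the retired token.
ACTIVE_TOKENS = {
    "crm": {"tok-123"},  # token was never revoked here
    "email": set(),      # already revoked here
}

def token_still_accepted(system: str, token: str) -> bool:
    """True means the system still honors the token (a problem)."""
    return token in ACTIVE_TOKENS[system]

def verify_revocation(token: str) -> dict[str, str]:
    """Mark a system 'revoked' only when it actually rejects the token."""
    return {
        system: ("STILL ACTIVE" if token_still_accepted(system, token) else "revoked")
        for system in ACTIVE_TOKENS
    }
```

Run against a terminated vendor's token, a report like `{"crm": "STILL ACTIVE", "email": "revoked"}` turns the assumption into evidence, and surfaces exactly the lingering access where, as Camacho puts it, breaches live.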

Regulators have identified the same problem. NIST's National Cybersecurity Center of Excellence argues agents should be treated as identifiable entities within enterprise identity systems, not as anonymous automation running under shared credentials.

A startup that has prioritized product velocity over security infrastructure won't have anyone monitoring credential hygiene across deployments, a tested incident response playbook or SOC 2 Type II certification, which requires sustained organizational discipline that growth-stage companies routinely defer.

At the heart of this is a category error. Enterprises evaluate AI agent vendors using frameworks designed for software that reads data and returns it. But an agent that can act is not software in that sense. It is closer to a staff member with a master key. The difference matters because staff can be supervised, questioned and dismissed. An agent operating under a vendor relationship that nobody has formally classified, across systems that nobody has fully mapped, under permissions that nobody has revisited, cannot be any of those things.

Cleric CEO and co-founder Shahram Anver is clear about where the floor should be. "Any AI agent vendor without these basics," meaning SOC 2 Type II, a published trust center and independently verified penetration test results, "shouldn't be getting access to production systems."

Architecture matters too. Cleric's AI site reliability engineering agent is read-only by design. "That architectural constraint is more meaningful than any access token policy, because it limits the blast radius to zero even in a worst-case scenario," Anver said. A write-enabled agent that suffers a breach can modify records, send communications and trigger downstream actions across every connected system before anyone knows something has gone wrong.
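A read-only constraint of the kind Anver describes can be enforced at the agent's tool layer rather than by policy. The snippet below is a minimal sketch, not Cleric's implementation (the names `agent_request` and `WriteAttempted` are invented for illustration): mutating HTTP verbs simply have no code path to the network, so a compromised agent cannot write no matter what it is instructed to do.

```python
# Architectural constraint: the agent's only outbound call site
# refuses every verb that could mutate state.
READ_ONLY_METHODS = {"GET", "HEAD", "OPTIONS"}

class WriteAttempted(Exception):
    """Raised when the agent tries any state-changing operation."""

def agent_request(method: str, url: str) -> str:
    """Gate every outbound call; writes never reach the network."""
    if method.upper() not in READ_ONLY_METHODS:
        raise WriteAttempted(f"{method} {url} blocked: agent is read-only")
    return f"fetched {url}"  # stand-in for the actual HTTP call
```

The design choice is the point: an access-token policy can be misconfigured or drift, but a tool layer with no write path bounds the blast radius by construction.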

Choosing Speed Over Oversight 

NIST's framework is still in the comment period. Production deployments are not.

Velocity has become the dominant value in procurement culture, said James Brundage, EY global and Americas tech leader, drawing on data from EY's latest Technology Pulse Poll.

The research found 85% of technology leaders prioritize speed to market and iterative innovation, while only 15% insist on full pre-launch validation. More than half of department-level AI initiatives are operating without formal approval or oversight.

"Security concerns are no longer hypothetical; they're already materializing," Brundage explained. "In the past 12 months, 45% of technology executives report a confirmed or suspected sensitive data leak tied to unauthorized third-party generative AI use, and 39% report similar concerns around proprietary IP leakage."

Camacho describes the resulting pattern: The line-of-business owner brings in the tool, optimizing for speed. The security team finds out after the agent gains access. Only 29% of organizations reported feeling prepared to secure their agentic deployments in Cisco's State of AI Security 2026 report.

A separate Gravitee survey found that just 14% of agents went live with full security and IT approval. "That tells you everything about where the real decision-making is happening," said Camacho. Security maturity is not a feature of adoption. It is an obstacle. 

Responsibility Remains In-House

The governance questions don't disappear when the agent goes live. They just become harder to answer.


"When something goes wrong across six connected systems, the incident response isn't a technical problem first," Camacho said. "It's a governance problem. You're trying to figure out who authorized what, with which permissions, under which policy and the answers don't converge." He sees procurement processes where IT assumes the business owner vetted the vendor, the business owner assumes IT handled access controls and no one informed legal that the agent qualified as a third-party data processor.

The void works in favor of the vendors that skipped building the governance infrastructure to support that accountability. The uncomfortable truth is that the enterprise, not the vendor, bears the consequence when something goes wrong. The vendor loses a contract. The enterprise loses data, faces regulatory exposure, and spends months untangling which system was touched, when and by whom. That asymmetry is the real problem. 


About the Author
David Barry

David is a Europe-based journalist of 35 years who has spent the last 15 following the development of workplace technologies, from the early days of document management through enterprise content management and content services. With the rise of remote and hybrid work models, he covers the evolution of technologies that enable collaboration, communication and work, and has recently spent a great deal of time exploring AI, generative AI and artificial general intelligence.

Main image: Adobe Stock