Editorial

Interoperability Is Coming. Trust Will Decide How Fast.

By Sanjay Rakshit
Agentic AI promises enterprise scale, but trust remains the missing layer between isolated tools and interoperable systems.

With the rapid proliferation of AI platforms, it’s easy to assume that only the best will ultimately win. But the future isn’t one large platform doing everything. It’s many finely tuned systems, each doing what they’re good at, working together.

Interoperability is where AI is heading. And it’s already underway. Gartner predicts that 40% of enterprise applications will be integrated with task-specific AI agents by the end of 2026.

In an enterprise setting, interoperability isn’t a new idea. We’ve seen this before with SaaS, where APIs allowed different platforms to contribute to a larger outcome. That accelerated adoption and changed the landscape. For agentic AI to succeed, it must do the same.

But there’s an unmistakable gap between where we are today and that future: trust.


Minding the Trust Gap

Interoperability sounds straightforward. Connect systems with shared context, and you get coordinated actions. In reality, it requires something much harder. Systems need to act on each other’s behalf. That means humans giving up control.

As we’ve explored the psychology of delegation, we’ve grown comfortable asking AI for help. But letting it take responsibility is a different step, especially when most agentic systems still rely on chatbots or automation, rather than true agents.

So, we hesitate. This helps explain the oft-cited estimate from McKinsey that fewer than 10% of organizations have scaled AI in ways that deliver meaningful business impact.

And that hesitation is exactly what’s standing between us and interoperability.

A Lesson From Aviation

In the early days of commercial aviation, flying made very little sense. There was no precedent. Humans weren’t meant to be in the air. And importantly, it wasn’t solving an urgent problem. Long-distance travel already existed. It was slower, but predictable. A journey could be a family outing with stops along the way.

Now imagine someone coming along and saying, “Give that up. I’ll put you in a metal box, thousands of feet in the air, moving at speeds you can’t comprehend. It’s safe, and you’ll get there faster.” It sounds irrational.

Yet today, we do exactly that without hesitation. We board the plane, not knowing the pilot and having zero influence on the process. And interestingly, for many people, that’s when they do their best thinking. Control is relinquished, and focus follows.

How did it happen? Airlines didn’t try to convince everyone at once. They focused on business leaders with multiple locations and limited time. Instead of asking for blind trust, the question was, “What if you could be in five places in a week instead of one?”

That’s when it clicked. The value outweighed the risk. And over time, something else happened. People didn’t just learn to trust planes. They learned to trust airlines. The provider mattered more than the technology.


The Same Shift Is Happening in AI

Agentic systems are not an incremental improvement over chatbots. Chatbots respond. Agentic systems take responsibility. Comparing the two is like comparing a smartphone to a pager.

The biggest difference is delegation. With agentic AI, we’re asking systems to own outcomes. To think, plan and act without us being involved at every step. Interoperability becomes more than data integration.

That requires a level of trust we haven’t fully built yet. So, we fall back on what we know. We test in controlled environments. We keep a tight grip on control.

But history shows a consistent pattern. When we leap beyond imagination, capability expands.

Air travel didn’t replace cars. It made them better. Advances in aerodynamics, materials and engineering flowed into automotive and other industries. Cars became faster, safer and more efficient.

I believe the same pattern will play out here. Agentic AI will expand human capability. When you remove the need to manage every step, you create space to focus on higher-value work.

The irony is the very thing that makes agentic AI uncomfortable — giving up control — is also what unlocks its value.

How Trust Gets Built

Trust in agentic AI is a design choice. It comes from predictable systems, strong governance and adherence to standards like ISO/IEC 42001 for AI management systems. You don’t eliminate the risk of delegation. You contain the blast radius if something goes wrong.


Visibility and auditability are the foundation. These are decision-making systems, and leaders need to understand how decisions are made and why actions are taken. This is not the place for black boxes. It’s really no different from how we develop trust with colleagues.

Trust also builds in stages. We can start by deploying AI agents alongside systems we’re already comfortable with. Over time, that confidence extends to agentic systems working with each other.

This requires systems that are built for enterprise use, not demonstrations. Systems that operate within clear boundaries, without relying on endless prompts and uncontrolled token consumption. Systems where humans remain in the loop where it matters, not everywhere.

And importantly, it means trusting the provider behind the system, just as we choose airlines based on reliability, not the aircraft model they fly.


From Trust to Interoperability

Interoperability is the destination. But before systems can work together, they need to be trusted to make real decisions.

Without trust, everything stays siloed. Humans stay in control of every step. Scale never happens. With trust, systems can delegate to each other. Work happens faster and outcomes compound.

This shift requires executive leadership. Not by mandating adoption but by demonstrating it. Taking the first step, building trust in controlled ways, and bringing the organization along.

We didn’t embrace air travel because planes improved. We adopted it because we decided who we could trust. The same decision is in front of us again. Are we ready to make that leap?


About the Author
Sanjay Rakshit

Sanjay Rakshit is the VP of AI and Analytics at Poppulo, leading a global team driving a GenAI-first strategy across communications, digital signage, and workplace solutions. Having started in AI during the “AI Winter,” he has spent over 20 years scaling deep tech companies in fintech, speech, and GenAI, creating products that solve customer problems, deliver investor value, and achieve transformative growth.
