Feature

To Fix AI Governance, Stop Building It Backwards

6 minute read
By David Barry
AI governance isn’t slowing companies down — building it backwards is. Speed comes from clean data, clear accountability and governance built in from day one.

Artificial intelligence (AI) governance is failing. Not because companies lack frameworks, but because they are building them backwards.

AI is no longer a future technology. It is reshaping how we work, decide and compete right now. Customer service runs on it. Code generation depends on it. Drug discovery is faster with it. AI systems are part of daily operations across every sector. For organizations, the pressing question is how to govern these systems responsibly while maximizing their value.

The challenge cuts across disciplines. It is organizational, ethical and strategic. Companies race to deploy AI, but most lack clear frameworks for accountability, data integrity and long-term value creation. AI systems evolve faster than the governance around them. The tension between innovation and control grows sharper every quarter.

Governance Isn't the Enemy of Speed

"Without high-quality, trusted data, AI is nothing more than a novelty," Anahita Tafvizi, chief data and analytics officer at Snowflake, said. "AI agents are only as good as the data and governance behind them."

This reality is driving a shift in organizational roles. Chief data officers (CDOs) are evolving from back-office custodians into frontline operators, becoming the enterprise's AI COO. They build the foundation for AI, taking accountability for outputs, owning governance and proving business value.

The AI governance challenge comes down to timing and integration. The most successful approach balances speed with scrutiny: move fast where the risk is low, slow down where the consequences are high.

"Embedding governance into the workflow from the start rather than adding it afterward allows teams to innovate confidently,” said Richard Harbridge, Microsoft MVP and strategist at ShareGate. Automated compliance and security checks oversee operations in real time, creating guardrails that travel with the work itself.

"Contrary to popular opinion, governance isn't a rigid constraint," said Yakir Golan, CEO of cyber and AI risk quantification leader Kovrr. "It's what ensures GenAI systems can continuously deliver value without creating material liabilities."

Each time a new model enters the workflow, it should trigger a structured risk assessment. What safeguards are already in place? What still needs addressing? Where does the organization stand on control maturity?
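
One way to picture that intake step is a simple record created whenever a new model arrives, answering those three questions directly. This is a sketch under assumed field names and maturity levels, not a formal standard.

```python
# Illustrative sketch of a model-intake risk assessment record.
# Field names and maturity levels are assumptions, not a prescribed standard.
from dataclasses import dataclass, field
from enum import Enum

class Maturity(Enum):
    INITIAL = 1   # ad hoc, undocumented controls
    DEFINED = 2   # documented but unmeasured
    MANAGED = 3   # measured and reviewed

@dataclass
class ModelIntakeAssessment:
    model_name: str
    safeguards_in_place: list[str] = field(default_factory=list)
    gaps_to_address: list[str] = field(default_factory=list)
    control_maturity: Maturity = Maturity.INITIAL

    def approved(self) -> bool:
        """A new model only ships once no gaps remain and controls are managed."""
        return not self.gaps_to_address and self.control_maturity is Maturity.MANAGED

intake = ModelIntakeAssessment(
    model_name="summarizer-v2",  # hypothetical model
    safeguards_in_place=["access control", "output logging"],
    gaps_to_address=["drift monitoring"],
)
print(intake.approved())  # False: drift monitoring is still missing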

Building transparency means grounding AI oversight in internationally recognized frameworks such as the NIST AI RMF or ISO 42001. A shared standard creates consistency and gives both internal stakeholders and external auditors a common reference point.

But frameworks alone aren't enough, Golan said. Every AI workflow should have a named owner accountable for how it works, what data it touches and when it changes. Important automations should have a second named contact for escalation pathways and resilience. Pair this with automated drift detection and deep visibility into the underlying systems.
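
A hedged sketch of what that pairing could look like: a workflow record that carries both names alongside a crude drift signal. The threshold test stands in for a proper statistical check, and every identifier is hypothetical.

```python
# Sketch, not a prescription: each workflow entry carries a named owner,
# an escalation contact, and a crude drift signal on one monitored metric.
from dataclasses import dataclass
from statistics import mean

@dataclass
class GovernedWorkflow:
    name: str
    owner: str             # accountable for how it works and what data it touches
    escalation: str        # second name for escalation pathways and resilience
    baseline: list[float]  # reference values for a monitored metric

    def drifted(self, recent: list[float], tolerance: float = 0.2) -> bool:
        """Flag drift when the recent mean moves more than `tolerance` (relative)
        from the baseline mean. Real systems would use a proper statistical
        test; this only shows the shape of the idea."""
        base = mean(self.baseline)
        return abs(mean(recent) - base) > tolerance * abs(base)

wf = GovernedWorkflow(
    name="invoice-triage",        # hypothetical workflow
    owner="a.chen",
    escalation="ops-oncall",
    baseline=[0.91, 0.93, 0.92],  # e.g., weekly accuracy on a labeled sample
)
if wf.drifted([0.71, 0.68, 0.70]):
    print(f"Escalate {wf.name} to {wf.escalation}")
```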

Clean Data First, AI Second

Without a solid data foundation, AI integration creates more problems than it solves. Small oversights cascade. An employee's crunch-time search finds outdated information. Irrelevant content muddles a critical decision. When content systems are filled with stale information, AI amplifies the chaos rather than cutting through it.

Effective governance starts long before an AI rollout. It begins with clean, structured and well-governed content.

Karen Downs, senior strategic advisor for intranet at Staffbase, advocates for what she calls a "source of truth architecture." She spent years overhauling large-scale intranets where HR, IT and communications created siloed content that needed untangling.

The goal isn't forcing all content into one repository. It's defining the source of truth for each content type. Employees need to know which system holds the authoritative HR policy. Their AI assistants need to know where to find the quarterly sales report.
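
As a sketch, a "source of truth architecture" can be as plain as one map from content type to authoritative system, consulted by people and AI assistants alike. The system names below are invented for illustration.

```python
# Illustrative "source of truth" map: each content type resolves to exactly
# one authoritative system. The system names are hypothetical.
SOURCE_OF_TRUTH = {
    "hr_policy": "workday",
    "quarterly_sales_report": "sharepoint/finance",
    "it_runbook": "confluence/it",
}

def authoritative_source(content_type: str) -> str:
    """Employees and AI assistants resolve through the same map,
    so an assistant never answers from a stale copy."""
    try:
        return SOURCE_OF_TRUTH[content_type]
    except KeyError:
        raise LookupError(f"No source of truth defined for {content_type!r}")

print(authoritative_source("hr_policy"))  # workday
```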

The practical directive: Build on what already works. Organizations should extend existing data governance, data loss prevention and lifecycle policies before introducing AI layers. Visibility becomes the best defense against shadow AI. Companies should always know who's building what, where it runs and what data it uses.

"Oversight by design always costs less than cleanup after the fact," Harbridge noted.

When each department manages its own implementation, oversight fragments and governance loses its power. Every new system should be discovered and connected to one dynamic inventory of AI assets. This inventory tracks where the tool operates, the data it interacts with and how safeguards are performing.
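
A minimal sketch of such an inventory, with assumed field names: each discovered asset records where it runs, what data it touches and whether its safeguards are holding.

```python
# Sketch of a single dynamic inventory of AI assets; all names are assumptions.
from dataclasses import dataclass, field

@dataclass
class AIAsset:
    name: str
    runs_in: str              # where the tool operates
    data_touched: list[str]   # what data it interacts with
    safeguards_passing: bool  # how its safeguards are performing

@dataclass
class AIInventory:
    assets: dict[str, AIAsset] = field(default_factory=dict)

    def register(self, asset: AIAsset) -> None:
        """Every newly discovered system is connected to this one inventory."""
        self.assets[asset.name] = asset

    def at_risk(self) -> list[str]:
        """Assets whose safeguards are failing, surfaced for leadership review."""
        return [a.name for a in self.assets.values() if not a.safeguards_passing]

inv = AIInventory()
inv.register(AIAsset("chat-helper", "crm", ["customer emails"], True))
inv.register(AIAsset("shadow-summarizer", "laptop", ["contracts"], False))
print(inv.at_risk())  # ['shadow-summarizer']
```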

"This keeps risk management connected to reality," Golan said.  It lets leaders evaluate each integration against existing controls and quantify how each tool alters overall exposure.

Why Employees Resist AI

Technical foundations matter. But the human dimension consistently emerges as the most overlooked factor in successful AI integration.

Most organizations put off even basic change management, waiting for something definitive to happen first. Yet bringing AI into the workplace represents a bigger shift than the boldest past initiatives. It requires early engagement, even amid the unknowns.

Everyone says AI needs to be an assistant to employees, rather than a replacement for them, Downs said. But when new assistants arrive in the workplace, skepticism follows, especially if leadership chose the assistant without employee input.


The new assistant has to prove their value. The person being assisted has to be willing to share knowledge and train the assistant. There's always this risk that the new assistant will take over jobs. "Change management plans must account for this dynamic," Downs said.

Addressing these concerns requires creating space for unfiltered dialogue, forums where employees voice fears, share experiments and celebrate triumphs. 

Measuring sentiment through tools such as an employee net promoter score (eNPS) focused on AI helps organizations track progress. Equipping teams with tailored support, such as ethical prompting workshops, helps them use AI more effectively. The goal: make AI a multiplier for engagement, not a divider.
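
The eNPS arithmetic itself is standard: the share of promoters minus the share of detractors. A minimal sketch, applied to a hypothetical "AI at work" survey question:

```python
# eNPS arithmetic applied to an "AI at work" sentiment question:
# scores run 0-10; promoters are 9-10, detractors 0-6, passives 7-8.
def enps(scores: list[int]) -> float:
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return 100.0 * (promoters - detractors) / len(scores)

# Hypothetical responses to "How likely are you to recommend our AI tools?"
print(enps([10, 9, 8, 7, 6, 5, 9, 10, 4, 8]))  # 10.0
```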

Effective governance requires three principles, Downs said. 

  1. Clarity: Employees must understand what the AI system is doing and why it matters. 
  2. Consistency: Standardized data and messaging across platforms deliver reliable results. 
  3. Context: The AI system must be attuned to the organization's unique culture, language and needs.

The real value proposition shifts when viewed through this lens. AI's value is not in new features but in freeing people to do the work they've wanted to do but couldn't. When organizations automate what drains time, such as permissions, migrations and reporting, individual gains compound into faster teams and a stronger enterprise.

Measuring success requires looking beyond technical metrics. AI performance needs consistent evaluation anchored in a recognized risk management framework. But quantification should also translate safeguard maturity into measurable outcomes, connecting technical performance with human experience.

From Technologists to Translators

The leadership mindset itself must evolve. As AI adoption becomes required for business survival, those in AI COO positions balance innovation speed with foundational investments, design organizational structures for AI success and manage the enterprise's internal AI roadmap.

"Today's CDOs are evolving into the operators, integrators and accountability partners for businesses learning to run on AI," Tafvizi said.

The shift requires curiosity over credentials. Leaders do not need to code models. They need to question how AI shapes work.

The most effective leaders stay hands-on. They ask: What business decision does this influence? What data or permissions does it depend on? How will this change how teams collaborate or learn?

"They build trust not by having all the answers, but by modeling curiosity, transparency and a willingness to continuously learn alongside their teams," Harbridge explained.

This means leaders change from technology adopters to thoughtful translators, bridging AI's capabilities with the human elements of work. The most important skills blend digital literacy with empathy, communication and change management. Creating safe spaces for employees to voice curiosity, concerns and ideas turns those conversations into shared learning and alignment.

Ongoing use of GenAI depends on communication and collaboration that bridge technical work with business decision-making. The benefits AI promises fade when governance is treated as an afterthought. Yet governance itself fails when it operates in isolation.

"When risk, technology and business stakeholders operate from the same understanding, AI becomes a governed enterprise capability rather than a fragmented experiment,” Golan said.

The AI Governance Advantage

AI governance is not about choosing between innovation and control. It is about building systems where both thrive: strong data foundations, embedded governance, deliberate change management and leadership willing to evolve alongside the technology. These don't slow AI adoption down; they speed it up.

When compliance runs as part of the runtime, not as a roadblock, governance becomes continuous rather than episodic. Organizations innovate faster and safer. Every AI workflow ships with its own audit trail.
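
What "ships with its own audit trail" could mean in practice, sketched with assumed names: every workflow call appends a record of what ran, on which inputs and when.

```python
# Sketch: every AI workflow call appends to its own audit trail (assumed names).
import json
import time

def audited(workflow_name, func, **inputs):
    """Run a workflow step and record what ran, on which inputs and when."""
    entry = {
        "workflow": workflow_name,
        "inputs": list(inputs),  # log keys, not raw values, to avoid leaking data
        "started": time.time(),
    }
    result = func(**inputs)
    entry["finished"] = time.time()
    with open(f"{workflow_name}.audit.jsonl", "a") as f:
        f.write(json.dumps(entry) + "\n")
    return result

summary = audited("ticket-summary", lambda text: text[:50], text="long ticket body...")
```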

Successful organizations will not be those that move fastest or slowest, but those that build governance into their AI systems from the beginning. And they will be the ones that remember that technology transformation is ultimately about people.


About the Author
David Barry

David is a European-based journalist of 35 years who has spent the last 15 following the development of workplace technologies, from the early days of document management through enterprise content management and content services. With the rise of remote and hybrid work models, he covers the evolution of technologies that enable collaboration, communication and work, and has recently spent a great deal of time exploring the far reaches of AI, generative AI and artificial general intelligence.
