Feature

Inside the AI Accountability Gap: Why Enterprises Are Building Their Own Rules

By Phil Britt
The AI accountability gap is widening. Here's how enterprises are stepping in as lawmakers struggle to keep pace.

Organizations are rapidly advancing AI initiatives and maturing their AI governance, driven by concerns about accountability and by the risk of new regulation if industry self-governance is found wanting.

The issue, according to the Carnegie Council, is that AI systems are often criticized for being a "black box," meaning how an output is achieved cannot be fully explained or interpreted by its users. If AI decision-making cannot be explained or understood, assigning responsibility and holding parties accountable for harmful outputs becomes very difficult.

The Accountability Crisis at the Heart of AI Deployment

A strong AI accountability framework, according to the Center for American Progress, "must empower the US government to create and enforce new rules around AI, such as to designate high-risk cases and sectors that in some cases should go through a government review before deployment, prevent national security threats, outright prohibit certain dangerous uses and establish broad principle-based rules to ensure safe and effective systems and prevent algorithmic discrimination." 

As one Forrester report points out, AI, and specifically genAI, can create new risks and exacerbate existing ones. Enterprise and societal concerns around biased decisioning, infringement of workers’ rights, privacy, safety, IP leakage and copyright violations are all coming under legislative and legal scrutiny.

However, only 45% of organizations report having achieved advanced AI governance maturity, in which AI policy is aligned with the AI governance operating model, according to Gartner analyst Lauren Kornutick. The finding comes from the firm's "AI Mandates for the Enterprise" survey.

“Building trustworthy AI frameworks is essential for enterprises to effectively manage the risks associated with AI technologies and demonstrate appropriate management oversight,” Kornutick said. “To do this, organizations need to ground themselves in responsible AI principles. We cannot rely solely on regulations to govern AI use — global standards for responsible use of technology can vary widely and even conflict by jurisdiction.”

Related Article: AI Regulation in the US: A State-by-State Guide

Companies Can’t Wait for Washington to Act on AI 

In the absence of legislation, or where regulatory regimes diverge, developing and adhering to one’s own principles for the responsible use of technology provides focus and, as a result, faster technological advancement, Kornutick said.

For example, the US Department of Health and Human Services (HHS) finalized a health IT rule that includes an algorithmic transparency provision, creating industry standards to assess AI for fairness, appropriateness, validity, effectiveness and safety.

In another January 2024 example, the New York Department of Financial Services (NYDFS) issued proposed guidance, a de facto requirement, on AI use that applies to insurers operating in New York. “This guidance is even more far reaching than federal proposals and identifies requirements for qualitative and quantitative risk assessment, the roles of executives and boards of directors and governance levels commensurate with the entity’s risk appetite,” the Forrester report noted.

Forrester recommended that enterprises keep a watchful eye on similar requirements in their own industries, as well as those that affect customers and third-party partners. The challenge is navigating the enactment and enforcement of existing laws while preparing for emerging AI laws, all amid regulatory uncertainty.

The Anatomy of an Effective AI Governance Program

AI governance programs should monitor adherence to principles that align with the company’s governance practices in other areas, Kornutick noted.


Gartner advocates for a three-dimensional approach to AI controls, covering risk, value and decision complexity. Focusing too much on one dimension will lead to underestimating the risk or overcorrecting at the expense of the enterprise’s ability to build AI capabilities and literacy for long-term success.
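To make the three-dimensional idea concrete, here is a minimal Python sketch of how a governance team might score a use case on risk, value and decision complexity and let all three drive triage together. The class, the scoring scales, the thresholds and the review tiers are illustrative assumptions, not Gartner's published methodology.

```python
# Illustrative sketch only: scales, thresholds and review tiers are
# assumptions for this example, not Gartner's methodology.
from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str
    risk: int        # assumed scale: 1 (low) to 5 (high)
    value: int       # assumed scale: 1 (low) to 5 (high)
    complexity: int  # assumed scale: 1 (simple decisions) to 5 (complex)

def triage(case: AIUseCase) -> str:
    """Assign a review tier using all three dimensions together."""
    scores = (case.risk, case.value, case.complexity)
    # High risk escalates, and so does a sharp imbalance across the
    # dimensions; the failure mode the text warns about is letting any
    # single dimension dominate the assessment.
    if case.risk >= 4 or max(scores) - min(scores) >= 3:
        return "full governance review"
    if sum(scores) >= 9:
        return "standard review"
    return "lightweight monitoring"

print(triage(AIUseCase("claims chatbot", risk=4, value=5, complexity=3)))
# -> full governance review
```

No single dimension decides the outcome on its own, which is the point of the three-dimensional approach.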

AI governance should be its own separate initiative, but can mirror a functioning governance operating model (such as IT governance or data governance models). For AI, start with AI-specific policies and controls.

Kornutick recommended enterprises include the following elements in their governance programs:   

  • Note whether AI-specific policies and controls are in place and how they tie into established organizational frameworks.
  • Create a central repository of all AI use and assign risk scores to triage how each AI system will be monitored (see the sketch after this list).
  • Develop and enact AI ethics frameworks, procedures and processes.
  • Follow AI ethics procedures continuously, with a case-by-case process for discussing ethical dilemmas as they arise.
  • Require vendors with AI components to provide verifiable attestations of expected AI behavior and trace the lineage of data to vendors on an ongoing basis.
  • Continuously monitor high-risk AI systems, agents and use cases, and periodically audit all other AI systems, agents and use cases.
  • Uniformly apply privacy program requirements across all AI activities.
  • Consider deploying AI TRiSM (trust, risk and security management) technology systems, such as those that can perform runtime inspection and enforcement for high-risk systems.
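As one way to picture the central repository and monitoring items above, here is a minimal Python sketch of an AI inventory that records a risk score and a vendor attestation flag for each system and derives a monitoring cadence from them. The field names, the 1-to-10 scale and the threshold of 7 are assumptions for illustration, not a Gartner specification or any particular AI TRiSM product.

```python
# Illustrative sketch only: field names, scales and thresholds are
# assumptions, not a reference implementation.
from dataclasses import dataclass

@dataclass
class AIRecord:
    system: str
    owner: str
    vendor_attestation: bool  # verifiable attestation of expected behavior
    risk_score: int           # assumed scale: 1 (minimal) to 10 (critical)

class AIRegistry:
    """Central repository of all AI use, per the list above."""

    def __init__(self) -> None:
        self._records: list[AIRecord] = []

    def register(self, record: AIRecord) -> None:
        self._records.append(record)

    def monitoring_plan(self) -> dict[str, str]:
        """Continuous monitoring for high-risk or unattested systems;
        periodic audits for everything else."""
        plan = {}
        for r in self._records:
            if r.risk_score >= 7 or not r.vendor_attestation:
                plan[r.system] = "continuous monitoring"
            else:
                plan[r.system] = "periodic audit"
        return plan

registry = AIRegistry()
registry.register(AIRecord("underwriting model", "risk team", True, 8))
registry.register(AIRecord("marketing copy assistant", "marketing", True, 3))
print(registry.monitoring_plan())
# {'underwriting model': 'continuous monitoring',
#  'marketing copy assistant': 'periodic audit'}
```

A real registry would live in a governed system of record, but even this toy version shows how risk scores can triage which systems get continuous monitoring versus periodic audits.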

“Trustworthy AI frameworks are only as good as whether organizations are adhering to them,” Kornutick said. “Organizations need to enforce acceptable use of AI policy violations consistently and incentivize responsible use of AI.”

About the Author
Phil Britt

Phil Britt is a veteran journalist who has spent the last 40 years working with newspapers, magazines and websites covering marketing, business, technology, financial services and a variety of other topics. He has operated his own editorial services firm, S&P Enterprises, Inc., since the end of 1993. He is a 1978 graduate of Purdue University with a degree in Mass Communications.

Main image: A meeting among government officials. U.S. Department of State on Wikimedia Commons