AI Governance Is a Challenge That Can't Be Ignored
The rapid pace of AI adoption in business could be heading for some major speed bumps.
According to Gartner experts presenting at the Gartner CFO & Finance Executive Conference in June 2022, half of all AI deployments are expected to be postponed through 2024 as companies face barriers to scaling AI in-house.
AI governance, the question of how enterprises will monitor and control the use of data in their AI platforms, is emerging as a significant snag. AI governance is a relatively new discipline, as AI itself is still in the early stages of development, but complications are already emerging.
For some companies, the governance of AI applications is folded into existing data or model governance structures, making AI governance part of the corporate fabric. Other companies build more elaborate, standalone structures, usually to manage a portfolio of AI applications or implementations. These are especially favored by companies with a cohesive, comprehensive strategy for how AI and other advanced technologies are incorporated into their operations.
“Whether basic or elaborate, an established, transparent structure of AI governance embedded within the company culture is far more likely to achieve the desired objectives than simply developing AI algorithms,” said Theresa Kushner, AI and analytics consultant at NTT DATA.
The Challenges With AI Governance
There are several reasons why governing AI is particularly challenging. According to Kushner, the fact that few people in organizations today really understand what AI is — and isn't — is the first hurdle. Adding to that complication, the emphasis has so far been on developing AI algorithms, not deploying them. Governance will need to be applied to the entire process from development to deployment.
Decisions about how AI should be governed also become harder as the technology is integrated into more real-life applications. Questions remain about how, when and where it can be used.
“When AI enters our lives, a control framework is needed, especially when AI is being used to make decisions that impact our lives,” said Mikaela Pisani, data scientist at Los Angeles-based Rootstrap. “Issues regarding data privacy and justice should be a continual discussion, especially with control and access to personal data.”
The best way to address the challenge is to develop a framework that sets explainability standards and fairness principles and addresses ethical challenges such as algorithmic bias and privacy. But even this idea presents challenges, said Sharad Varshney, CEO of Atlanta-based data governance consultancy OvalEdge.
"They must be safe, ethical and efficient," Varshney said. "However, organizations must deploy specific governance protocols to achieve these outcomes, but the road to AI governance isn't easy."
Ultimately, AI technologies must meet three fundamental standards:
- Explainability: It is often difficult to know why an AI system does what it does, so regulating it and auditing it after an infraction can be hard (see the sketch following this list).
- Non-directed behavior: AI doesn't need explicit instructions and can develop its own behavior, which makes it difficult, if not impossible, to delineate regulations that would cover all potential AI behaviors.
- Emergent behavior: Non-directed behaviors also mean that AI often behaves in unexpected ways, which can create unanticipated problems.
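To make the explainability problem concrete, here is a minimal sketch, assuming scikit-learn, of how a team might start asking a trained model which inputs drive its predictions. The synthetic dataset, model choice and feature names are stand-ins for illustration, not a reference to any tool mentioned in this article.

```python
# Minimal explainability sketch, assuming scikit-learn is installed.
# The synthetic dataset and model are illustrative stand-ins.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic "customer" data: 5 anonymous features, binary outcome.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Permutation importance estimates how much each feature drives predictions,
# one partial answer to "why does the model do what it does?"
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i, mean_importance in enumerate(result.importances_mean):
    print(f"feature_{i}: {mean_importance:.3f}")
```

Even this only explains global feature influence. Per-decision explanations, which an auditor would likely demand after an infraction, require considerably heavier tooling.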
Related Article: Why Ethical AI Won't Catch On Anytime Soon
The Buy-or-Build Question Complicates AI Governance
If those challenges weren't enough, there are additional considerations for effective AI governance. From a technological perspective, there are two possible situations, each with its own set of limitations.
In the first situation, organizations use third-party technology but, as a result, may not be able to access critical source code to make effective changes, or are left in the dark about how the technology operates in specific environments. In the second, companies develop in-house solutions that solve the knowledge and access problem, but at the cost of expensive development and maintenance.
There's also the issue of human resources and customer relations. In addition to issues related to training and literacy for staff, organizations must also account for the wider public. Customer-facing organizations need to offer users the chance to opt out of specific AI processes. Configuring this degree of fluidity in an architecture can be extremely challenging.
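One way to picture that fluidity: every AI-backed code path has to check a per-user consent record before it runs. The sketch below is a toy illustration; the ConsentRecord type, the opt-out flag name and the scoring function are invented for this example.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ConsentRecord:
    """Hypothetical per-user consent record; the flag name is invented."""
    user_id: str
    ai_scoring_opt_out: bool = False

def ai_score(features: List[float]) -> float:
    # Stand-in for a real model call.
    return sum(features) / len(features)

def score_customer(consent: ConsentRecord, features: List[float]) -> Optional[float]:
    # Honor the opt-out before any AI processing touches the user's data.
    if consent.ai_scoring_opt_out:
        return None  # route the user to a non-AI fallback workflow instead
    return ai_score(features)

print(score_customer(ConsentRecord("u1"), [0.2, 0.8]))                           # 0.5
print(score_customer(ConsentRecord("u2", ai_scoring_opt_out=True), [0.2, 0.8]))  # None
```

The hard part in a real architecture is that this check must reach every batch job, cached score and downstream system that touches the user's data, not just the obvious entry point.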
"There is no one-size-fits-all approach to overcoming these challenges," Varshney said. "However, from a data governance perspective, many of the issues that arise from implementing AI tools can be managed efficiently with an automated data governance tool."
Related Article: The Impact of Privacy Regulations on Digital Workplace Technology
Working Within Regulations
One other major issue with AI governance occurs at scale. Organizations must be able to mitigate risk across territorial boundaries. This, again, comes down to the framework.
Working with AI often means working with large datasets, including the personal information of data subjects in different geographic locations. Each jurisdiction has its own regulations concerning the use and governance of AI or machine learning tools, and some have no legislated regulations or best practices at all. Companies must learn to navigate these gaps as well, because the rules are bound to evolve rapidly as AI use grows.
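In practice, a governance framework often reduces this to an explicit policy lookup keyed by jurisdiction. The sketch below is deliberately naive; the country codes, flag names and strictest-by-default rule are assumptions for illustration, not statements of any actual law.

```python
# Hypothetical per-jurisdiction policy table. Real obligations are far more
# nuanced and change as legislation evolves.
JURISDICTION_POLICIES = {
    "EU": {"requires_impact_assessment": True, "allows_automated_decisions": False},
    "US": {"requires_impact_assessment": False, "allows_automated_decisions": True},
}

STRICTEST_POLICY = {"requires_impact_assessment": True, "allows_automated_decisions": False}

def policy_for(country_code: str) -> dict:
    # Fall back to the strictest known policy when a jurisdiction is unmapped:
    # a conservative engineering default, not legal advice.
    return JURISDICTION_POLICIES.get(country_code, STRICTEST_POLICY)

print(policy_for("EU"))
print(policy_for("BR"))  # unmapped jurisdiction gets the strictest default
```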
There are two main approaches at the moment: The EU has proposed a risk-based approach to AI governance, while the National Institute of Standards and Technology (NIST) in the US recently presented an initial draft of an AI Risk Management Framework. Significant work is still needed to harmonize the two into a model that can be applied across Europe, the US and other territories.
Related Article: Why Regulating AI Is Going to Be a Challenge
According to Shane Tierney, GRC analyst and data protection officer at San Mateo, Calif.-based SetSail, the EU proposal is designed to be legally enforceable, with considerable power built into the language, including transparency requirements for certain AI use cases (such as chatbots and deepfakes) and the right to order that AI models be destroyed or retrained.
By comparison, the NIST framework is voluntary — a suggested list of best practices — and neither NIST nor the US federal government has the power to enforce its recommendations for governing AI.
“If we look at the gradual and increased adoption of data protection legislation in other markets since the enforceability of GDPR, it’s quite likely that Europe and Asia will take the regulatory lead on AI governance issues,” Tierney said. “If you’re waiting for the US Congress to act, remember that we’re still waiting for federal data privacy laws in the US.”
To compete in EMEA and APAC, US-based companies will need to pay particular attention to regulatory developments in those markets since, as with data protection regulations, any AI regulations are likely to have extraterritorial applicability and enforceability.