Why Regulating AI Is Going to Be a Challenge
At the end of April, the US Department of Commerce announced the appointment of more than two dozen experts to a committee to advise the President on a whole range of issues related to artificial intelligence.
The formation of the National Artificial Intelligence Advisory Committee (NAIAC) marks the end of a long development process and the first move toward putting the National AI Initiative Act of 2020 into effect. The creation of the NAIAC is a response to growing concerns about the development and use of artificial intelligence in both the business world and the lives of private citizens.
AI, according to the National AI Initiative Office, is a new frontier that brings challenges to the economy and national security. The question is: will this approach to regulation work?
The Challenges of AI Regulation
The committee member list reads like a Who’s Who of the tech world and academia. Despite all the intellectual firepower, it remains to be seen whether AI can be regulated by committee — or even at all.
Seth Siegel, global head of AI at Richardson, Texas-based Infosys, said it will be difficult. At the current pace of innovation, the technology is developing faster than humans can oversee it. While data scientists could design algorithms capable of monitoring and checking other AI, that only adds complexity. These issues are compounded by the "black box" problem, with some machine-learning algorithms becoming so complex that most practitioners don't understand how they work and can only see their inputs and outputs.
“Increased regulation will bring valid concerns around the impact on innovation, from both regulators and the companies at the forefront of cutting-edge development,” Siegel said.
Much of this is understandable. Businesses will need to respond to regulations with new processes and tools to remain compliant in their industries, as well as offer employees guidance on best practices and emerging risks.
However, these kinds of initiatives are expensive, complex and resource-intensive. There is also the practical consideration of developing appropriate penalties for those who fail to comply with regulations. Additionally, with so many organizations looking to third-party providers to develop and manage their AI, many technology leaders may have little to no insight into their algorithms and how they work together as part of a larger system.
“As a result, I expect to see many turning to external experts," Siegel said. "Consultants of the future will be expected to lead their clients on delivering AI responsibly and ethically in line with any new regulations.”
Related Article: What to Know About Regulation of AI at Work
Start Regulation at the Application Level
AI can be regulated, but regulation has to start at the application level, said Kavita Ganesan, author and founder of Sandy, Utah-based Opinosis Analytics. In her view, this is because AI systems by themselves are not conscious beings. It is through human applications that individuals and organizations come to misuse AI.
“We have seen this play out quite recently with the use of tools like deepfakes to portray state leaders as saying things that they did not," she said. "It is not the work of AI. It is the work of humans in how they are using AI."
Ganesan said getting too deep in the weeds at the algorithmic level will quickly become unwieldy. For application-level regulations around AI to work, organizations will need to be specific, provide exclusions and be use-case driven. It's also critical that regulators look at the quality of the data being used to feed machine learning algorithms.
"AI regulations should also touch on data as AI systems and related algorithms rely on data," she said. "What types of specific data points can absolutely not be used for the development of intelligent solutions or to perform any kind of targeting with commercial intent?"
Related Article: Why Ethical AI Won't Catch on Anytime Soon
AI's Bias Problem
There is another problem, too. Self-learning AI technology is, by definition, always evolving. It is built to learn from its own experiences and mistakes — to grow in its knowledge and improve itself, said Eric McGee, senior network engineer at Spring, Texas-based TRG Datacenters. This is what makes it so powerful, but it is also what can make it so difficult to regulate.
“By the time regulators catch up to what a given piece of AI technology is capable of, it has already changed to improve itself,” he said.
Even more problematic is that as AI learns, it also applies what it learns. When it is taught something, it doesn't just form an opinion based on the information it is given; it finds opportunities to apply that information to real-world situations. If the source information is incorrect or biased, that bias will be reinforced and amplified over time.
In addition to reinforcing existing biases, self-learning AI tends to create new biases that were not present before it started learning. That is because the sheer amount of data available today allows AI to operate on a much larger scale than could have been predicted just 10 years ago.
“The more data you have, the more likely you will draw incorrect conclusions about something,” McGee said.
Related Article: AI at Work Still a Work in Progress
Data Governance as a Regulation Solution
Some think AI regulation need not be as difficult as it is made out to be. According to Sharad Varshney, CEO of data governance consultancy OvalEdge, AI regulation is not only possible, it could even be easy given the right circumstances.
Any company developing AI needs to ensure that data governance plays an integral part in the AI technology development process, Varshney said. Developers who don't want an AI algorithm to come to a damaging conclusion, like making a false credit assessment based on outdated historical data, will have to govern the information the AI has at its disposal.
In Varshney's view, regulation is only on the table because companies have become lax in self-regulating, which has had disastrous consequences. He argued that effective data governance processes enable users to optimize, separate, define and organize data to make it easy to regulate the stream of information an AI tool receives.
Beyond this, data governance can also enhance the capabilities of AI technology because the same processes that underpin safeguarding measures also ensure data quality and availability. It is possible to optimize the results of an AI algorithm when the very best data is provided in real time, according to Varshney.
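As a rough illustration of the controls Varshney describes, the data feeding a model could pass through a governance gate that discards stale records, such as the outdated credit history in his example, along with rows that fail basic quality checks. The cutoff, field names and checks below are assumptions made for the sake of the sketch, not a prescribed standard.

```python
from datetime import datetime, timedelta
import pandas as pd

# Hypothetical governance rules: records older than five years are treated
# as stale, and key fields must be present before training or scoring.
MAX_AGE = timedelta(days=5 * 365)
REQUIRED_FIELDS = ["income", "payment_history_score"]

def governance_gate(df: pd.DataFrame, now=None) -> pd.DataFrame:
    """Return only the records that satisfy the governance rules."""
    now = now or datetime.utcnow()

    # Drop stale records so the model never sees outdated history.
    fresh = df[pd.to_datetime(df["record_date"]) >= now - MAX_AGE]

    # Drop rows missing required fields (a basic data-quality check).
    return fresh.dropna(subset=REQUIRED_FIELDS)

# Example: model_input = governance_gate(raw_credit_records)
```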
Related Article: Why Congress Fails to Regulate Big Tech
Opportunity Ahead for Competitive Advantage
Regulating AI may be a challenge but it is necessary. Several high-profile cases show just how unreliable AI can be, with algorithms making decisions that amplify human bias and even put people's lives and livelihoods at risk.
In 2018, tech giant Amazon discovered the company's AI-fueled recruiting engine downgraded the resumes of women due to a bias in the training dataset. A recent study published in Science showed that risk-prediction tools used in health care in the United States exhibit significant racial bias. In both instances, regulation would have helped to prevent discrimination, security risks and inequality.
That said, there may be opportunities for forward-thinking companies to use regulations as a competitive advantage.
“This means that businesses need to take the chance to develop AI algorithms with transparency and accountability," said Siegel. "They will be in a better position to win the trust of consumers and regulators moving forward."
About the Author
Mike Prokopeak is editor in chief at Reworked, the premier publication covering the r/evolution of work, where he leads content development focused on the transformation of the workplace.