Why Responsible AI Should Be on the Agenda of Every Enterprise
Earlier this month, the Responsible AI Institute (RAII) announced the appointment of Seth Dobrin as its president. For anyone unfamiliar with Dobrin, he was IBM's first-ever global chief AI officer and, before leaving the company, led the charge to implement human-centric, responsible and trustworthy AI.
At the RAII, Dobrin will help drive the nonprofit's mission of helping corporations, governments and suppliers fast-track their responsible AI strategies through independent, accredited AI conformity assessments and responsible AI leadership certification.
That such organizations exist, and can enlist names as big as Dobrin's, points to a problem with AI today, one that needs to be dealt with urgently. And it is not just a technical problem; it is much bigger than that.
The problem is existential: can AI be built to be responsible, or is it essentially a black box technology that can only be certified and monitored for accountability after the fact?
Public Expectations of AI
As the AI industry has matured, more attention has been paid to responsible AI as a necessary expectation.
Ari Kamlani, senior AI technology strategist and systems architect at software firm Beyond Limits, said this is particularly true in critical situations where the cost of being wrong is high, such as healthcare, or where the model must be fair to all populations, such as recruitment.
In fact, legal instruments already exist to restrict irresponsible uses of the technology, such as the Responsible AI Licenses (RAIL).
Responsible AI, Kamlani said, is an umbrella term that encapsulates many critical principles and dimensions. The Linux Foundation AI & Data, the Institute for Ethical AI & ML, FAT/ML and IEEE A/IS all define the dimensions of responsible AI as the following (a sketch of how a team might encode them follows the list):
- Ethics, value-centered AI, trust and safety
- Accountability and transparency
- Bias and fairness
- Security and privacy
- Explainability and auditability
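As a purely illustrative exercise, these five dimensions can be turned into something a team can act on. Below is a minimal sketch, in Python, of how an organization might encode them as a machine-readable pre-deployment checklist; the ReviewItem structure, the questions and the ready_to_ship gate are hypothetical, not part of any of the cited bodies' guidance.

```python
# A hypothetical sketch: encoding the responsible AI dimensions above
# as a pre-deployment checklist. Structure and questions are
# illustrative assumptions, not a published standard.
from dataclasses import dataclass

@dataclass
class ReviewItem:
    dimension: str          # one of the dimensions listed above
    question: str           # what the reviewer must answer
    signed_off: bool = False

CHECKLIST = [
    ReviewItem("ethics_trust_safety", "Does the use case align with documented values and safety requirements?"),
    ReviewItem("accountability_transparency", "Is there a named owner and a public description of the system?"),
    ReviewItem("bias_fairness", "Have outcomes been compared across affected populations?"),
    ReviewItem("security_privacy", "Is sensitive training data access-controlled and minimized?"),
    ReviewItem("explainability_auditability", "Can individual decisions be explained and audited later?"),
]

def ready_to_ship(checklist: list[ReviewItem]) -> bool:
    """Only release when every dimension has an explicit sign-off."""
    return all(item.signed_off for item in checklist)

if __name__ == "__main__":
    print(ready_to_ship(CHECKLIST))  # False until each item is reviewed
```

The point of a structure like this is less the code than the discipline: every dimension gets a named owner and an explicit yes or no before anything ships.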
The benefits of responsible AI for business are broad, ranging from a better reputation to simple compliance with the law. But today, Kamlani said, some digital native companies and foundations treat responsible AI as their primary differentiator, others have adopted certain key aspects of it throughout their lifecycle, and some adopt it for compliance or operational reasons.
Related Article: Has Microsoft 365 Been Clinically Tested?
The Ethics of Responsible AI
The topic of ethics and artificial intelligence is not new, said John C. Havens, director of emerging technology and strategic development for the IEEE, but the discussion is changing: businesses and policymakers now need to prioritize human wellbeing and environmental flourishing (aka societal value).
When it comes to AI, ethical concerns have largely focused on risk, harm and responsibility, bias against race and gender, unintended consequences, and cybersecurity threats, among other things. These are, of course, important concerns. But, as Havens said, as AI systems are built, the conversation must also treat human-centric, values-driven issues as key performance indicators (KPIs) of success in order to build trust with end users.
Status quo metrics of success focused solely on financial growth fail to address the immediate risks of AI to human and environmental health.
Instead, Havens believes that AI systems should prioritize human wellbeing (specifically, aspects of caregiving, mental health and physiological needs not currently included in the GDP) and environmental flourishing (where flourishing indicates restoration of the ecosystem, not just avoiding harm) as the ultimate metrics of success for society along with fiscal prosperity.
Related Article: AI Governance Is a Challenge That Can't Be Ignored
Omnipresent AI
One of the big challenges with AI today is that it is now everywhere, said Tara DeZao, product marketing director for AdTech and MarTech at Pegasystems.
AI, she said, was once seen as more of a niche technology, but as it becomes more sophisticated and integrated across the enterprise, it is imperative that businesses pay more attention to how it impacts their customers and society more broadly.
“As AI permeates the technologies and platforms we interact with on a daily basis, businesses must put ethics at the center of their AI strategy,” DeZao said.
Organizations are using AI for critical, subjective decision-making, such as facial recognition for surveillance or scanning resumes for keywords to inform hiring decisions. So it's clear, she said, that AI tools either already amplify bias or have the potential to do so in a variety of ways.
These functions of AI rely on various sets of information like streaming data and human input, and if oversight is missing from the process, DeZao said companies risk not only relying on skewed outcomes but also seeing that bias leak into every decision they make thereafter.
As a result, she said, it is time for businesses to play a more prominent role in producing AI systems that are responsible, transparent and unbiased. This means continually working out how to stay innovative while providing better, more ethical experiences for customers, as well as preparing for and addressing regulatory changes.
“While all bias can’t be eliminated, we can be proactive about correcting it with the right tools, frameworks and testing to reduce bias and provide transparency into AI’s decision making,” she said.
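To make the "testing" part of that concrete, here is a minimal sketch of one common bias check, the demographic parity gap: the difference in positive-decision rates between groups. The group labels, sample data and the 0.1 tolerance are illustrative assumptions; real audits use richer metrics (equalized odds, calibration) and dedicated fairness toolkits.

```python
# Minimal sketch of a simple fairness test: demographic parity gap,
# i.e. the spread in positive-decision rates across groups. Group
# names, data and the 0.1 tolerance are illustrative assumptions.
from collections import defaultdict

def positive_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (group, got_positive_outcome) pairs."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, positive in decisions:
        totals[group] += 1
        positives[group] += int(positive)
    return {g: positives[g] / totals[g] for g in totals}

def parity_gap(decisions: list[tuple[str, bool]]) -> float:
    rates = positive_rates(decisions)
    return max(rates.values()) - min(rates.values())

sample = [("group_a", True), ("group_a", True), ("group_a", False),
          ("group_b", True), ("group_b", False), ("group_b", False)]
gap = parity_gap(sample)  # 2/3 - 1/3 = 0.33
print(f"parity gap: {gap:.2f}", "FLAG" if gap > 0.1 else "ok")
```

A check like this cannot prove a system is fair, but running it routinely, per DeZao's point, at least makes skewed outcomes visible before they compound.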
Related Article: Why Regulating AI Is Going to Be a Challenge
The 3 Elements of Responsible AI
According to Paul Tepper, head of AI at WorkFusion, responsible AI comes down to three things:
1. Data. Models are nothing without data, so companies are stockpiling it. But how do you protect all of that sensitive information? Think of the privacy of people's pictures used for facial recognition, health information covered by HIPAA, personally identifiable information (PII), and socioeconomic and demographic data.
2. Explainability. Explainable algorithms help elucidate how AI makes decisions. Models broadly fall into one of two camps: glass box or black box. Glass box models are interpretable by design: you can introspect what the model is doing and trace how it reaches a decision (see the sketch after this list). Most state-of-the-art AI, however, relies on deep learning, which is largely black box: as the name implies, you can't open the model up to understand its inner workings.
3. Impact on society. AI is increasingly being used to make all kinds of decisions, some of which have significant impact on society. An app that gets a low-stakes prediction wrong, for instance, can hurt the user experience, but it has minimal impact on society. If AI is deciding which crops to pick, who gets a loan from a bank or whether someone committed a crime, its impact on society is a completely different story.
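To illustrate Tepper's glass box point, here is a minimal sketch of a transparent scoring rule for a hypothetical loan decision: every feature's contribution to the outcome can be read off directly. The features, weights and threshold are invented for illustration; a deep learning model offers no comparably direct readout.

```python
# Minimal sketch of a "glass box" model: a linear scoring rule whose
# decision decomposes feature by feature. Features, weights and the
# threshold are illustrative assumptions for a hypothetical loan case.
WEIGHTS = {"income": 0.4, "years_employed": 0.3, "debt_ratio": -0.5}
THRESHOLD = 0.2

def explain(applicant: dict[str, float]) -> dict[str, float]:
    """Per-feature contribution to the score -- the 'explanation'."""
    return {f: round(WEIGHTS[f] * applicant[f], 2) for f in WEIGHTS}

def decide(applicant: dict[str, float]) -> bool:
    return sum(explain(applicant).values()) >= THRESHOLD

applicant = {"income": 0.8, "years_employed": 0.5, "debt_ratio": 0.4}
print(explain(applicant))  # {'income': 0.32, 'years_employed': 0.15, 'debt_ratio': -0.2}
print(decide(applicant))   # 0.32 + 0.15 - 0.20 = 0.27 >= 0.2 -> True
```

With a model like this, an applicant can be told exactly which factors drove the decision, which is precisely what a black box deep network cannot offer without additional post-hoc tooling.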
Tepper said recognizing and addressing these three elements from the start can help organizations mitigate risk, build trust and reputation, and make their products and services more responsible.
Related Article: Are Your Risk Assessments Reliable?
The Need for a Framework
At this time, responsible AI refers to a set of guidelines developed for organizations looking to harness the power of AI in an ethical manner.
Steven Karan, vice president of insights and data at Capgemini Canada, said that, among other things, it provides best practices for building fairness, interpretability and security into AI solutions while minimizing risk, mitigating bias and driving value.
But there have been many public examples of harmful AI, he said. Organizations large and small have been left with damaged reputations and distrustful customers because they failed to put a strategy and guardrails in place from the outset to ensure their AI solutions were designed, developed and deployed responsibly.
Organizations seeking to build AI models should take pre-emptive measures to ensure their solutions do no harm. To do this, Karan said, they must develop a responsible AI framework designed to prevent algorithmic predictions from causing negative impacts.
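As a rough illustration of what such guardrails can look like in code, here is a minimal sketch of a pre-deployment gate that blocks a release until every required review has passed. The check names are assumptions made for illustration, not Capgemini's framework.

```python
# Minimal sketch of a pre-deployment gate: release is blocked until
# every required responsible AI check has passed. Check names are
# illustrative assumptions, not any particular vendor's framework.
REQUIRED_CHECKS = ("fairness_review", "privacy_review",
                   "security_review", "explainability_review")

class ReleaseBlocked(Exception):
    pass

def gate(results: dict[str, bool]) -> None:
    """Raise if any required check is missing or failed."""
    failed = [c for c in REQUIRED_CHECKS if not results.get(c, False)]
    if failed:
        raise ReleaseBlocked(f"blocked by: {', '.join(failed)}")

gate({"fairness_review": True, "privacy_review": True,
      "security_review": True, "explainability_review": True})  # passes
```

The design choice worth noting is that the gate fails closed: a check that was never run counts the same as a check that failed, which is the "pre-emptive" posture Karan describes.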
About the Author
David is a European-based journalist of 35 years who has spent the last 15 following the development of workplace technologies, from the early days of document management, enterprise content management and content services. Now, with the development of new remote and hybrid work models, he covers the evolution of technologies that enable collaboration, communications and work and has recently spent a great deal of time exploring the far reaches of AI, generative AI and General AI.