Make Responsible AI Part of Your Company's DNA
First things first: by “artificial intelligence,” I am not referring to our mechanical sentient overlords, but to a growing field of technology tools used to make predictions, improve our understanding of vast amounts of data and optimize solutions to problems. AI is not some far-off promise. You've likely already been directly exposed to it, for example when you browse the new show offerings on your streaming service or check a weather forecast. While AI and machine learning have been in use for decades, the acceleration of digital transformation caused by the pandemic drove increased and broader adoption of these tools.
However, there's a difference between accelerated adoption of a new technology and reckless deployment of one. Like most tools, AI has the potential to unintentionally harm individuals and to expose both people and organizations to risks that could have been mitigated by carefully evaluating potential consequences and making sound implementation choices early in the process. This idea is the basis of the responsible and ethical use of AI.
A survey conducted by the Center for the Governance of AI found that 82% of respondents believe AI should be carefully managed. Moreover, in the 2020 RELX Emerging Tech Executive Report, 86% of business leaders reported that ethical considerations were a strategic priority in the design and implementation of their AI systems. That is a high percentage, but what about the remaining 14% of systems? How much risk can those systems create for individuals and organizations? Keep in mind the amplifying effect that big data and the internet can have on the reach of any system, which can also magnify the collective harm.
Ethical AI is the foundation of successful and impactful AI systems. The European Union has gone so far as to establish its Ethics Guidelines for Trustworthy AI. This is timely, as demonstrated by a recent survey reporting that two-thirds of internet users believe companies should have an AI code of ethics and review board. But ethical AI is just the beginning. Beyond ethical AI is responsible AI.
The Path to Responsible AI
Responsible AI is a framework of guiding principles applied to AI technologies to ensure that goals around ethics, accuracy and productivity are met. More importantly, these principles help mitigate the potential harm to individuals and society. Responsible AI comprises four foundational elements: governance, design, monitoring and awareness training. The latter does not refer to model training, but to making people aware of the most effective ways to leverage an AI implementation.
Ethical AI is the cornerstone underpinning responsible AI. It operates as an organizational construct that delineates right from wrong and ensures compliance with applicable laws and ethical principles. And while we are not talking about AGI (artificial general intelligence, the branch of AI concerned with sentient machines), but rather artificial narrow intelligence (ANI), or machines that can learn from data and their environment, there is still the possibility of significant harm when narrow AI is applied recklessly. One example is a system that unfairly targets a specific group of individuals, or that exploits particular societal constructs to expose groups of people to potential harm, financial or otherwise.
AI should always be human-centered. As a tool, AI needs to help humans and society reach higher goals, and it must be supervised by humans to prevent unfairness and bias. Because AI is trained on existing data and environments, and because that data can expose or reflect inherent biases, there have been cases of AI learning those undesirable traits. One example is Microsoft's Tay (@TayandYou), a Twitter chatbot that started as an experiment in conversational understanding but began generating racist messages in less than 24 hours. Microsoft turned it off at the end of the day and never turned it back on. While the Tay incident is anecdotal, it demonstrates how, without a responsible AI framework, implicit biases in data are likely to produce unexpected and undesirable results.
Unfortunately, human supervision isn't always easy. Several sophisticated AI technologies, such as deep learning networks, can generate models that are difficult for humans to interpret. Because transparency is so important, this aspect of a model is called “explainability,” and it is key to thoroughly understanding the behavior of an AI system without resorting to unbounded case-by-case testing. Explainability provides information on how each observed dimension (feature) of the input impacts the results, and it can help rule out implicit or explicit bias, whether caused by evaluating an unwanted feature or by using a feature as a proxy for a characteristic that should not be considered during the evaluation of that model.
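To make this concrete, here is a minimal sketch of one common explainability technique, permutation importance, using scikit-learn on synthetic data (the dataset and model here are illustrative assumptions, not part of any particular production system):

```python
# Sketch: measuring how much each input feature drives a model's predictions.
# Assumes scikit-learn is installed; the data is synthetic and illustrative.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic dataset: 4 features, of which only 2 carry real signal.
X, y = make_classification(n_samples=500, n_features=4,
                           n_informative=2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in accuracy:
# features whose shuffling barely matters contribute little to predictions,
# which helps flag unwanted features (or proxies) that dominate decisions.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: {score:.3f}")
```

A reviewer could run a check like this periodically and flag any model where a sensitive feature, or a known proxy for one, shows high importance.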
As an AI system is architected, responsible AI also prescribes best practices for validating and preserving data integrity, ensuring that data stays accurate and consistent throughout its entire lifecycle.
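A minimal sketch of what such validation can look like at ingestion time follows; the record fields and rules here are hypothetical examples, not a prescribed schema:

```python
# Sketch: simple data-integrity checks before data enters an AI pipeline.
# The Record fields and validation rules are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Record:
    age: int
    income: float

def validate(record: Record) -> list:
    """Return a list of integrity violations; an empty list means the record is clean."""
    errors = []
    if not (0 <= record.age <= 130):
        errors.append(f"age out of range: {record.age}")
    if record.income < 0:
        errors.append(f"negative income: {record.income}")
    return errors

clean = Record(age=34, income=52000.0)
bad = Record(age=-5, income=-1.0)
print(validate(clean))  # []
print(validate(bad))
```

Rejecting or quarantining records that fail such checks keeps corrupted or implausible values from silently skewing model training downstream.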
Why Is Responsible AI Important?
Responsible AI is the right thing to do. It helps ensure that an AI system is efficient, complies with laws and regulations, operates on ethical standards and avoids reputational and financial damage down the road.
But beyond that, responsible AI can also be viewed as an enabler of technology, rather than an expensive hindrance or a risk-avoidance tactic. Organizations that follow these principles are usually rewarded with more accurate AI models, less waste in their implementations and more predictable outcomes overall. Secondary but still significant benefits of implementing responsible AI include increased sales and brand awareness, as well as employee retention.
As an additional data point, the RELX survey noted earlier also found that almost nine in 10 (89%) senior executives believe ethical standards in the development and use of emerging technologies give their businesses a competitive advantage.
One thing to understand is that the ethical aspects of responsible AI are not an isolated initiative; they must be considered within the larger scope of business ethics. Executive management, corporate boards, engineers and researchers all have an obligation to embed ethical principles in their organizations and in what they design and build. Company policy is an important part of the governance framework, driving ethical principles from the top rather than leaving individual groups to their own devices when it comes to ethical decisions about their implementations.
Related Article: The Next Frontier for IT: AI Ethics
Going Beyond Words: 3 Steps to Ensure Responsible AI
As you embark on this journey, the following three steps can help you structure and prioritize the fundamental tasks of implementing responsible AI throughout your organization. They are only a starting point, but they will help ensure you are not missing any of the basic components of a responsible AI program:
- Implement Responsible AI Guides: Develop guidelines and policies on how your company is implementing responsible AI and taking this technology seriously.
- Conduct Responsible AI Checks: Design and continually check AI algorithms and data platforms from end to end.
- Offer AI Training: Keep upskilling and training employees on how to maintain responsible AI.
Related Article: The 4 Foundations of Responsible AI
Responsible AI Is Simply the Right Thing to Do
Responsible AI should be seen as an enabler of business growth and development. Maintaining it is a continuous process that, to succeed, requires determination and top-to-bottom governance driven from the company board and executive management. Companies that implement responsible AI can be rewarded with market growth and a competitive edge, but at the end of the day, responsible AI is simply the right and moral thing to do.
About the Author
Flavio Villanustre is CISO and VP of Technology for LexisNexis® Risk Solutions. He also leads the open source HPCC Systems® platform initiative, which is focused on expanding the community gathering around the HPCC Systems Big Data platform, originally developed by LexisNexis Risk Solutions in 2001 and later released under an open-source license in 2011.