We Need Ethical Artificial Intelligence
Artificial intelligence (AI) is doing what the tech-world Cassandras have long predicted: throwing curveballs and leaving a trail of misadventures and tricky questions around the ethics of using synthetic intelligence. Sometimes the dilemmas AI presents are easy to spot and understand, but often it is difficult to pin down the exact nature of the ethical questions it raises.
We need to heighten our awareness of the changes AI demands in our thinking. If we don't, AI will trigger embarrassing situations, erode reputations and damage businesses.
Positive and Negative Results From Using AI
Two years ago, Amazon abandoned the AI tool it used to recruit employees. The tool, which the company had trained on resumes submitted over a decade, preferred male applicants. More recently, Twitter apologized for deploying an image-cropping AI that favored white faces over Black ones. These are embarrassing (and unforgivable) outcomes of AI, but at least the ethical implications are clear.
By contrast, the example of the South Korean national broadcaster SBS using AI to render songs in the voice of folk-rock singer Kim Kwang-seok is delightful but considerably more complex. The popular singer has been dead for 25 years, yet continues to have a large fan following. SBS used 20 songs by Kim Kwang-seok as training data and another 700 Korean folk songs to sharpen the accuracy of the AI, which can now render any song in Kim Kwang-seok's style. A song originally by Kim Bum-soo, rendered in Kim Kwang-seok's voice using the AI, aired in late January. It was so convincing that it brought tears to the eyes of Kim Kwang-seok's fans.

Music executives, on the other hand, were baffled: To whom should the work be attributed? Who owns the copyright? Who will be paid royalties? The AI programmer? The producer? For the curious, SBS paid a one-off fee to Kim Kwang-seok's family for borrowing his voice in the show. But publishing the song commercially presents perplexing questions.
Tomorrow's songs need not necessarily be written by humans either. OpenAI's text generators, like Generative Pre-trained Transformer 3 (GPT-3), could use deep learning to write original songs that appear to be penned by Kim Bum-soo or any other songwriter. This opens up the limitless possibility of continuing to produce work by an artist long after their death. Could this mean that AI can write and direct "2050: Beyond the Future" to keep alive the cinematic magic created by Arthur C. Clarke and Stanley Kubrick with "2001: A Space Odyssey"?
GPT-3 has the potential to do that. Last June, it sent powerful waves across the AI community when Sharif Shameem, the head of an app-development startup, used it to construct a program simply by describing a UI in plain English. GPT-3 responded by spitting out JSX code, and that code produced a UI matching what Shameem wanted. Shameem said, "I only had to write two samples to give GPT-3 context for what I wanted it to do. It then properly formatted all of the other samples."
GPT-3 doesn't only reproduce "stuff" the way humans do; it is a performer as well. In one instance, it was given Python code and asked to describe what the code does. It not only did so, it also offered improvements and suggested where to post the code once improved. GPT-3 can identify paintings from descriptions and recommend books. It can write entire articles for publications. In one instance, it managed to express a list of popular movies in emoji. The extraordinary part? GPT-3 requires no task-specific training. A handful of examples in its prompt is enough for its 175 billion parameters (by comparison, the closest anything comes is Microsoft's Turing NLG, which uses 17 billion parameters) to generate text that sounds human. You could use it to write your next quarterly report and save some valuable time.
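The "two samples" Shameem mentions illustrate the few-shot prompting pattern that makes this possible: solved examples are placed in the prompt, and the model continues the pattern. His actual prompts are not public, so the descriptions, JSX snippets and `build_prompt` helper below are illustrative assumptions, a minimal sketch of the pattern rather than OpenAI's API:

```python
# Sketch of few-shot prompting: two solved description -> JSX samples,
# followed by a new, unanswered description for the model to complete.
# The examples here are hypothetical, not the ones Shameem used.

FEW_SHOT_EXAMPLES = [
    ("a button that says Subscribe",
     "<button onClick={subscribe}>Subscribe</button>"),
    ("a text input with placeholder Email",
     '<input type="text" placeholder="Email" />'),
]

def build_prompt(description: str) -> str:
    """Assemble the prompt: solved samples first, then the new request.

    The model infers the description -> JSX mapping from the samples
    and continues the pattern after the final, dangling "JSX:" label.
    """
    parts = [f"Description: {desc}\nJSX: {jsx}"
             for desc, jsx in FEW_SHOT_EXAMPLES]
    parts.append(f"Description: {description}\nJSX:")
    return "\n\n".join(parts)

print(build_prompt("a red heading that says Welcome"))
```

The prompt string would then be sent to a text-completion endpoint; no gradient updates or fine-tuning are involved, which is what "requires no task-specific training" means in practice.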
Related Article: GPT-3 Is Impressive, But it Isn't Artificial Intelligence
The Danger of Deep Fakes
There are obvious social dangers in deploying AI like this, the most direct being bad training data of the kind that led to Amazon's recruitment breakdown and Twitter's image-cropping fail. But worse lurks around the corner: it is easy to use capabilities of the type employed by the Korean broadcaster, or those of GPT-3, to produce deep fakes.
Deep fakes have become a terrifying reality. They can impersonate politicians, sports stars, newscasters, customers, officials, teachers and even pets to launch disinformation campaigns, mislead people into making the wrong decisions, discredit public figures and more, and the result is widespread distrust. Last year, Facebook alone released 100,000 deep fakes, produced with 3,426 actors using a range of existing face-swapping techniques, to help researchers build detection tools. Yet the same technology is also being put to good use, such as creating corporate training videos delivered by anyone in the language of the recipients.
Related Article: IBM and Microsoft Sign 'Rome Call for AI Ethics': What Happens Next?
The Challenge of Using AI in Business
AI is changing business and the way it is conducted. We see it being put to good use optimizing power consumption in homes, which makes a small but important contribution to turning the tide on climate change. It is also helping researchers and doctors working with cancer patients by analyzing data from a global community of patients.
But the diverse use cases for AI present a growing number of challenges to businesses, as well as to regulators, law enforcement agencies and policymakers. At a deeper level, they raise ethical and moral questions about whether technology is being used in a fair and just manner.
Two issues demand the attention of technology providers. First, it is crucial to reduce bias by identifying and using trustworthy training data. Second, providers need an ethical framework for AI models that protects people from harm and preserves the integrity of organizations.
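One concrete way to vet training data for the kind of bias that undid Amazon's recruiting tool is to measure outcome rates per demographic group before training on the data. The sketch below applies the four-fifths rule (a common disparate-impact screen) to toy records; the data and function names are illustrative assumptions, not a complete fairness audit:

```python
from collections import defaultdict

def selection_rates(records):
    """Fraction of positive outcomes (e.g., 'hired') per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += int(outcome)
    return {g: positives[g] / totals[g] for g in totals}

def passes_four_fifths(records, threshold=0.8):
    """Screen for disparate impact: every group's selection rate must
    be at least `threshold` (80% by convention) of the highest rate."""
    rates = selection_rates(records)
    best = max(rates.values())
    return all(rate >= threshold * best for rate in rates.values())

# Toy historical data: (group, was_selected)
data = [("A", True), ("A", True), ("A", False),
        ("B", True), ("B", False), ("B", False)]
print(selection_rates(data))     # A: ~0.67, B: ~0.33
print(passes_four_fifths(data))  # False: B's rate is under 80% of A's
```

A dataset that fails such a screen should be rebalanced or corrected before a model is trained on it; the check is cheap relative to the reputational cost of shipping a biased system.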
About the Author
Kalyan Kumar (KK) is Global CTO and Head of Ecosystems for HCL Technologies. He is actively involved in product and technology strategy, the strategic partner ecosystem, startup incubation, open innovation/open source and the Enterprise Technology Office, and supports the company's organic and inorganic initiatives.