Generative AI Results Should Come With a Warning Label
Generative AI is the term applied to the branch of AI used to create new text, images, video, music or even computer code. It's what's behind ChatGPT's ability to generate high-quality conversational text, and the driver of its current fame.
Of course, ChatGPT does more than provide useful summaries of online topics. It can respond to requests for pieces of code in popular programming languages, which, for novices like myself, can be quite useful. Amusingly, a harmless request to Microsoft’s Bing search engine (recently embellished with ChatGPT functions) for a report on major sporting news confidently reported, in detail, on the Super Bowl four days before it was actually played. Hindsight now tells us that the report was wrong on both the winner and the score, among other things. One can see how a future internet search could surface many confusing results like these, presented as factual.
Moving beyond the practical to the arts and humanities, you can ask it to write poems or musical lyrics “in the style of ... put your favorite musician here.” For me, the results were amusing. My request for some prose in the style of Bob Dylan had sprinklings of “the times they are a-changin'.”
However, if it is an artist you are fond of, or if you are in fact the artist, the results can be offensive. Singer and songwriter Nick Cave made no secret of his disdain for the technology and its attempts to mimic songs “written in the style of Nick Cave,” describing it as “a grotesque mockery of what it is to be human.” No doubt Cave feels that generative AI has crossed an ethical line when such machine-generated works can be passed off as his own.
Who Hasn’t Googled Themself?
Even if you don’t have a big online profile, most of us are curious about what is actually available on the internet about us. Thanks to a combination of a 19th-century Chinese ancestor and an inventive Australian immigration officer, I have a mashup surname that is genuinely unique. I thought I would take advantage of the recent Google Chrome ChatGPT extension to see a conventional Google vanity search side by side with a ChatGPT result:
The conventional Google results were predictable, surfacing verifiable sites where I have a profile. On the right-hand side, we can see the ChatGPT results. It starts well enough by suggesting my profile was not prominent enough to surface within its huge training sets (clearly I'm no Nick Cave or Bob Dylan). Then something curious is added: a claim that someone by my name is the co-founder and CEO of a blockchain start-up called “BLOCKLOAN.” A Google search shows that the true co-founder and CEO has a name nothing like mine. The company is Australian-based, as I am, and is a digital start-up, like my company SWOOP Analytics, which gives the impression that the result may not be totally random. Nevertheless, it's complete fiction!
In a previous article I criticized ChatGPT's inability to provide sources. This has changed somewhat. When asked to provide sources, I received a set of bullet points. Predictably, the first was my LinkedIn profile. The remaining points started with:
"Laurence Lock Lee on Blockchain's Evolution and Enterprise Adoption" (Article on Medium.com): https://medium.com/@ccr101/laurence-lock-lee-on-blockchains-... This article provides an interview with Laurence Lock Lee where he discusses his views on the current state and future of blockchain technology.”
This is completely fictitious. The link does not exist. The remaining sources were also fictitious, with empty links. So which is worse: no sources, or ones that are totally fictitious but framed as factual?
It Is Time for Generative AI to Put 'Fiction' or 'Non-Fiction' Labels on Its Results
One of the most common criticisms of generative AI is the confidence with which it reports its often-erroneous results. Most of us are sensitized to the potentially misleading information found on the internet. Google even helps with this by labeling sponsored advertisements.
But it appears that generative AI platforms are aware of just how much “generation” they are undertaking. When used as a general search engine, ChatGPT uses its generative capabilities to produce a linguistically smooth summary of the results it has found. When asked to create something like a piece of poetry or music, the end user knows it is a creative piece. The problem occurs when these creative functions are applied to factual requests like “who is ...?”
It is now common practice for authors to label their content “historical fiction,” “inspired by a true story” or “based on the life of,” all of which give the consumer some sense of the fiction/non-fiction divide.
Is it time now for generative AI platforms to adopt a similar practice?
About the Author
Laurence Lock Lee is the co-founder and chief scientist at Swoop Analytics, a firm specializing in online social networking analytics. He previously held senior positions in research, management and technology consulting at BHP Billiton, Computer Sciences Corporation and Optimice.