ChatGPT: A Human Face Without a Human Mind
My first exposure to ChatGPT came when I started seeing posts from some of my long-term knowledge management connections, singing somewhat nervous praises about its ability to respond to typical thorny issues like: “how do you get leaders to support knowledge management?” As a former AI researcher from the 1980s and '90s, I had developed a healthy skepticism about what AI could really do. In those days the “go to” technology was expert or knowledge-based systems, a technology that promoted transparency in decision-making advice.
Artificial neural networks (ANNs), the machine learning technology at the heart of modern AI (and indeed ChatGPT), were relatively new then. Because ANNs are “black box” systems, i.e., they make no real attempt to explain their reasoning, they competed with the many other signal processing techniques that aim to predict an output from multiple input signals. Why ANNs, and ChatGPT in particular, are now thriving comes down to their ability to communicate results in a very human way.
My immediate reaction on seeing ChatGPT’s responses to non-trivial questions on knowledge management was to reflect on my time as an adjunct knowledge management lecturer to hundreds of postgraduate students looking to learn more about the field. Having assessed hundreds of student assignments, I could easily see myself giving a pass mark to a ChatGPT-generated response. Not a high mark, as I reserved those for students who could surprise me with some of their own insights and experiences. Nevertheless, I was concerned about how educational institutions might respond.
The ChatGPT Emperor Has (Almost) No Clothes
From even the earliest days of AI, the Turing test has been promoted as a way to identify a system that is truly artificially intelligent. In essence, the Turing test provides the evaluator with two responses to a posed question: one from a real human and one from a system. If the evaluator is unable to reliably tell the difference, the system is deemed to be artificially intelligent.
Well, it turns out that the Turing test sets a pretty low bar. The early AI textbooks liked to extol the virtues of Eliza, an early natural language processing program created between 1964 and 1966 at the MIT Artificial Intelligence Laboratory by Joseph Weizenbaum. The system simply reframed the user’s statements as questions, with the express purpose of providing the illusion of a human responder. Its success surprised even its creator. So ChatGPT’s success, more than 50 years later, should not be surprising. What is now important is to look beyond the illusions and identify what “real” practical value can be achieved. Even Eliza, for all its primitiveness, was reported to have provided comfort to some of its early users.
A Key Shortcoming of ChatGPT
A key element missing from ChatGPT’s responses is citations for its sources. Citations are the bedrock of academic work: they reflect the hierarchical nature of knowledge discovery and development, with each new article building on the identified work of its predecessors. ChatGPT’s inability to expose its sources, and therefore the credibility of those sources, is a significant shortcoming in my view. Even ChatGPT admits as much. When I posed the question “Can ChatGPT identify what sources are credible or not?” it responded:
“ChatGPT is a machine learning model that is not able to determine the credibility of sources on its own. It can only provide responses based on the information it has been trained on. However, the model's responses can be used in conjunction with other methods to help determine the credibility of a source. It's important to keep in mind that the information used to train the model may contain misinformation, inaccuracies, or bias. It's always recommended to verify information from multiple credible sources and be critical of the information provided by any source, including ChatGPT”
I was encouraged by this open and frank response, along with some pragmatic advice to overcome its shortcomings. No gilding the lily here!
So what might be these other methods for validation? Let’s refine the question:
“What methods are available for assessing validity of sources?” The response?
"There are several methods for assessing the validity of sources, including:
- Examining the author's credentials: This includes looking at their education, experience and other qualifications related to the topic they are writing about.
- Evaluating the publisher: Is the source published by a reputable organization or publisher? Are they known for publishing accurate and reliable information?
- Checking for bias: Is the source presenting information in a balanced and unbiased manner, or is there a clear bias or agenda?
- Verifying facts and statistics: Are the facts and statistics presented in the source accurate? Can they be verified through other reputable sources?
- Considering the date of publication: Is the information in the source still relevant and up-to-date?
- Analyzing the evidence: Is the evidence presented in the source credible and robust?
It's important to note that one should not rely on just one method to evaluate the validity of sources, but rather use a combination of methods to get a more accurate and complete understanding of the credibility of a source.”
Again, a useful response from ChatGPT, but the end of the line in terms of further inquiries. Authors and publishers are not exposed by ChatGPT!
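To make the checklist concrete, the criteria ChatGPT listed could be combined into a simple scoring rubric. The sketch below is purely illustrative: the field names, equal weighting and five-year freshness window are my own assumptions, not a published standard, and any real assessment of a source would need human judgment behind each yes/no answer.

```python
from dataclasses import dataclass

@dataclass
class SourceAssessment:
    """One criterion per item in the checklist above (illustrative names)."""
    author_qualified: bool      # author's credentials fit the topic
    publisher_reputable: bool   # publisher known for reliable information
    appears_unbiased: bool      # balanced presentation, no clear agenda
    facts_verified: bool        # facts and statistics cross-checked elsewhere
    publication_year: int       # used to judge whether the source is current
    evidence_robust: bool       # evidence presented is credible and robust

def credibility_score(src: SourceAssessment, current_year: int,
                      max_age_years: int = 5) -> float:
    """Combine the individual checks into a 0.0-1.0 score.

    Each criterion contributes equally here; a real rubric would weight
    criteria by discipline, and no single score replaces reading the source.
    """
    checks = [
        src.author_qualified,
        src.publisher_reputable,
        src.appears_unbiased,
        src.facts_verified,
        current_year - src.publication_year <= max_age_years,  # still current?
        src.evidence_robust,
    ]
    return sum(checks) / len(checks)
```

The design simply echoes ChatGPT’s own advice: no one check decides credibility; it is the combination of methods that gives a fuller picture.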
We could characterize ChatGPT as a “system that can provide a response to questions by scanning the internet and framing a summarized response in a human-friendly form.” On its own this can be an invaluable research tool for identifying the current “status quo” on a given topic, warts and all. It can also support a cycle of iterating on the research with more refined follow-up questions.
For many of my former students, this is likely precisely what they were doing. Perhaps enough to achieve a pass mark (maybe I was a soft touch!). But if I assumed they had to read the content in order to summarize it, then hopefully they learned something.
Of course, if a student looks to short-circuit this process by typing the assignment question directly into ChatGPT and submitting the result, that is plagiarism. The “go to” tool for detecting plagiarism in student submissions is Turnitin, which holds not only a huge library of published works but also prior assignments submitted by other students. I’m pleased to report that Turnitin is applying its own AI resources to detect ChatGPT contributions in a “battle of the AI engines.”
What Should You Use a Mindless ChatGPT for?
In my view ChatGPT has the potential to be the next generation of search engine. Today when we type a question into our favorite search engine, we are faced with a plethora of links (usually headed by sponsored ones). It can take some time to peruse them, identifying the key points of some and dismissing others, before arriving at a mental picture of a response to the question. When we use a search engine, we expect a “status quo” response, i.e., what is out there that speaks to my question, and we accept the variable quality of the results. We don’t need to stop at the first iteration, though. Unsure about the initial response? Refine your questions and use ChatGPT to validate itself!
Given the estimates that 90% of all data in the world today was created in the past two years, mindless or not, ChatGPT and the like will have an important role to play in our growing knowledge-based economy.
About the Author
Laurence Lock Lee is the co-founder and chief scientist at Swoop Analytics, a firm specializing in online social networking analytics. He previously held senior positions in research, management and technology consulting at BHP Billiton, Computer Sciences Corporation and Optimice.