Digital Literacy Is the Antidote to Poor Information Quality
At last count (2020), over 100 billion messages were exchanged on WhatsApp every day, including 175 million messages with business accounts. Add to that the almost 350 billion emails sent daily, and it's easy to see why we are overloaded with too much information. And these are only two of the many sources of information we process every day.
But information overload is more than too much information. We also experience overload when there's insufficient time to process the information on hand. In fact, we can experience information overload even when processing very little information. For example, in a hospital emergency room, doctors must quickly evaluate minimal amounts of data to devise a treatment plan for a trauma patient.
Beyond the amount of information and the time to process it, a third factor can also create overload: information quality. Poor information quality occurs when information is hard to understand, making it hard to process. For example, when information is spread across multiple sources (and in multiple formats), finding it and constructing a coherent picture from the various pieces can be cognitively difficult, which leads to overload.
Case in point: a 2022 industry survey found that over 25% of US workers use 11 or more apps during a work day. Unsurprisingly, 80% of global workers suffer from information overload due to "siloed data in too many places." Toggling between these information silos creates overload … and we do this a lot. A 2022 study published in the Harvard Business Review found workers toggled between screens and apps roughly 1,200 times each day, or two and a half times every minute. That's a lot of time looking for information. A 2020 survey commissioned by Microsoft found employees spend four to six hours per week searching for information, accounting for almost six weeks of lost productivity every year … lost to "searching or recreating information." The numbers were even higher for executives.
One way to improve information quality is to consolidate information using the new AI capabilities Microsoft announced for Bing and Google's AI-enhanced Search (Bard). These new tools let users perform a search in which information from multiple sources magically appears in one place, aggregated into a coherent stream of text. No more toggling, no more searching. Information quality will soar with AI doing all the heavy lifting of combining and synthesizing the disparate sources of information.
Awesome! What could possibly go wrong?
Will AI Degrade Information Quality?
While these new AI tools will certainly reduce the burden of searching for information, can they really improve information quality? That depends on the data set the AI engine uses to ‘learn’ about the world. Today, that data set is the internet, which (shock!) includes many inaccuracies and falsehoods. And then there is the need to correctly connect the information dots from multiple sources.
The 60 Minutes episode “The new world of AI chatbots like ChatGPT” highlighted this point by showing how incorrect details about the reporter’s background appeared during a ChatGPT session. This isn’t an isolated case. I did a similar bio search about myself. ChatGPT said I worked for a company that doesn’t exist, that I wrote two books I have never heard of, and that I co-founded a company that I didn’t.
Won’t AI Technology Get Better?
Inevitably, this AI technology will get better, but will it ever be able to tell truth from falsehood? The determination is not always straightforward. Sometimes there is no single right answer. Today, ChatGPT generates answers that cover all bases, giving generic responses that consider all angles, even those that are less credible.
Before ChatGPT, digitally literate people invested the time to explore reputable sources and craft an informed opinion. But AI technologies like ChatGPT will play upon our tendency towards laziness. More often than not, many of us will opt for an AI response to save valuable time and effort. But once these responses get posted online, they become part of the dataset from which the AI engines will learn in the future … ad infinitum. As incorrect content created by AI proliferates, the new data set from which AI learns will become even more polluted. The danger is that the public pool of information may become so littered with bad information, we won’t know what’s true anymore.
One recent improvement in ChatGPT that reduces this concern is its ability to provide the sources from which it generated the answer. So far, it's not great. I asked ChatGPT a question about digital literacy and it provided five reputable sources, with links:
- The International Society for Technology in Education (ISTE)
- The National Education Association (NEA)
- The Oxford English Dictionary (OED)
- The Pew Research Center
- The American Library Association (ALA)
All great sources. However, when I clicked on the links, the ISTE, NEA and OED links were broken, the Pew link pointed to a seemingly irrelevant Internet/Broadband Fact sheet, and the ALA link led to an article about “library operating expenditures.” The technology will inevitably get better, but even when it does, I believe the solution to improving information quality is not (only) more technology.
Improve Information Quality Through Digital Literacy Training
Facebook, YouTube, Twitter and other content sources use AI moderation and human curation to remove posts that violate policy guidelines, some of which cover misinformation, like false medical claims. As hard as they try, it is ultimately impossible to implement iron-clad curation solutions that ensure posted information is accurate, so preventing the pollution of the information that will power AI engines is only part of the solution.
One obstacle to eliminating information pollution is deciding what is true. Sure, some historical facts are (almost) universally accepted, like the dates Alexander the Great lived (356-323 BCE) or when Apollo 11 landed on the moon (July 1969). But consider contemporary issues like abortion, the Ukrainian-Russian conflict or the wisdom of Brexit — these topics are rife with nuances that make it impossible to produce a universally accepted answer.
That’s where digital literacy comes in.
According to the American Library Association, digital literacy is “the ability to use information and communication technologies to find, evaluate, create and communicate information, requiring both cognitive and technical skills in locating and using information and in critical thinking.” In short, being able to search, find, consume and critically analyze information from multiple sources is the skill we need to develop to ensure we don’t fall victim to runaway AI-empowered information pollution that will destroy information quality.
At face value, it sounds like we are in a good place. "Roughly three-quarters of public schools (72%) reported that they provide training on digital literacy for their students, and 25% provide digital literacy training to their students’ families," according to a National Center for Education Statistics press release from September 2022. Unfortunately, much of this effort is concentrated on providing broadband internet access and training on productivity and video conferencing software, according to a post from the Rockefeller Institute of Government. That bar for digital literacy is far too low to address the information quality problem.
There is a dire need to move to the next stage of digital literacy: teaching children (and adults) to read critically and to challenge the sources they see online and offline. With today’s fragmented information sources, coupled with the proliferation of photoshopped images and deepfake videos, the ability to dissect, question and analyze what we see and hear is critical for navigating the increasingly noisy world around us. In a world of large language models, this will be more true than ever.
By increasing our ability to discern what is real from what is false, we will also improve the quality of information online and off. That will improve the ability of AI to provide reputable information … and maybe even reduce information overload in the process.
Disclaimer: this article was not inspired by, written by, edited by or proofread by ChatGPT or any other AI product or software. It’s pure, unadulterated, human generated content. Enjoy it while you can.
Just for kicks — here is a ChatGPT version of that disclaimer: “This document was not written or assisted by AI or ChatGPT. It was solely created by a human author without the involvement of any artificial intelligence or language model.”
About the Author
David is a product expert with extensive experience leading information-intensive technology organizations. His specialty is helping organizations “do it right the first time”— get to market quickly and successfully through a structured process of working closely with design partners from day one.