After months of coverage about the wonders of generative AI, the idea that this new technology might also carry deep challenges is starting to take hold among developers, business executives and end users.
It is sometimes easy to forget that even the best minds can put their feet in their mouths when they forecast the path of new technology.
In 1977, Ken Olsen, co-founder and president of the once-legendary Digital Equipment Corporation, said, “There is no reason for any individual to have a computer in their home.” He wasn’t talking about the internet, but rather about the idea that one day home computers would manage HVAC, light switches and grocery shopping.
Today, as smart-home products such as Amazon Alexa and Google Home continue to gain traction, it seems that even with the correct context, Olsen’s crystal ball was a bit cloudy. But in the late 1970s and early 1980s, well before the advent of the intelligent home, an acknowledged engineering genius was ridiculed for what was perceived as a backward-looking view.
Similarly, in 1995, the astronomer and hacker-hunter Clifford Stoll wrote that “No online database will replace your daily newspaper, no CD-ROM can take the place of a competent teacher, and no computer network will change the way government works.” Years later, he admitted he blew that call.
Two technology influencers, two milestone eras in technology, two off-base predictions.
As ChatGPT and other generative AI products surge into the business and consumer worlds, are there lessons we can learn from studying the path between prediction and reality?
We’ve Seen It Before
ChatGPT tells us that sometime before 2021, Mark Cuban argued that anyone who didn’t understand AI was “going to be a dinosaur within three years.” And, during the COVID-19 pandemic, the tech industry pivoted to aggressively pursue AI-driven products as user needs changed.
Both are signs of momentum.
Most of those AI products have been built around a few large language models such as ChatGPT and Google’s Bard, and that makes them more derivative than path-breaking. And though the use of gen AI is certainly growing, and experts, pundits, analysts and observers have declared AI to be an increasingly important part of HR’s arsenal, on-the-ground growth may not be as advanced as many people think.
According to Gartner, as of late June, only 5% of HR leaders said their department had implemented some form of generative AI. Another 9% said they’re conducting some kind of pilot. And more than half were exploring how they can use generative AI although they had nothing in place yet. Fourteen percent had no plans to use the technology at all in the near term.
That doesn’t mean AI use won’t grow. More than 60% of the HR leaders Gartner surveyed are participating in enterprise-wide discussions about the use of generative AI. Fifty-eight percent are collaborating with IT leaders, and 45% are working with legal and compliance functions to explore potential use cases.
Yet it’s fair to argue that AI today is where people analytics was several years ago: many breathless predictions, lots of existing products reimagined as data-driven, and workers trying to get their arms around what it all means for them.
Related Article: Is It Too Early for HR Solutions Providers to Jump on the AI Train?
What Employees Think About AI
Signals are mixed about AI’s possibilities on the employee front. While the ability to automate mundane tasks can be appealing, a survey by Wiley found that workers also don’t want the technology to take over their learning and development activities. More than half of the study’s respondents preferred having a human instructor in charge of their workforce development, compared with just 7% who said AI would do a better job.
It doesn’t help that corporate leaders seem to be enamored with AI. In a survey of Fortune 500 CHROs, research firm Gallup found that 72% see AI replacing jobs within the next three years. Meanwhile, almost one-third of those surveyed by Wiley said their business has already adopted AI technology in at least one business function.
No matter what employees think, more employers are exploring the use of AI to optimize their training and career development programs. And although the majority of respondents told Wiley they want L&D content to be developed by subject matter experts, the Paychex 2023 Pulse of HR Report predicts a continued reliance on technology, such as AI, to enhance upskilling and reskilling initiatives.
Related Article: Generative AI Is Cool, But It Isn't Corporate (Yet)
When Bad Things Happen to Good Intentions
Amazon’s early experience with machine learning helps demonstrate why employees are concerned about generative AI today.
In 2018, after four years of development, the company pulled the plug on a machine learning-based recruiting tool after discovering that, as Reuters reported, it didn’t like women.
The system was meant to review resumes and identify the top candidates for any given role. Somewhere along the way, however, an important fact got lost: AI engines learn from historical data. Amazon’s system compared applicants against patterns found in resumes submitted over a 10-year period, and because men so heavily outnumber women in the tech workforce, it effectively taught itself that male candidates were stronger than their female counterparts.
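How a model ends up encoding that kind of skew is easy to demonstrate. Below is a minimal, hypothetical sketch in Python with scikit-learn, and emphatically not Amazon’s system: the toy resumes and hiring labels are invented, and the point is only that a classifier trained on biased historical outcomes attaches negative weight to gender-correlated words.

```python
# A hypothetical toy illustration (not Amazon's actual tool) of how a
# classifier trained on skewed historical hiring data absorbs that skew.
# Assumes scikit-learn is installed; all resumes and labels are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# "Historical" outcomes that skew male: in this training data, tokens
# correlated with women happen to co-occur only with rejection.
resumes = [
    "java developer men's soccer team",       # hired
    "python engineer hackathon winner",       # hired
    "java developer women's chess club",      # rejected
    "python engineer women's coding group",   # rejected
    "java developer men's rugby club",        # hired
    "java engineer open source contributor",  # hired
]
hired = [1, 1, 0, 0, 1, 1]

vec = CountVectorizer()
X = vec.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect the three most negative weights: gendered tokens score badly
# even though gender says nothing about whether someone can do the job.
weights = sorted(zip(model.coef_[0], vec.get_feature_names_out()))
print(weights[:3])
```

With these six toy examples, gender-correlated tokens such as “women” end up with the most negative coefficients, which is the mechanical version of the bias Reuters described.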
We don’t need to dwell on the fact that Amazon created a tool that was intended to minimize bias and ended up falling victim to it. The company certainly wasn’t the first to find itself in the middle of that particular swamp. But since Amazon is Amazon, whenever it crashes, it crashes hard. In this case, the lesson isn’t particularly startling: The results of machine learning are only as good as the data that feeds it.
“Business people have been sold on the notion that AI’s advanced algorithms magically analyze information in a black box and then spit out reliable insights. How? They just do,” observed John Harney, co-founder and chief technology officer of New York City-based DataScava, whose solutions work with unstructured data. “But really, machines only work when humans review their work and teach them how to provide better results.”
Related Article: Generative AI Writing Job Descriptions: Adult Supervision Required
Drifting Into the Unknown
The world of AI remains, even today, a world of unintended consequences.
Recently, researchers from Stanford University and the University of California, Berkeley, found that ChatGPT’s performance on certain basic math operations has markedly declined since it launched. They attributed the decline to “drift,” which occurs when attempts to improve one part of an AI model adversely affect the performance of other parts.
“Changing it in one direction can worsen it in other directions,” Stanford Professor James Zou, one of the report’s authors, told The Wall Street Journal in August. “It makes it very challenging to consistently improve.” That means, he added, that AI systems need to be monitored very closely.
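Monitoring for that kind of drift doesn’t require anything exotic. Here is a minimal sketch, in Python, of the approach Zou’s comment implies: rerun a fixed evaluation suite against every model version and flag regressions. The prompts, version labels and scores below are invented for illustration, and the model call is left as a placeholder.

```python
# Hypothetical sketch of drift monitoring: score each model version on a
# fixed test suite and flag any version that regresses past a threshold.
# Suite contents, version names and scores here are invented examples.

FIXED_SUITE = [
    ("Is 17077 a prime number? Answer yes or no.", "yes"),
    ("What is 7 * 8? Answer with the number only.", "56"),
]

def score_version(ask, version: str) -> float:
    """Score one model version; `ask` is a placeholder for your model API."""
    hits = sum(expected in ask(version, prompt).lower()
               for prompt, expected in FIXED_SUITE)
    return hits / len(FIXED_SUITE)

def flag_drift(scores: dict[str, float], threshold: float = 0.10) -> list[str]:
    """Return versions whose score fell more than `threshold` vs. the prior one."""
    versions = list(scores)
    return [curr for prev, curr in zip(versions, versions[1:])
            if scores[prev] - scores[curr] > threshold]

# Using recorded scores instead of live model calls:
history = {"v2023-03": 0.98, "v2023-06": 0.62}
print(flag_drift(history))  # -> ['v2023-06']
```

The design point is simply that the suite stays frozen: if the questions never change, a falling score can only mean the model changed underneath you.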
Generative AI’s hiccups shouldn’t be surprising. It’s a new technology that was released to the public early in its lifecycle.
But for CHROs, the question is whether they can look at the experiences of Amazon, Olsen and Stoll and anticipate the traps that might be lurking for their own organizations. None of their statements or actions were dumb; they were just early. This is a time when it’s especially useful to ask “What if?”