ChatGPT Shows AI Is a Better Co-Pilot Than Solo Driver at Work
“You got to check this out.” My co-worker’s message included a link to a new AI tool.
That tool, ChatGPT, is a chatbot created by OpenAI. If you ask ChatGPT to describe itself, it replies, “I am a machine learning model that has been trained on a massive amount of text data in order to generate human-like responses to a variety of inputs.”
Writing for Reworked, Alex Kantrowitz stated that ChatGPT is “flawed but could be [the] chatbot we’ve been waiting for.”
The hype is real. ChatGPT is really good. Its responses are scarily human-like, and it can hold a conversation on the first try with no training on my part. As someone who uses the digital assistants and chatbots built into work technology, I can say its level of sophistication simply blows away anything I’ve seen.
But as impressive as it is, a few problems remain.
Following the ChatGPT Data Trail
OpenAI hasn’t revealed precisely what data it used to train ChatGPT, but the company has said it generally crawled the web, perused archived books and trawled through the vast digital stacks of Wikipedia.
That's an enormous data set, but a far from perfect one. For example, I asked ChatGPT who was on the 1992 U.S. Olympic Basketball team — also known as the Dream Team. It listed off 11 of the 12 players on the team, but missed Christian Laettner.
Since it’s conversational, I asked it about the missing player. It immediately responded that I was right, it had missed Laettner. I asked if it would update its response with the correct answer. It said it wasn’t possible.
That’s a limitation by design. Not only does it not update based on answers it acknowledges are incorrect, but it also doesn’t have any data from this year and isn’t self-updating. So according to ChatGPT, Queen Elizabeth II is still the reigning monarch for our friends in the UK.
That massive data set also makes it harder to incorporate new information and to make sense of redundant or conflicting information. And because of protections born of Microsoft’s failed Tay chatbot experiment six years ago, ChatGPT isn’t influenced by your responses outside of your conversation and limits harmful language by default.
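For readers who want to poke at those training-data limits themselves, here is a minimal sketch of how a question like the Dream Team one could be sent to an OpenAI model through the company’s Python client rather than the chat interface. The model name and prompt are assumptions for illustration, and the same knowledge cutoff applies whichever way you ask.

    # Minimal sketch: asking an OpenAI chat model a factual question.
    # Assumes the `openai` Python package is installed and OPENAI_API_KEY is set;
    # the model name is an illustrative assumption.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "user",
             "content": "Who played on the 1992 U.S. Olympic men's basketball team?"},
        ],
    )

    # The reply is plain text and reflects the model's training cutoff,
    # so anything more recent simply won't be there.
    print(response.choices[0].message.content)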
ChatGPT Needs Supervision, Like All AI
Since ChatGPT is partially trained on web crawls, with the right prompt it can impersonate anyone who writes on the web. At the urging of a few folks I saw on Twitter, I had it write a LinkedIn post about the value of remote work in a way that mimicked my writing style.
While it turned out all right, it was ultimately pretty generic. Even with nearly two decades of identifiable writing on the web, would I have picked its output out of a lineup as my own? It might’ve captured some of my mannerisms, but it wasn’t something I would publish without some serious editing. Add to that the fact that, with the current iteration of the software, it’s impossible to write anything about recent events.
That relegates one of the most advanced AI tools I’ve seen to the status of useful companion. It is definitely not a replacement for anything I’d want to put my name on.
Work technology lacks even this AI’s level of sophistication, which should give organizations serious pause before letting anything with AI run unaccompanied. Even with New York City’s Automated Employment Decision Tool law being delayed until at least April 2023, AI’s best case at work is one where it runs under the supervision of a human who understands what decisions it is making and how.
Sure, when programmed the right way, the chatbots and assistants in work technology can provide information and even handle more complex tasks like multi-person interview scheduling. Would I trust one to tell me who among a pool of applicants might be the best fit for a particular role? I can’t imagine it having that ability anytime soon. After all, even humans struggle with that every day.
AI technology has made stunning advances across the board and it seems as though the pace is accelerating. Does that mean it’s ready for anything other than strict, limited production environments?
Not yet. But it will probably be a little sooner than we all expect.
Related Article: The Complicated Relationship Between AI and Human Resource Management
About the Author
Lance Haun is a leadership and technology columnist for Reworked. He has spent nearly 20 years researching and writing about HR, work and technology.