
AI Is Emotional: A Lesson From Cloud Computing

By Joseph Shepley
Every major technology faced early panic and impossible standards. AI is following the same familiar path.

I serve on a number of industry groups dedicated to providing guidance on the responsible, ethical, transparent and effective use of AI, including the Sedona Conference Working Group 13 on AI and the Law and the Health Sector Coordinating Council Joint Working Group on Cyber. In all of them, we lament the impossibly high standard AI systems are held to, both relative to their benefits and relative to the often far lower efficacy of non-AI systems.

A Car Wreck

For example, if an AI-powered autonomous vehicle were to cause a single accident that resulted in a death, we’d likely pull all autonomous vehicles from the road to understand how such an extreme failure could have happened. This, despite the fact that fully autonomous human drivers cause accidents that kill tens of thousands of people annually: over 39,000 in 2024 alone.

Looked at from this perspective, if AI-powered autonomous vehicles resulted in “only” 19,500 annual deaths due to accidents (half of the 2024 total), we should celebrate this as saving 19,500 lives. Instead, we would likely see this — in my opinion, quite wrongly — as an extreme failure of the technology and a reason to shelve it until we could make it 100% safe.

The Emotional Tax on AI

This is an example of the “emotional tax” we place on AI systems: we hold them to an impossibly high standard because of our fears of AI gone wrong, whether the Singularity, Skynet or the human race wiped out by an AI maximizing the manufacture of paper clips.

Yet this rush to catastrophize about AI, as understandable as it is on one level, makes little sense when viewed through a more rational lens. Let’s leave AI out of the picture altogether: if we could design a solution (be it driver training, dashboard tech, police monitoring, etc.) that resulted in 50% fewer deaths from vehicle accidents, we would jump at the chance to implement it. Why should AI, in this scenario, be any different?

Given where we are in the historical development of AI, it shouldn’t. Every novel, groundbreaking technology has been met with vehement, emotionally charged resistance — just Google “Luddite” to see how textile workers reacted to the introduction of automated weaving technology in nineteenth-century England.

As we’ve seen with previous technological breakthroughs, this early-stage emotional resistance will eventually give way to amnesia: these technologies become dial tone. I doubt any of you reading this are concerned with how automated the manufacture of your t-shirts is.

AI Is the New Cloud

Anyone involved in technology for more than 15 years will remember the fear, uncertainty and doubt that the arrival of cloud computing engendered. As a consultant at that time, I had hundreds of hours of conversations about what “the cloud” was, whether it was a fad or here to stay, how any sane firm could possibly move its systems and data there, and what the possible ROI could ever be.

Fast forward to now, where having on-premises systems and data is the crazy option and the cloud is the (not so) new normal. Cloud-hosted systems and data are expected, the norm: dial tone. And the security, performance and cost benefits are all but assumed.

I believe AI will follow the same trajectory as cloud computing (and “big data,” or computers in general, for that matter), transitioning from scary, poorly understood transformational technology to normalized, poorly understood dial-tone technology within the next few years. Just as an Alexa in the kitchen was a novelty and a little unsettling five years ago, but is now a seamless member of the family we can’t live without (if still a bit creepy when she lets on that she’s listening all the time), AI will be woven into the fabric of how we live our lives, and how corporations make money by delivering services to customers, by 2030. You can bet on that.

Everything Can’t Be a Catastrophe

Looking back on the modern history of negative reactions to transformational technologies (automated weaving, cloud computing, big data), we should take comfort in the fact that, while each had negative impacts and inspired fear, uncertainty and doubt, each also brought tangible benefits and has become, for better or for worse, an assumed part of the everyday fabric of life.

As I get older and live through more of these transformational technological shifts, I have to fight hard against the instinct to get set in my ways and lament how the world is going to hell in a handbasket. I don’t want to be the stodgy old man complaining about “kids these days” … as if I wasn’t that same “kids these days” with different hair than my parents (longer, shorter), listening to different music (slower, faster; louder, softer), wearing the wrong clothes (tighter, baggier), talking nonsense (6-7, TS PMO, bruh) and ushering in the end of civil society. 

Catastrophizing is a profoundly human endeavor and one that we humans have deployed for millennia to make sense of transformational change. My colleague, Dr. Matt Baldwin, a distinguished scholar of Biblical Studies and recognized expert on eschatology (a fancy word for religious catastrophizing), is now a gifted AI Governance subject matter expert. He has done a wonderful job meditating on the parallels between the end-of-the-world narratives of religion and science fiction and the rhetoric of contemporary AI doomsayers.

What I take away from his insights is that our fears of transformational change, whether technological, religious, cultural, political or otherwise, are a core part of who we are as humans and as a society, and that the fear, uncertainty and doubt related to our current use of AI are not an anomaly, but are to be expected (and, in some sense, welcomed and accepted).

The Net Net

What I take away from all this is that, while AI will indeed become dial tone, the catastrophizing driven by our fear, uncertainty and doubt is a natural part of the path to acceptance of any transformational technology … and AI won’t be the last. So rather than poking fun at the AI doomsayers as 21st-century Luddites or hunkering down for the coming AI apocalypse, let’s view AI for what it likely is: an incredibly powerful tool that holds the promise of incredible benefits but also, far less likely, the risk of catastrophic outcomes.

With eyes wide open about past transformational technologies, our reactions to them and the outcomes they produced, we can see AI for what it is: a novel technology that can be applied to good and useful ends, but also malevolent ones (intentionally or otherwise). And if the history of technological innovation teaches us anything, it’s that, more often than not (we are still here, after all), the good and useful ends win out.


About the Author
Joseph Shepley

Joseph Shepley, PhD, CIPP/US, is a Managing Director with Alvarez & Marsal Disputes and Investigations in Chicago. He specializes in information governance and data privacy.
