Editorial

AI Is Already Human-Centered. Maybe That's the Problem

By Adi Gaskell
The human effort behind AI's "thinking" gets lost in tech hype. Recent research argues we must revise our views of the relationship between humans and AI.

As concerns about the rising power of artificial intelligence have grown in recent years, the human-centered AI (HCAI) movement has emerged to counter them. HCAI aims to ensure that AI develops in a way that prioritizes human needs and values, augmenting rather than replacing us. The movement also addresses the very real social, cultural and ethical implications of the technology.

A recent paper by Olivia Guest of Radboud University casts doubt on this noble-sounding movement. The paper suggests we need to rethink what we associate with HCAI, as the current definition focuses too much on technical features.

A Socio-technical Relationship

Instead, Guest argues that AI is fundamentally a socio-technical relationship, with our cognitive labor either enhanced, displaced or replaced by the technology. When we view things in this way, it's impossible for AI not to be human-centered, because it can't function without human input, cognition and oversight, even if the human role isn't always apparent.

"AI is human-centric, not because it behaves like or is designed to be like humans, but because it requires a ghost in the machine, often literally an obfuscated human-in-the-loop to properly function," Guest writes.

The paper contends that every AI system should be assessed according to which of three forms of relationship with humans it has:

  • Enhancement, whereby AI is capable of augmenting our capabilities.
  • Replacement, whereby AI largely substitutes for humans.
  • Displacement, whereby AI shunts humans out of cognitively rewarding work, as we're arguably seeing now, with AI doing much of our thinking for us.

Determining the AI-Human Relationship

Guest proposes two steps to understand which of the three relationships any particular AI technology falls into.

  1. Discern the relationship — This step tests whether the technology is involved in cognitive tasks at all. It's reasonable to assume that most AI technologies currently on the market are.
  2. Characterize the relationship — This is when you home in on whether the technology replaces humans, enhances humans or displaces humans (a rough sketch of the assessment follows this list).
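To make that two-step assessment a little more concrete, here is a minimal, purely illustrative Python sketch. The class names, boolean fields and yes/no questions are my own assumptions about how one might operationalize Guest's categories; they are not part of the paper.

    from dataclasses import dataclass
    from enum import Enum, auto

    class Relation(Enum):
        ENHANCEMENT = auto()    # augments human capabilities
        REPLACEMENT = auto()    # largely substitutes for humans
        DISPLACEMENT = auto()   # pushes humans out of cognitively rewarding work
        NOT_COGNITIVE = auto()  # step 1 fails: no cognitive task involved

    @dataclass
    class AISystem:
        # Hypothetical description of a tool under assessment.
        name: str
        performs_cognitive_task: bool
        humans_stay_in_the_loop: bool
        humans_keep_the_rewarding_work: bool

    def assess(system: AISystem) -> Relation:
        # Step 1: discern the relationship. Is a cognitive task involved at all?
        if not system.performs_cognitive_task:
            return Relation.NOT_COGNITIVE
        # Step 2: characterize the relationship.
        if not system.humans_stay_in_the_loop:
            return Relation.REPLACEMENT
        if system.humans_keep_the_rewarding_work:
            return Relation.ENHANCEMENT
        return Relation.DISPLACEMENT

    # Example: a drafting assistant where a human merely approves the output.
    print(assess(AISystem("drafting assistant", True, True, False)).name)  # DISPLACEMENT

In this toy framing, displacement is simply what is left when humans remain in the loop but the rewarding thinking goes to the machine.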

This may seem like a small step, but Guest argues that many modern AI systems contain a huge amount of human effort that's often hidden from the end user. She's referring to the ghost workers operating in the shadows, who annotate images for a few cents at a time, as well as the huge numbers of people who develop the models LLMs rely on or curate the datasets they're trained on.

Guest argues that this obfuscates the amount of human endeavor behind modern AI systems so comprehensively that many people are fooled into believing that the AI is thinking for itself.

Flawed AI Benchmarking

The paper also takes aim at the performance benchmarks often used to highlight the success of AI in the workplace. These benchmarks typically pit human against machine, but seldom achieve meaningful equivalence. Indeed, Guest argues, the very act of comparing humans to AI is scientifically and logically unsound.

Ultimately, it's unlikely that truth will simply fall out of the data. Instead, it’s shaped by human thought and our interaction with the world. That may be an uncomfortable idea in a world awash with hype around AI's supposed capabilities, but the truth is that parts of human cognition cannot be automated.

No AI system can think for us without us playing a huge role, and there probably never will be one. We might long for such a machine, but only living things are truly self-sustaining; machines are not. A computer doesn't decide to turn itself off and on again.

To get past the confusion between correlation and understanding, we must let go of the belief that better performance on benchmarks somehow leads to insight. No stack of test scores can ever amount to a causal explanation.


To de-fetishize AI, we must see it for what it is: a set of relationships between humans and their tools, in which it appears that thinking has been offloaded. But that appearance requires careful analysis. The human mind still sits at the center of it all.

We may not be able to remove the human from the machine. But we can stop treating the human as a ghost. Instead, we should view humans as the ones who are, and always have been, at the very heart of modern AI.

Editor's Note: What other questions should we be asking about AI's role in our workplaces?


About the Author
Adi Gaskell

I currently advise the European Institute of Innovation & Technology, research the future of work at the University of East Anglia and was previously a futurist for the sustainability innovation group Katerva. I also mentor startups through Startup Bootcamp. I write a weekly column on the future of work for Forbes, and my writing has appeared on the BBC and the Huffington Post, as well as for companies such as HCL, Salesforce, Adobe, Amazon and Alcatel-Lucent.

Main image: Malik Earnest | Unsplash