Feature

Is AI Biased Toward Younger People?

By Michael Shmarak
Digital ageism is a thing — and I can thank my mother for opening my eyes to it.

Most of you have probably experienced a situation where an aging parent, sibling or friend has reached out for some kind of tech support. My 82-year-old mother calls me weekly to troubleshoot something.

In my latest attempt to help Mom, though, I realized a fundamental issue that is shaping the future of artificial intelligence and machine learning models: they’re made with young people in mind.

AI's High Susceptibility to Bias

When you shop for a new computer today, you'll find that many models have AI either built in or supported. Teaching my mom about AI will be a Herculean challenge, and Mom, of course, isn’t alone. Scores of people over 50 have yet to master new AI capabilities, and many of them are hunting for a new job or trying to move into a new role, not counting those re-entering the workforce after long stretches away from it. Many of those candidates risk being red-flagged by AI platforms for the gaps in their CVs or for lacking experience with new technology applications.

Michael Wright, managing partner of Taligence, a New York-based executive search firm, said that is partly tech’s fault. He said LLMs are not intelligent and are highly susceptible to bias. 

“Understanding context and nuance is taken for granted as humans, but we are exceptional at it,” said Wright. “LLMs (large language models) are incredibly bad at it because they only learn from the training set they were provided.”

He does give LLMs some credit. “LLMs are clever at tasks where there’s clarity in the datasets,” he said. “But giving them too much credit for ‘intelligence’ and believing the results they spit out without questioning is where the dark path leads.”

Related Article: When AI Discriminates, Who's to Blame?

AI in Hiring: A Work in Progress

Several recruiters, search consultants and job seekers have also pointed to a bias toward younger workers in their databases. These systems often rely on data and algorithms that don’t accurately represent the over-50 crowd, which can result in biased outputs that reinforce age-related stereotypes and discrimination.

Erik Tromp, a co-founder of AI Commandos, a full-stack AI consultancy based in the Netherlands, said ageism in AI is a growing phenomenon. He pointed to research from The Gerontologist arguing that, just as AI has amplified conversations about discriminatory practices such as sexism, racism and classism, ageism needs to be critically scrutinized as the global population ages.

“Most of this research is based on AI FOMO (Fear of Missing Out) that will leave one behind her peers,” Tromp noted. “Amazon’s use of an AI model to hire males for engineering positions is a clear indicator that AI algorithms can be used the wrong way. Certainly, you can just feed the algorithms data that contains demographic material, and they will at the very least look at whether those hold predictive power. Not introducing these factors and carefully auditing your input data to not have proxies thereof is the way forward.”
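
In code terms, Tromp’s advice boils down to two steps: leave protected attributes out of the training set, and audit the remaining features for proxies. Here is a minimal sketch in Python, assuming a hypothetical candidates.csv with an age column and a hired label; the file and column names are illustrative, not from any real system:

```python
import pandas as pd

# Hypothetical candidate data; file and column names are illustrative only.
df = pd.read_csv("candidates.csv")

PROTECTED = ["age"]
LABEL = "hired"
features = [c for c in df.columns if c not in PROTECTED + [LABEL]]

# Correlate every numeric feature with age to surface likely proxies,
# e.g., graduation year or total years of experience.
proxy_scores = (
    df[features]
    .select_dtypes("number")
    .corrwith(df["age"])
    .abs()
    .sort_values(ascending=False)
)
suspects = proxy_scores[proxy_scores > 0.5]
print(suspects)  # features to remove or justify before training

# Train only on the features that pass the audit.
clean_features = [f for f in features if f not in suspects.index]
```

A correlation screen like this only catches linear, numeric proxies; free-text fields (say, the year a degree was awarded buried in a resumé) need their own review.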

One would think that, with AI permeating the current work environment, there would be research that validates or disputes this bias. So consider the irony that a methodological framework Hilary Arksey and Lisa O’Malley developed in 2002 is often used to discuss how age-related bias in AI might be systematically reviewed. Their scoping review methodology maps key concepts, types of evidence and gaps in research in areas where existing work is complex or not well defined.

(For those who are not into science, here’s a translation: Arksey and O’Malley didn’t conduct a study on age-related bias, but if someone were to study it, they would use their format. If someone out there can explain this bias, message me!) 

Related Article: NYC's New AI Bias Law Is in Effect. Here's What it Entails

Does Fairness Enter the AI Picture?

Thankfully, some regions are introducing regulations and accountability to address fairness in AI hiring systems. In Europe, the GDPR includes provisions that grant individuals the right to an explanation of algorithmic decisions. On our side of the pond, New York City’s AI hiring law (Local Law 144), which took effect in 2023, requires companies to audit AI hiring systems for bias and to notify candidates when AI is used in the hiring process.
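
The audits those rules demand boil down to simple arithmetic: compute each group’s selection rate, then compare it with the most-favored group’s rate. The NYC law’s audits cover sex and race/ethnicity rather than age, but the same math applies to age bands. Here is a minimal sketch in Python, with entirely made-up outcome data:

```python
import pandas as pd

# Made-up screening outcomes; "selected" is 1 if the tool advanced the candidate.
df = pd.DataFrame({
    "age_band": ["under_40"] * 100 + ["40_plus"] * 100,
    "selected": [1] * 60 + [0] * 40 + [1] * 40 + [0] * 60,
})

# Selection rate per group, then each group's ratio to the highest rate.
rates = df.groupby("age_band")["selected"].mean()
impact_ratios = rates / rates.max()
print(impact_ratios)
# The EEOC "four-fifths" rule of thumb flags ratios below 0.8;
# here 40_plus lands at 0.67 and would warrant a closer look.
```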

And then there is just good old-fashioned fairness. Now that AI touches everything from job searches to photo editing, fairness has to be built in up front. In 2016, researchers conducted a study that estimated age solely from faces in Wikipedia photos and showed how easily AI can reinforce inequities and discriminatory practices if algorithms are not coded carefully.

So it raises the question: are we coding algorithms to be fair? Can candidates outsmart AI to bypass fairness and gain some sort of advantage? Even with safeguards in place, Tromp suggested that algorithms could pick up on a job candidate’s attempts to appear younger than they are.

“I have actually used myself as a guinea pig, as my own resumé shows my last three roles for brevity. AI can tell that my graduation year and job start dates do not line up, but drawing actual conclusions from that is a different story, unless I actively ask for it," he said. "AI can help in signaling, but should not be the definitive judge.”

However, Tromp shared how ageism can be “deprogrammed” out of the algorithms themselves, particularly if data sets suggest other traits are more important.

“We developed an initial algorithmic matching tool for a large company focused on blue-collar work. Our models initially showed a strong correlation between work performance and satisfaction, as well as a correlation between age and commute distance. We removed those traits from the data that the model was trained on to obtain a version that would only judge on skills and personality and nothing more.”
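
In practice, that kind of trait removal is a one-line change to the training pipeline. Here is a minimal sketch, assuming a hypothetical applicants.csv; the column names and model choice are illustrative, not details from Tromp’s project:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical matching data; names are illustrative, not Tromp's actual schema.
df = pd.read_csv("applicants.csv")

# Drop age itself plus traits the audit flagged as age correlates.
EXCLUDED = ["age", "commute_distance_km"]
X = df.drop(columns=EXCLUDED + ["good_match"])  # skills and personality scores remain
y = df["good_match"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```

Dropping columns is the blunt instrument here; it only works if the proxy audit above has already caught the features that quietly encode age.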

To be sure, any exclusion of older adults from AI systems raises significant ethical concerns. It challenges the fairness and inclusivity of AI technologies and highlights the need for responsible AI development that considers all age groups. And if not addressed soon, companies could face age discrimination lawsuits — all it takes is one legal precedent. 

Related Article: AWS's Diya Wynn: Embed Responsible AI Into How We Work

How to Mitigate Risk

Whether you're a programmer, an employer, a talent acquisition professional or C-suite leader, here are some things you can do to mitigate the risk of AI bias.

  • Conduct thorough reviews of code and training data to ensure that datasets represent older adults (see the sketch after this list). Sure, reviews cost time and money, but litigation costs more.
  • Involve multiple generations in the development of algorithms. There are a lot of wicked smart people out there — and they’re happy and willing to help.
  • Create coding guidelines that represent all ages. Prioritizing inclusivity and fairness in AI goes a long way toward reaching a more diverse age range, not to mention a wider candidate pool.
  • Establish mandates ensuring AI systems are accessible to older professionals and built to learn more about them.
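
On that first point, a dataset review doesn’t have to be elaborate. Here is a minimal sketch of an age-representation check, assuming a hypothetical training_data.csv with an age column:

```python
import pandas as pd

# Hypothetical training set with an "age" column.
df = pd.read_csv("training_data.csv")

# Bucket ages and compare the mix against the population you actually hire from.
bins = [18, 30, 40, 50, 60, 100]
labels = ["18-29", "30-39", "40-49", "50-59", "60+"]
shares = pd.cut(df["age"], bins=bins, labels=labels, right=False).value_counts(normalize=True)
print(shares.sort_index())
# If the 50+ bands are thin relative to your applicant pool, the model has
# little signal about older candidates: rebalance, resample or collect more data.
```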

I am not an attorney. But I am a job seeker who has seen the pitfalls of AI. And as much as I love my mom, I don’t want to be left in the dust because technology — and my potential employer — passed me by.

About the Author
Michael Shmarak

With more than 25 years of experience in public relations, corporate communications and executive/thought leadership positioning, Michael has represented some of the nation’s most recognized brands in professional services, real estate, hospitality, retail, technology and investment banking. An expert in public apologies, he currently serves as adjunct lecturer at Northwestern University, where he teaches Introduction to Public Relations at Northwestern’s School of Professional Studies.

Main image: Andy Kelly | Unsplash