Editorial

Is AI Baking-In Discrimination in Recruitment?

By Adi Gaskell
As AI becomes more embedded in recruitment, the risks of bias and discrimination grow.

GDPR requires that any automated decision-making system be able to explain how it reached its decisions, and the inability of many AI systems to do so is creating a legal minefield for companies deploying AI at scale across a wide range of tasks, said AI governance expert Alexander Hanff at the recent ABBYY AI Summit.

Nowhere is this more applicable than in recruitment. In the UK, the Race Relations Act outlawed discrimination on the basis of race in 1965, and the 2010 Equality Act pulled together a range of anti-discrimination legislation to ensure that recruitment is fair.

Automated Discrimination

Automating candidate screening raises the risk of baking discrimination into the way organizations find, screen and ultimately recruit candidates, according to research from the University of Melbourne.

The research highlights how most organizations use AI as part of their recruitment process to filter and rank applicants. Such applications of technology in recruitment have a long history of exacerbating discrimination, however. For instance, Amazon famously scrapped its AI-based recruitment tool after it was found to be heavily biased in favor of male coders.

The Australian research found that AI-based recruitment systems pose a number of very real risks of discrimination for workers, whether they're people with disabilities, older workers or women. What's more, it's a landscape largely unprotected by regulations, which are struggling to keep pace with the speed of change.

How Discrimination Gets Into AI

The researcher quizzed stakeholders ranging from developers and recruiters to career coaches and AI experts, and also reviewed the sales and promotional literature of some of the leading AI recruitment vendors.

Figures vary, but some studies suggest up to 60% of us have faced some kind of discrimination during the recruitment process, and the Australian research highlights how AI systems make this even worse for marginalized groups.

Discrimination creeps into these systems either through the data they're trained on or the way the systems are developed. It also emerges due to the way that the technology is used, such as if the application system isn't accessible.

Why AI Discrimination Is a Problem

A key part of the EU AI Act focuses on the so-called "explainability" of AI-based systems. In other words, there needs to be transparency around not only what decisions AI makes but how it makes them. Unfortunately, this remains some way off for many of the technologies being used today, so candidates are largely left in the dark about how or why they've been rejected.

The study also found that AI-based systems create structural barriers for those seeking work. While it may seem as though everyone has an internet connection, data suggests that around 3 million people in the UK lack internet connectivity. What's more, around 8.5 million are said to lack basic digital literacy skills, so even if they have access to these platforms, navigating them is another matter.

This isn’t just a UK or EU problem. A recent case in the U.S. District Court for the Northern District of California shows how AI poses serious legal risks for global companies. In Mobley v. Workday Inc., the plaintiff claims that an AI-powered hiring system unfairly screened out certain groups of applicants.

AI recruitment systems are already covered under the UK's 2010 Equality Act, as they are in most countries: employers must ensure people are treated fairly and are free from discrimination in the workplace.

If AI hiring platforms are shown to reinforce biases, especially against groups that have faced past discrimination, such as Black candidates, women or people with disabilities, employers could face major legal trouble.

Plugging the Gaps

That's not to say the legal framework is watertight, however. The Melbourne research identifies a number of improvements to Australian law to prevent AI systems from exacerbating discrimination in the workplace.

For instance, the researcher suggests the law could default to an assumption of discrimination unless companies and developers actively prove that their recruitment systems don't discriminate. Placing the burden of proof on the employer rather than the candidate would shift the power imbalance back in favor of the powerless, especially given that these systems are often little more than "black boxes" even to the organizations that use them.

The researcher also argues that there should be an explicit right to an explanation whenever AI technologies are part of the recruitment process. Meanwhile, training data should clearly represent society, with guidelines offered to employers to help them comply with these requirements efficiently.


In the meantime, these systems are widely used and may cause many people to be treated unfairly in the labor market. As AI becomes more embedded in recruitment, the risks of bias and discrimination grow. Without stronger regulation and greater transparency, these systems may quietly reinforce the very inequalities they promise to remove. Ensuring fairness means holding developers and employers accountable so candidates aren’t left guessing why they didn’t make the cut.


About the Author
Adi Gaskell

I currently advise the European Institute of Innovation & Technology, am a researcher on the future of work for the University of East Anglia, and was a futurist for the sustainability innovation group Katerva, as well as mentoring startups through Startup Bootcamp. I have a weekly column on the future of work for Forbes, and my writing has appeared on the BBC and the Huffington Post, as well as for companies such as HCL, Salesforce, Adobe, Amazon and Alcatel-Lucent.

Main image: Tommy Diner | Unsplash