Feature

AI's Use in Talent Acquisition Raises Questions of Responsibility

By Mark Feffer
HR leaders need to pay attention to an active lawsuit that could mean big changes to how AI fits into recruiting strategies.

Expect more attention on your use of AI in talent acquisition from all sides: regulators, legislators, candidates and employees. 

While few cases have yet examined machine-generated bias in employers’ recruiting processes, the issue is gathering momentum — and prompting organizations to act.

History in the Making

One of the first cases in this area is proceeding right now — Mobley v. Workday, Inc. — and it is worth watching for anyone using, or planning to use, AI-driven algorithms in their recruiting process.

The case stems from a lawsuit filed by Derek Mobley, an African-American man over the age of 40 who suffers from anxiety and depression. Mobley says he has applied for more than 100 positions since 2019 through Workday’s platform and was turned down for every one. He alleges the AI capabilities built into Workday’s talent acquisition suite are biased on the basis of race, age and disability.

Workday maintains there’s nothing biased about its software. 

In January, a U.S. District Court judge dismissed the case, ruling that Mobley had not adequately supported his allegation that Workday was operating as an employment agency, a threshold requirement for Title VII to apply. Last week, Mobley filed an amended complaint in an attempt to address those deficiencies.

The revised complaint argues that employers essentially delegate their hiring authority to Workday: the platform mines job-application data to detect patterns that companies can consider.

“Because there are no guardrails to regulate Workday’s conduct, the algorithmic decision-making tools it utilizes to screen out applicants provide a ready mechanism for discrimination,” the complaint read. 

In this particular case, Mobley said he was rejected, often within hours of applying, even for positions whose requirements he met or exceeded.

Related Article: When AI Discriminates, Who's to Blame?

New Technology, New Laws, More Litigation

Mobley’s isn’t the first case to highlight AI’s role in talent acquisition. In 2023, the online tutoring company iTutorGroup settled an EEOC lawsuit alleging it violated the Age Discrimination in Employment Act (ADEA) with application software programmed to automatically reject female candidates aged 55 and older and male candidates aged 60 and older. More than 200 applicants were excluded by the software, according to the complaint.

The issue came to light when a candidate’s application was rejected as soon as she submitted it with her real birth date. When she resubmitted an identical application the following day with a younger birth date, she was offered an interview.

Besides paying rejected applicants a total of $365,000, iTutorGroup agreed to provide training on the ADEA, Title VII and other federal laws, review and revise its anti-discrimination policies, and develop a complaint process for candidates and employees.

HR professionals and attorneys expect to see more cases like these. Beyond the widening use of AI in talent acquisition, more agencies and legislatures are exploring regulations intended to protect job seekers and their privacy. In July 2023, for example, New York City began enforcing Local Law 144, the first U.S. law regulating the use of AI in hiring. The law requires annual bias audits of AI-based recruiting tools to ensure they’re not producing discriminatory decisions.

According to SHRM, a growing number of businesses are incorporating AI into their HR activities. About a third of the HR professionals who apply AI to recruiting use it to review or screen resumes, automate searches or communicate with candidates, SHRM said. Other surveys estimate as many as 80% of American employers use AI at some point in their hiring process.

Related Article: We've Only Seen the Start of Regulations Around AI in Recruiting

The Forecast Calls for Fog

Of course, the EEOC has been paying attention to this growing AI-based recruiting trend, warning employers they can be held liable if their recruiting software is found to discriminate against some job seekers. But this is new ground, and only a few lawsuits on the matter have been filed so far.  

Still, a pattern has emerged: regulations tend to focus on employers, not the technology itself. The preferred approach is to require transparency rather than to regulate algorithms directly. That means employers carry both the added compliance risk and most of the legwork.

Under New York City’s law, businesses using automated tools to screen candidates for city-based jobs must calculate and publish “adverse impact ratios” showing how those tools affect selection rates across demographic groups. Candidates must also be notified that AI is being used and told how to request an alternative selection process.
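The arithmetic behind those ratios is simple enough to sketch. In the minimal example below, the group labels and applicant counts are hypothetical, and the 0.8 threshold comes from the EEOC’s traditional four-fifths rule rather than the New York City law, which only requires publishing the ratios. Each group’s selection rate is divided by the rate of the most-selected group:

```python
# Minimal sketch of an adverse impact ratio calculation.
# Group names and counts are hypothetical, for illustration only.
applicants = {"Group A": 200, "Group B": 180, "Group C": 90}
selected = {"Group A": 50, "Group B": 27, "Group C": 9}

# Selection rate per group: candidates selected / candidates who applied.
rates = {g: selected[g] / applicants[g] for g in applicants}

# Impact ratio: each group's rate relative to the highest rate.
best = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best
    # The EEOC's four-fifths rule treats ratios below 0.8 as evidence of
    # potential adverse impact; NYC's law requires publishing the ratios
    # but does not itself set a pass/fail threshold.
    flag = "potential adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.2f}, impact ratio={ratio:.2f} ({flag})")
```

A ratio well below 1.0 for any group is exactly the kind of result these published audits are meant to surface.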

It’s an area that’s been heating up. Illinois’ Artificial Intelligence Video Interview Act requires companies to disclose how their AI works, inform applicants if it’s going to be used, obtain their consent and take specific steps to protect privacy. Maryland has a similar law, and measures are being considered in California, New Jersey, Vermont and Washington, D.C.

Employers must follow these laws even as a number of basic questions remain unanswered. Attorneys, for one, worry about advising clients when many aspects of AI haven’t yet been legally defined. Last year, the EEOC published guidance related to AI and discrimination, but labor and employment attorneys aren’t satisfied.

“The EEOC guidance is quite broad and mentions software and other sorts of tools that frankly, employers have used for decades,” one told Bloomberg. Another said simple sorting or filtering when candidates are vetted might come under the EEOC’s definition of technology-originated bias. 

Still, others believe the EEOC isn’t doing much more than examining issues it’s been concerned with for years. 

“That’s just plain old unlawful,” one said of the iTutorGroup case. “A, you don’t need a computer to do that. B, you don’t need a computer or artificial intelligence to tell you that that’s not okay.”

About the Author
Mark Feffer

Mark Feffer is the editor of WorkforceAI and an award-winning HR journalist. He has been writing about human resources and technology since 2011 for outlets including TechTarget, HR Magazine, SHRM, Dice Insights, TLNT.com and TalentCulture, as well as Dow Jones, Bloomberg and Staffing Industry Analysts. He likes schnauzers, sailing and Kentucky-distilled beverages.

Main image: Melody Ayres-Griffiths | Unsplash