Feature

Why AI Hiring Discrimination Lawsuits Are About to Explode

7 minute read
By Virginia Backaitis
AI is reshaping hiring — and the courtroom. Job seekers are suing over biased screening tools, and experts say a wave of lawsuits is just beginning.

Something's happening in employment law that should make every HR leader wary. Arshon Harper applied to around 150 IT positions at Sirius XM Radio and got rejected by all of them. He claims the company's AI screening system (iCIMS) downgraded his applications based on race — possibly using zip code and educational institutions as proxy data — factors that have no relevance to his ability to perform the jobs. So he's suing in federal court. And his case might not be an outlier.

Why?

Right now, job seekers are submitting hundreds of applications per month and too many never hear back. Many believe algorithms are ghosting them before a human ever sees their resume. And when people apply for 150 jobs and get 150 rejections, they start looking for patterns. When those patterns line up with their age, race or disability status, they start looking for lawyers.


The AI Hiring Discrimination Lawsuits Are Piling Up

Harper isn't alone in taking on AI hiring tools. Here's what's already filed:                 

Derek Mobley took a different approach. The 40-year-old Black job seeker applied to between 80 and 100 jobs and was rejected from all of them. But here's the twist: he didn't sue the employers who rejected him. He went after the tech vendor, Workday, arguing that its AI screening tools discriminated against him based on race, age and disability.

Mobley's case is testing a groundbreaking legal strategy: that tech companies might qualify as "employment agencies" under Title VII of the Civil Rights Act, which would make them directly liable for discrimination. The case moved forward in May when Judge Rita Lin of the U.S. District Court for the Northern District of California granted preliminary certification under the Age Discrimination in Employment Act (ADEA), allowing the lawsuit to advance as a nationwide collective action. If Mobley and his fellow plaintiffs win, the case could change who bears the legal risk: the employer or the software vendor that provides the candidate selection tools.

Aon Consulting is facing trouble with the ACLU over three of its hiring tools. The complaint, filed with the FTC in May 2024, argues that Aon's Adept15, Vid-assess AI and gridChallenge tools discriminate against people with disabilities and certain racial groups. The ACLU also challenged the claim that Aon's tools were "bias-free." In other words, just because vendors say their AI is fair doesn't make it so.

Intuit and HireVue are dealing with EEOC charges filed in March 2025. A deaf Indigenous applicant claims that the HireVue automated video software used by Intuit lacked proper captioning. When she requested a CART accommodation, or real-time captioning support, the company allegedly denied it, which may have tanked her results.

This case raises a new question: Do accessibility and accommodation requirements apply to AI tools the same way they apply to human interviewers?

Why the Flood Is Coming

AI has turned job seeking into a numbers game. Job seekers used to send out around 10-20 carefully crafted applications. Now they're blasting out 100, 200 and sometimes 300 applications per week using automated tools.

Employers turned to AI as a result, explained Karen Odash, an employment attorney at law firm Fisher Phillips: "These tools became necessary to find good matches because employers are flooded with resumes." She added, "AI tools may lead to discriminatory hiring practices, even if inadvertent."

The combination of automated job applications and employer use of AI to screen massive application volumes creates a perfect storm. When someone applies to hundreds of jobs and gets rejected by automated systems every single time, they start asking questions. Did AI downgrade them because they went to a historically Black college? Because their zip code is in a predominantly minority neighborhood? Because they're over 40 and have a 20-year work history that the algorithm interprets as "overqualified"? Because they need accommodation for a disability?

Some of those suspicions will be wrong. But experience shows that others won't be. Automated tools have selected candidates based on irrelevant factors like playing high school lacrosse or being named Jared. They've eliminated qualified candidates with disabilities because the AI interpreted traits like low "optimism scores" as red flags.

A University of Washington study published in 2024 provided evidence of systematic bias in AI tools. Researchers tested three state-of-the-art large language models from Salesforce, Mistral AI and Contextual AI against over 500 job listings. They varied 120 first names associated with white and Black men and women across their resumes, generating more than three million comparisons.

The results were disturbing. The systems favored white-associated names 85% of the time versus Black-associated names only 9% of the time (6% were other). Male-associated names were preferred 52% of the time versus female-associated names just 11% of the time. But the intersectional findings revealed something even more disturbing: the systems never preferred Black male-associated names over white male-associated names. 

"We found this really unique harm against Black men that wasn't necessarily visible from just looking at race or gender in isolation. Intersectionality is a protected attribute in California right now, but looking at multidimensional combinations of identities is incredibly important to ensure the fairness of an AI system," said lead researcher Kyra Wilson.

This research matters because it studied open-source models at massive scale, unlike previous small studies. And it shows that bias patterns aren't just additive. The harm to Black men specifically creates disparities that wouldn't show up if employers only tested for race bias or gender bias separately.
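For organizations that want to probe their own screening stack the same way, the mechanics of a name-swap audit are simple: hold the resume constant, vary only the candidate name, and count which version the system prefers. The sketch below is illustrative only and is not the University of Washington study's code; the name lists are placeholders, and `score_resume` is a hypothetical stand-in for whatever scoring call the tool under test exposes.

```python
# Minimal sketch of a name-swap resume audit (illustrative; placeholder names).
from itertools import product

WHITE_ASSOC_NAMES = ["Greg Baker", "Emily Walsh"]            # placeholder list
BLACK_ASSOC_NAMES = ["Darnell Jackson", "Lakisha Robinson"]  # placeholder list

def score_resume(resume_text: str, job_posting: str) -> float:
    """Hypothetical stand-in: replace with a call to the screening tool under test."""
    return 0.0  # placeholder so the sketch runs end to end

def audit(resume_template: str, job_posting: str) -> dict:
    """Swap only the candidate name (template contains '{name}') and tally preferences."""
    tallies = {"white_preferred": 0, "black_preferred": 0, "tie": 0}
    for white_name, black_name in product(WHITE_ASSOC_NAMES, BLACK_ASSOC_NAMES):
        score_w = score_resume(resume_template.format(name=white_name), job_posting)
        score_b = score_resume(resume_template.format(name=black_name), job_posting)
        if score_w > score_b:
            tallies["white_preferred"] += 1
        elif score_b > score_w:
            tallies["black_preferred"] += 1
        else:
            tallies["tie"] += 1
    return tallies
```

Run against enough resume templates and job postings, lopsided tallies are an early warning sign worth escalating before a plaintiff's lawyer finds them for you.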

Who's Liable When AI Hiring Tools Discriminate: Employers or Vendors?

When an AI tool discriminates, who takes the fall? The short answer: probably both.

Guy Brenner, labor and employment partner at Proskauer Rose, put it bluntly: "There's no defense saying that 'AI did it' [broke a law]. If AI did it, it's the same as the employer did it," he told Reworked.

Employers can't outsource liability.

Federal anti-discrimination laws like Title VII and the ADA apply regardless of whether a human or an algorithm makes the decision. The EEOC has been clear: if your vendor's tool screens out protected classes at disproportionate rates, you're still liable. The EEOC has also stated that if tech vendors regularly procure prospective applicants or employees for an employer, they may qualify as employment agencies under federal law. That means vendors could face direct liability for discrimination, not just be named as a party to a suit against the employer, and employers might then sue vendors for indemnification.

The Aon case shows that companies selling biased AI tools could face FTC complaints for deceptive marketing practices. If a vendor claims their tool is "bias-free" or "eliminates discrimination," and that turns out to be false, they could face regulatory action.


The reality is that employers bear most of the immediate legal risk. Courts and agencies expect companies to audit their vendors, test for bias and maintain human oversight. "We trusted the vendor" isn't a defense. It's an admission that you didn't do due diligence, said the attorneys we spoke to.
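What does "test for bias" look like in practice? One common starting point, drawn from the EEOC's longstanding adverse-impact guidance rather than from any of the cases above, is to compare selection rates across groups and flag ratios that fall below the four-fifths (80%) rule of thumb. The sketch below assumes you can export each applicant's group and screening outcome from your applicant tracking system; the field names are hypothetical.

```python
# Minimal adverse-impact check: compare selection rates by group against
# the four-fifths (80%) rule of thumb. Field names are hypothetical.
from collections import defaultdict

def selection_rates(applicants: list[dict]) -> dict[str, float]:
    """applicants: [{'group': 'over_40', 'advanced': True}, ...]"""
    counts = defaultdict(lambda: {"total": 0, "advanced": 0})
    for a in applicants:
        counts[a["group"]]["total"] += 1
        counts[a["group"]]["advanced"] += int(a["advanced"])
    return {g: c["advanced"] / c["total"] for g, c in counts.items() if c["total"]}

def adverse_impact_flags(rates: dict[str, float], threshold: float = 0.8) -> dict[str, float]:
    """Flag groups whose selection rate falls below `threshold` of the highest group's rate."""
    best = max(rates.values())
    return {g: r / best for g, r in rates.items() if best and r / best < threshold}
```

A flagged ratio isn't proof of discrimination on its own, but it is exactly the kind of number a regulator or plaintiff's expert will compute from the same data in discovery.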

But the legal landscape is shifting. 

"In the past HR could pick a tool and decide how it works. Now there's an application of law that was never intended to apply in context of an AI tool," Brenner told Reworked. Moreover, there isn't much case law, yet, to clarify whether tech intermediaries qualify as employment agencies because the question hasn't come up until recently. 

The Black Box Problem

Part of what makes these cases so complicated is that many AI systems still operate as "black boxes." The algorithms are so complex that even the people who designed them can't always explain why they made a specific decision. This is where Explainable AI (XAI) becomes critical.

Explainable AI is "a set of processes and methods that allows human users to comprehend and trust the results and output created by machine learning algorithms," according to IBM. Instead of just getting a score or recommendation, explainable AI shows which factors the algorithm considered and how much weight each factor received.

Brenner emphasized this point: "Employers need to understand how the AI they are using works. It should be transparent." Without that transparency, employers can't audit for bias, can't fix problems and can't defend themselves in court.
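For models an employer controls (as opposed to a vendor's sealed product), even basic open-source tooling can surface which inputs drive decisions. The sketch below uses scikit-learn's permutation importance as one simple way to estimate how much each feature moves the output; it illustrates the idea, is not a description of any vendor's XAI product, and uses hypothetical feature names and synthetic data.

```python
# One simple way to surface factor weights: permutation importance.
# Illustrative only; real screening models, features and data will differ.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
feature_names = ["years_experience", "skills_match", "zip_code_bucket"]  # hypothetical
X = rng.normal(size=(500, len(feature_names)))
# Synthetic target driven mostly by skills_match, for demonstration purposes.
y = (X[:, 1] + 0.3 * X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)
result = permutation_importance(model, X, y, n_repeats=20, random_state=0)

# A zip-code proxy carrying real weight would be a red flag worth investigating.
for name, importance in sorted(zip(feature_names, result.importances_mean),
                               key=lambda t: -t[1]):
    print(f"{name}: {importance:.3f}")
```

The point isn't the specific library; it's that if a proxy for a protected trait turns out to carry meaningful weight, the employer needs to be able to see that before a plaintiff does.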

The problem is that many employers don't actually know how their AI tools work. They buy software from vendors and trust that it's doing what it's supposed to do. But "the new use of AI can result in unintended legal consequences," said Brenner.

The Bottom Line

AI tools promise to handle massive application volumes that humans can't process. That promise is real. But when those tools learn from historical data that reflect decades of discrimination, they don't eliminate bias. They automate it and scale it up.

More lawsuits are coming because the conditions are perfect for them, according to attorneys Brenner and Odash. Millions of job seekers are sending out hundreds of applications and getting rejected by algorithms they can't see or challenge. Some percentage of those rejections will be discriminatory, whether intentionally or not. And as awareness grows about how these systems work (or don't work), and as people become more conscious of their digital rights, more applicants will fight back.

For employers, the question isn't whether to use AI in hiring, according to the experts we spoke to. The question is how to use it responsibly, with transparency and regular audits to catch problems proactively. Once a class action gets certified and discovery reveals that your AI tool screened out older workers or racial minorities, the damage is done. The settlements can be huge. The reputational hit can be worse.

The wave is building. The smart move is to get ahead of it.

Editor's Note: What else should you know about AI's use in hiring?

About the Author
Virginia Backaitis

Virginia Backaitis is a seasoned journalist who has covered the workplace since 2008 and technology since 2002. She has written for publications such as The New York Post, Seeking Alpha, The Herald Sun, CMSWire, NewsBreak, RealClear Markets, RealClear Education, Digitizing Polaris, and Reworked, among others.

Main image: mike cox | unsplash