
Making AI Hiring Tools Work for Everyone

By Nick Kolakowski
As AI enters the hiring process, organizations must balance efficiency with fairness by auditing data, reviewing outcomes and keeping humans in the loop.

Overworked human resources (HR) specialists are still figuring out to what extent artificial intelligence (AI) tools can automate the most time-consuming parts of their jobs, including candidate selection and first-round interviewing. Only 28% of HR leaders were implementing generative AI tools as of October 2024, according to SHRM, although later studies have indicated more widespread AI adoption among individual HR employees.

Thanks to pressure from the market and from senior leadership, AI adoption within HR will no doubt increase over the next few years, which means HR organizations need to take conscious steps to reduce bias in AI tools, or else their companies’ hiring processes risk being stuck in a digital echo chamber.

The prospect of biased AI hiring tools is real. In one notorious example dating to 2014, Amazon trained an automated recruiting system on a decade of resumes submitted predominantly by men, leading it to develop a bias against women candidates. That’s just one example of the ways AI can distort the hiring process. HR leaders interested in using AI in their hiring process must build a system based on trust, transparency and fairness. But how?

Getting Proactive with HR AI Tools and Training Data

HR leaders who want inclusive AI tools should design toward that goal from day one. Systems that reduce bias do the following, according to Nicole Fougere, creator of SeekSuite, a modular, bias-aware AI recruitment platform:

  • Audit training data: These systems evaluate which traits have been historically rewarded and why.
  • Predefine success signals based on the role’s requirements: This includes must-have technical skills, performance indicators and values tied to outcomes, which become the model’s foundation.
  • Make decisions transparent: This helps hiring teams see and challenge patterns that don’t make sense.

“In practice, that means HR teams need to predefine success signals before sourcing,” Fougere added. 
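To make those first two steps concrete, here is a minimal Python sketch of what predefined success signals and a training-data audit might look like. This is an illustration under assumptions, not SeekSuite’s actual method; the column names, toy data and the 0.3 correlation threshold are all hypothetical.

```python
# A minimal sketch (not any vendor's actual method) of two bias-reduction
# steps: predefine success signals for the role, then audit historical data
# for traits the old process rewarded that aren't job-related.
import pandas as pd

# Success signals defined up front, tied to the role's requirements.
SUCCESS_SIGNALS = {"sql_skill", "python_skill", "projects_shipped"}

# Tiny stand-in for years of historical applicant data; "hired" is 0/1.
history = pd.DataFrame({
    "sql_skill":        [1, 1, 0, 1, 0, 0],
    "python_skill":     [1, 0, 1, 1, 0, 1],
    "projects_shipped": [3, 2, 1, 4, 0, 1],
    "gap_in_resume":    [0, 1, 1, 0, 1, 1],  # a trait that should NOT matter
    "hired":            [1, 1, 0, 1, 0, 0],
})

# Rank traits by how strongly the historical process rewarded them.
rewarded = (history.drop(columns="hired")
                   .corrwith(history["hired"]).abs()
                   .sort_values(ascending=False))

# Flag strongly rewarded traits that were never defined as success signals;
# these are candidates for proxy bias and deserve human review.
suspect = [t for t, r in rewarded.items() if r > 0.3 and t not in SUCCESS_SIGNALS]
print("Historically rewarded but not job-defined:", suspect)
```

Run on this toy data, the audit surfaces “gap_in_resume” as a trait the past process penalized even though no one defined it as job-relevant; that is exactly the kind of pattern a hiring team should be able to see and challenge.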

How to Prepare Your Organization for an AI Hiring System

On a broader level, implementing successful AI hiring tools means making some organizational and workflow changes. Here are some tips from Doug Stephen, president of CGS Immersive, who led the development of Cicero, an immersive AI roleplay platform designed to address workforce soft-skills gaps fairly:

  • Identify stakeholders: Diverse stakeholders from across the company should help define success metrics and feedback loops for any proposed AI HR system.
  • Establish a bias review team: Such a team might include members with expertise in HR, DEI and more, providing multiple perspectives whenever an AI outcome needs review.
  • Document early and often: Record where an AI’s data comes from, how models are trained and potential pitfalls. For example, if certain groups are underrepresented in the data, that gap should be remedied.
  • Keep humans in the loop: Inserting human review into any HR AI process is critical, especially for important decisions. For example, a human HR specialist should assess and sign off on any candidates an AI system surfaces before they advance to the next stage.
  • Test across user segments: “Evaluate system performance across different demographics before full deployment,” Stephen suggested. “Does the feedback change depending on who’s being assessed? Is the tone fair and constructive across the board?” (A sketch of this kind of check follows this list.)
  • Add feedback mechanisms for candidates: Design the system so a candidate can flag anything about the application process that feels confusing or unfair. This helps build trust.
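As one concrete way to run Stephen’s segment test, here is a brief Python sketch built around the EEOC’s four-fifths rule of thumb: if any group’s selection rate falls below 80% of the highest group’s rate, the result deserves review. The segment labels and toy data are illustrative assumptions.

```python
import pandas as pd

def adverse_impact_report(df: pd.DataFrame, group_col: str, selected_col: str) -> pd.DataFrame:
    """Selection rate per segment, plus its ratio to the best-performing
    segment. Ratios under 0.8 are a conventional adverse-impact red flag."""
    report = df.groupby(group_col)[selected_col].mean().rename("selection_rate").to_frame()
    report["impact_ratio"] = report["selection_rate"] / report["selection_rate"].max()
    report["flag"] = report["impact_ratio"] < 0.8
    return report

# Toy pool already scored by the screening tool under evaluation; in practice
# this would be a full held-out applicant set covering each demographic segment.
pool = pd.DataFrame({
    "segment":  ["A", "A", "A", "A", "B", "B", "B", "B"],
    "selected": [1,   1,   1,   0,   1,   0,   0,   0],
})
print(adverse_impact_report(pool, "segment", "selected"))
```

The four-fifths rule is a screening heuristic, not a legal safe harbor; a flagged segment is a signal to trigger the human review Stephen describes, not an automatic verdict.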

Independent audits of companies’ use of AI hiring tools are likewise critical, and may soon become a legal requirement, Fougere said. Other jurisdictions may follow the example of New York City, whose Local Law 144, passed in 2021 and enforced since mid-2023, requires employers to commission third-party bias audits of AI-based hiring and promotion tools and to publish the results.

However that legal situation pans out, companies should consider conducting their own internal audits with their own employees. “Regular audits need to be more than a compliance checkbox — they must be structured, ongoing processes that evaluate both the data and the outcomes of AI-driven systems,” Stephen said. “One key is to embed audits throughout the lifecycle of the tool, not just at launch. This means reviewing how models behave as new data comes in, and validating that performance remains equitable across different user groups.”

While many HR specialists focus on the risks of AI during hiring and onboarding, it’s also worth examining other aspects of the employee experience, such as training and evaluations, where biases can also creep in.

“For training systems — especially those focused on soft skills — this means assessing how feedback varies across gender, race, age, neurodiversity and even regional dialects or communication styles,” Stephen added. “Bias can creep in through something as subtle as tone analysis or facial recognition, so audits need to examine the outputs and the assumptions behind them.”
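One way to put Stephen’s lifecycle audits into practice is a recurring job that recomputes per-group outcome rates on fresh data and compares them with a baseline captured at launch. The sketch below assumes a simple tolerance threshold; the group labels, data and 0.10 value are illustrative, not a prescribed standard.

```python
# A minimal lifecycle-audit sketch: rather than a one-time launch check,
# periodically recompute per-group outcome rates on new data and compare
# them against a baseline recorded when the tool went live.
import pandas as pd

DRIFT_TOLERANCE = 0.10  # max allowed change in a group's pass rate (assumed)

def audit_against_baseline(current: pd.DataFrame, baseline_rates: pd.Series,
                           group_col: str = "segment", outcome_col: str = "passed") -> dict:
    """Return the groups whose outcome rate has drifted beyond tolerance."""
    current_rates = current.groupby(group_col)[outcome_col].mean()
    drift = (current_rates - baseline_rates).abs()
    return drift[drift > DRIFT_TOLERANCE].to_dict()

# Baseline captured at launch; "current" would come from each new audit window.
baseline = pd.Series({"A": 0.60, "B": 0.58})
current = pd.DataFrame({"segment": ["A"]*5 + ["B"]*5,
                        "passed":  [1, 1, 1, 0, 1,  0, 0, 1, 0, 0]})
print("Groups drifting beyond tolerance:", audit_against_baseline(current, baseline))
```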

What Questions Should You Ask Vendors of AI Hiring Tools?

The problem with many AI tools is they operate in a “black box,” with no means of explaining how they arrived at a particular conclusion. HR leaders should remember that when selecting tools, and look for ones that show how an AI model arrived at a particular hiring decision. For example, a transparent AI tool might state that it rejected a particular candidate for not listing three must-have skills on their resume, which would allow an HR staffer to determine that the move was a fair one. 
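For a sense of what that transparency can look like in code, here is a hedged Python sketch of a screen that records a machine-readable reason alongside every decision, so an HR reviewer can verify each rejection. The skill list and resume format are hypothetical.

```python
# A sketch of an explainable screen: every decision carries an explicit,
# auditable reason instead of an opaque score.
MUST_HAVE_SKILLS = {"python", "sql", "aws"}  # hypothetical role requirements

def screen_resume(resume_skills: set[str]) -> dict:
    """Advance or reject a candidate, with a human-readable reason."""
    missing = MUST_HAVE_SKILLS - resume_skills
    if missing:
        return {"decision": "reject",
                "reason": f"missing must-have skills: {sorted(missing)}"}
    return {"decision": "advance", "reason": "all must-have skills present"}

# A reviewer can read the reason and confirm or challenge the outcome.
print(screen_resume({"python", "excel"}))
# -> {'decision': 'reject', 'reason': "missing must-have skills: ['aws', 'sql']"}
```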

Consider asking AI vendors questions like these: 

  • What is the demographic composition of your training data?
  • Which specific fairness metrics do you test for?
  • Is your model a “black box,” or do you offer explainability?
  • Given the ongoing Workday discrimination case, what contractual provisions do you offer around liability and indemnification for discrimination claims as this area of law evolves?

Although there’s a lot of pressure on HR leaders to implement AI solutions as fast as possible, a measured approach helps companies avoid reputational and legal risks.

“Fairness in artificial intelligence goes beyond simply good intentions,” said Kelsey Szamet of Kingsley Szamet Employment Lawyers. “It requires setting up outright policies and accountability mechanisms, as well as a dedication to ongoing evaluation and improvement that puts employees first. Companies that look the other way on these points experience not only possible future legal exposures, but reputation harm as well as forfeited opportunities to develop a truly inclusive workforce.”

Editor's Note: Some other considerations to keep in mind when introducing AI into HR processes:

About the Author
Nick Kolakowski

Nick's career in tech journalism started as a freelancer for The Washington Post, covering gadgets and consumer tech. Since then, he's been a reporter for B2B and B2C tech publications such as eWeek, CIOInsight and Baseline, as well as an editor at Slashdot.org and Dice.com. Connect with Nick Kolakowski:
