Feature

AI Is Here to Help. Recruiters Aren't So Sure

By Virginia Backaitis
Many recruiters still review and respond to resumes manually, despite the proliferation of tools available for the task.

While some HR tech vendors point to surveys claiming that by 2024 AI will be making hiring decisions without any human intervention, the consensus among recruiters who review resumes and applications and make job offers on behalf of employers is “that won't happen any time soon, if at all.” Some say there has to be a “human in human resources,” while others note that generative AI applications are “less capable than humans in many real-world scenarios,” as OpenAI points out on its website.

AI Is Here to Help. Recruiters Say 'Nope'

Numerous recruiters say they don’t use AI, or even much of the technology that’s been available for years and is supposedly tried and true. “The software doesn’t allow for fairness,” said Steve Levy, manager of technical recruiting at Zip Co. and co-founder of the Association of Talent Acquisition Professionals. He reads every resume he receives, systematically, from the bottom up. He’s not alone, though methods among recruiters vary.

"We review them (inbound job applications and resumes) all,” said Amy Miller, who has worked, and still works, at FAANG companies. “We have no mechanisms for reviewing that’s not manual. A recruiter could do a basic boolean string against the list of applicants but again they have to actually go in and create that string / double check for false positives/negatives." 

Eleanor Drage and Kerry Mackereth, researchers at the University of Cambridge's Centre for Gender Studies, published a paper in Philosophy and Technology arguing that AI-powered recruitment tools, which claim to eliminate bias and promote diversity in the hiring process, may actually pose risks and reinforce existing biases. One of the claims made about these tools is that they hide names, genders and other identifiers of candidates to prevent bias in the initial screening process. However, the researchers argued that these tools can still promote uniformity in hiring by reproducing cultural biases associated with the "ideal candidate," historically perceived as a white or European male. This means that even without explicit identification, the tools may implicitly favor candidates who possess characteristics similar to those of the dominant group.

Related Article: Use Generative AI to Write Performance Reviews? Not So Fast

You Are What You Eat

Additionally, some of these AI tools rank candidates based on resume scans, online assessments, and analysis of speech and facial expressions. The concern raised by the researchers is that these tools rely on past company data, which may reflect existing biases in the workforce. As a result, the tools could inadvertently perpetuate the hiring of candidates who closely resemble current employees, thereby limiting the potential for diversity within the organization.

Prof. Rangita de Silva de Alwis, founder of the AI & Implicit Bias Lab at the University of Pennsylvania Carey Law School, examined employment platforms from the perspective of 87 Black students and professionals. She also conducted an analysis of 360 online professional profiles to gain insights into how AI-powered platforms contribute to the replication and reinforcement of anti-Black bias. Among the participants, 40% reported encountering job suggestions influenced by their identities rather than their qualifications. Furthermore, 30% of respondents reported receiving job alerts that were below their current skill level.

Amazon, which built an AI recruiting tool five years ago, found that the tool didn’t like women. That’s because it learned what a good hire looks like from data on successful Amazon employees, most of whom were men. While the company, which employs some of the best minds in AI, might have been able to modify the algorithm to make the tool gender neutral, it feared the tool might teach itself to discount job candidates based on some other criteria. So instead, Amazon Web Services asks applicants to complete job assessments, which indicate whether they have the experience, aptitude and attitude required for the role. Those who do well are invited into the interview process, where a recruiter is assigned to help them succeed.

Related Article: When AI Discriminates, Who's to Blame?

Where AI Might Help

All of this said, AI might be useful at some stages of the hiring process, McKinsey partner Bryan Hancock told Reworked.

Generative AI could be part of the initial resume screen, helping a recruiter whittle a pile of 10,000 resumes down to 1,000 “using the right risk rails,” he said. Hancock specified that AI could, for example, screen out applicants who don’t have a specific license that’s required for the job, or separate individuals who don’t know Excel or PowerPoint from those who do. “It could also predict what kind of role a person who had those skills might be qualified for.”
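
As a hedged sketch of what such a “risk rail” might look like in practice: hard requirements, such as a mandatory license or named tools, can be checked with plain deterministic rules before any generative model is asked to rank anyone. The field names and requirements below are hypothetical.

```python
# A minimal sketch of a deterministic pre-screen ("risk rail") applied
# before any AI ranking. Field names and requirements are hypothetical.

REQUIRED_LICENSE = "RN"                    # e.g., a role that legally requires a license
REQUIRED_TOOLS = {"excel", "powerpoint"}   # candidate must know all of these

def passes_hard_requirements(candidate: dict) -> bool:
    has_license = REQUIRED_LICENSE in candidate.get("licenses", [])
    knows_tools = REQUIRED_TOOLS <= {t.lower() for t in candidate.get("tools", [])}
    return has_license and knows_tools

applicants = [
    {"name": "A", "licenses": ["RN"], "tools": ["Excel", "PowerPoint"]},
    {"name": "B", "licenses": [],     "tools": ["Excel"]},
]

# Only candidates who clear every hard requirement reach the next stage.
shortlist = [c for c in applicants if passes_hard_requirements(c)]
print([c["name"] for c in shortlist])  # ['A']
```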

Another implementation of generative AI could come via rewriting job requirements. “By utilizing generative technology, the system can extract the essential skills necessary for success in a given role. Of course, it remains essential for managers to review and refine the final output. They will serve as the human touchpoint to ensure the job requirements are satisfactory. Nevertheless, the integration of generative AI has the potential to greatly enhance efficiency and overall quality,” said Hancock. Whether that will be quicker than the cut-and-paste of old job descriptions that many managers now rely on remains to be seen. The real win would be having a robot of some sort follow the incumbent in a role every day for 12 months to see what they actually do and what skills they use; a better job description could be written that way.

Hancock also said generative AI could be used for candidate personalization. Presently, when an organization receives a staggering number of applications, many jobseekers get ghosted. Many others receive form letters notifying them of their status. Hancock said that with generative AI “the scope for personalization expands significantly. It becomes possible to incorporate highly specific details about the candidate, the job itself, and even suggest alternative positions if there's a mismatch.”

Related Article: AI Is Coming for Talent Acquisition and Recruiters Are Ready

And Then There's the Law

Of course, this is only true if job applicants consent to, and local laws permit, AI’s use in the application process. A survey conducted by Pew Research in April found that 66% of respondents said they would not want to apply for a job where AI was used to help make hiring decisions. Seventy-one percent said they oppose the technology making a final hiring decision, and 41% were against it being used to review job applications.

Some lawmakers and regulators don’t trust AI with hiring decisions either. The Equal Employment Opportunity Commission (EEOC) recently introduced an initiative focused on "algorithmic fairness" and has released comprehensive guidelines to help employers ensure that their use of such tools complies with the Americans with Disabilities Act (ADA).


Meanwhile, the City of New York is the first jurisdiction in the United States to pass a law governing “automated employment decision tools” (AEDTs). To comply, employers who use AI for hiring must put their tool through an annual bias audit and openly share a summary of the audit with the public. Additionally, applicants and employees who are screened by the tool must receive specific notices from the employer, ensuring they are well-informed about the process.
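
To make the audit requirement concrete, here is a minimal sketch of the impact-ratio calculation at the heart of this style of bias audit: each demographic category’s selection rate is compared against that of the most-selected category. The category names and counts below are hypothetical, and a real audit must be performed by an independent auditor.

```python
# Sketch of the impact-ratio math used in these bias audits.
# `outcomes` maps a demographic category to (selected, total applicants);
# the names and numbers here are made up for illustration.

def impact_ratios(outcomes):
    rates = {cat: sel / total for cat, (sel, total) in outcomes.items()}
    top = max(rates.values())  # selection rate of the most-selected category
    return {cat: rate / top for cat, rate in rates.items()}

sample = {
    "category_a": (120, 400),  # 30% selected
    "category_b": (45, 300),   # 15% selected
    "category_c": (30, 150),   # 20% selected
}

for cat, ratio in impact_ratios(sample).items():
    print(f"{cat}: impact ratio {ratio:.2f}")
# category_a: 1.00, category_b: 0.50, category_c: 0.67
```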

How employers will react remains to be seen; they will have to decide whether the benefits of leveraging AI in hiring exceed the cost of operating under the threat of lawsuits.

About the Author
Virginia Backaitis

Virginia Backaitis is a seasoned journalist who has covered the workplace since 2008 and technology since 2002. She has written for publications such as The New York Post, Seeking Alpha, The Herald Sun, CMSWire, NewsBreak, RealClear Markets, RealClear Education, Digitizing Polaris and Reworked, among others.

Main image: Krys Amon | Unsplash