
We've Only Seen the Start of Regulations Around AI in Recruiting

By Virginia Backaitis
New York City may have been the first to introduce legislation regulating the use of automated employment decision tools, but it definitely won't be the last.

It’s too late to regulate AI in recruiting — that's according to at least one human resources analyst, whom we’ll be kind enough not to name. “The cat is already out of the bag,” they wrote.

The context of the conversation was the passage of New York City Local Law 144, one of the United States’ pioneering laws regulating the use of automated employment decision tools (AEDTs) by employers and employment agencies in New York City. The law requires that these tools undergo a bias audit, provide publicly available information about the audit, and give specific notices to employees or job candidates before their use. The aim is to address concerns related to algorithmic bias and ensure transparency in hiring processes.

A bit dry? Perhaps. But it’s something employers everywhere should think about.

The Dangers of Automated Employment Decision Tools

While compliance with the rule has been minimal so far, academics, experts, legislators and others are urging that attention be paid to the grave dangers the current and future use of AEDTs could create. "The bottom line is that all of them are capable of causing harm," said Benjamin Roome, PhD, co-founder and artificial intelligence ethicist at Ethical Resolve, which helps companies develop their ethics capacity by implementing systems and processes that enable reliable ethical decision-making across the organization.

He’s not alone.

“An algorithm that is used in all incoming applications at large companies could harm hundreds of thousands of applicants," said Hilke Schellmann, author of "The Algorithm: How AI Decides Who Gets Hired, Monitored, Promoted, and Fired and Why We Need to Fight Back Now."

A 2021 study by Harvard Business School professor Joseph Fuller found that automated decision software excludes more than 10 million workers from hiring discussions. And while vendors are not legally responsible for the exclusion, “ultimately, if the company (employer) using the system engages in unfair hiring practices they will be ethically and legally accountable,” said Roome.

It is hard to tell if employers who use automated employment decision tools are all that concerned. More on this later.

'Do You Prefer Baseball or Softball?'

Roughly 83% of employers, including 99% of Fortune 500 companies, use some form of automated tool as part of their hiring process, said Charlotte Burrows, chair of the U.S. Equal Employment Opportunity Commission.

How are AEDTs being used? One example comes from L’Oreal’s use of Seedlink technology. The company leveraged the chatbot Mya to converse with 13,000 jobseekers. Among the questions it asked:

  1. Tell us about a project that you worked on that failed. What did you learn from that project?
  2. Tell us about a project where you worked with multicultural teams. What experience did you have?
  3. Tell us about a situation where you are convinced about your idea, but your seniors are not. How will you convince them?

An algorithm was then applied to jobseeker/bot conversations to identify “culture fits.” While L'Oreal saved large amounts of time (up to 40 minutes per application) and money, tools like this “can replicate institutional and historical biases” according to Miranda Bogen’s Harvard Business Review article, "All the Ways Hiring Algorithms Can Introduce Bias."

"Unfortunately, we found that most hiring algorithms will drift toward bias by default. While their potential to help reduce interpersonal bias shouldn’t be discounted, only tools that proactively tackle deeper disparities will offer any hope that predictive technology can help promote equity, rather than erode it," she concluded.

Schellmann said that some of the tools she experimented with drew inconsistent conclusions. While testing one company’s AEDT, she ignored the questions, which were asked in English, and instead read an unrelated Wikipedia entry aloud in German as her answer. Disturbingly, she scored higher than average. “I was expecting an error message,” she said.

Another tool Schellmann experimented with asked whether you prefer baseball or softball. The answer can probably predict gender, which hopefully wasn’t the goal. Besides, “the job doesn’t have anything to do with sports, so why does it matter?” asked Schellmann. Still another tool she looked at was a game in which you had to push the space bar quickly in order to score well. “Pushing that space bar had nothing to do with the role. And what if the person had a disability that kept them from performing well?”

“We should consider if tools like these should be part of the hiring process,” said Schellmann.

She isn’t the only one concerned. Some city, state and federal officials want proof that AEDTs aren’t creating biases relative to sex, race and ethnicity. (Some would like to include age, but most regulations stop short of mandating it.)

“No one should be using these systems uncritically, as they are capable of causing serious harm,” said Roome.


AI Regulation: A Chicken or Egg Argument

New York City’s Local Law 144 is among the first laws in the United States to prohibit employers and employment agencies from using an automated employment decision tool unless it is accompanied by a bias audit and the necessary notices. Some interpretations suggest the law applies only if a "machine," rather than a human, makes the final hiring decision. Opponents argue that the real issue lies in the potential harm caused when algorithms systematically exclude certain job applicants. Business groups like BSA, to which Microsoft, SAP and Workday belong, claim the law is impractical because of the difficulty of regulating AI, a rapidly advancing field with uncertain implications.

That said, 391 New York City employers were required to comply with Local Law 144, but only Morgan Stanley, Pfizer, Cigna, Paramount and 14 others did. Only 13 posted transparency notices informing job seekers of their rights. Some employers that chose not to comply claimed they have set their AEDTs aside and no longer use them. Others, many of whom belong to BSA, say that subjecting AI to independent audits isn’t yet feasible because the field is still in its early stages, with no proper rules or oversight bodies in place.

It's important to note that non-complying companies could be penalized for failing to disclose the required information in their public reports. This includes details about the algorithms they use, along with an "average score" that candidates of various races, ethnicities, genders and combinations thereof are anticipated to receive from these algorithms, presented in the form of a score, classification or recommendation. In addition, these reports must contain the algorithms' "impact ratios," defined as the average score given by the algorithm to individuals within a specific category (such as Black male candidates) divided by the average score of individuals in the highest-scoring category.
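To make the "impact ratio" concrete, here is a minimal sketch, in Python, of how such a ratio might be computed from audit results. The category names and average scores are hypothetical illustrations, not figures from any actual bias audit.

```python
# Hypothetical average scores an AEDT might assign to candidates in each
# demographic category (illustrative numbers only, not real audit data).
average_scores = {
    "white male": 0.82,
    "white female": 0.79,
    "Black male": 0.64,
    "Black female": 0.61,
}

# The impact ratio for a category is its average score divided by the
# average score of the highest-scoring category.
top_score = max(average_scores.values())
impact_ratios = {
    category: round(score / top_score, 2)
    for category, score in average_scores.items()
}

for category, ratio in impact_ratios.items():
    print(f"{category}: impact ratio {ratio}")

# A ratio well below 1.0 (here 0.78 for "Black male" candidates) indicates
# the tool scores that group substantially lower than the top-scoring group.
```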

Failure to comply with these requirements is supposed to result in penalties: $375 for the first violation, $1,350 for the second and $1,500 for each subsequent violation. Each day an employer continues to use a noncompliant algorithm counts as a separate violation, as does each failure to provide adequate disclosure.
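As a rough illustration of how those penalties could accumulate, here is a hypothetical sketch that assumes one violation per day of continued use of a noncompliant tool; the 30-day scenario is invented for illustration, not taken from the law.

```python
# Penalty schedule described above: first, second and subsequent violations.
FIRST, SECOND, SUBSEQUENT = 375, 1_350, 1_500

def accrued_penalty(days_of_noncompliant_use: int) -> int:
    """Total penalty if each day of noncompliant use counts as one violation."""
    if days_of_noncompliant_use <= 0:
        return 0
    if days_of_noncompliant_use == 1:
        return FIRST
    return FIRST + SECOND + SUBSEQUENT * (days_of_noncompliant_use - 2)

# A hypothetical employer that keeps using a noncompliant tool for 30 days:
# 375 + 1,350 + 28 * 1,500 = 43,725
print(accrued_penalty(30))  # 43725
```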


While New York City is one of the first localities to have a law like Local 144, similar legislation is pending in four states:

  • New York (Proposed): Bill A00567 proposes that employers must conduct an annual analysis of disparate impact. This analysis can serve as evidence for the attorney general to initiate investigations.
  • New Jersey (Proposed): Bill A4909 proposes that businesses selling automated employment decision tools must (i) conduct bias testing within a year before selling the tool, (ii) offer annual bias auditing services to buyers at no extra charge, and (iii) notify buyers about pending state legislation affecting the tool.
  • California (Proposed): Bill AB331 proposes that deployers of automated decision tools must carry out impact assessments for each tool used by the employer.
  • Massachusetts (Proposed): Bill H.1873 proposes that employers must undertake algorithmic impact assessments to assess potential discrimination risks posed by automated decision systems.

So to say that it’s too late to regulate AI in recruiting is ridiculous. In fact, we’re just starting.

About the Author
Virginia Backaitis

Virginia Backaitis is a seasoned journalist who has covered the workplace since 2008 and technology since 2002. She has written for publications such as The New York Post, Seeking Alpha, The Herald Sun, CMSWire, NewsBreak, RealClear Markets, RealClear Education, Digitizing Polaris, and Reworked, among others.

Main image: Wesley Tingey