The Complicated Relationship Between AI and Human Resource Management
Should artificial intelligence and machine learning (AI/ML) algorithms influence how people are hired, paid, trained, evaluated and otherwise managed at work? I recently discussed this question with employment lawyers from six different countries and research scientists supporting AI/ML technology solutions that analyze data from over 100 million employees and job candidates annually. Below are four key insights gained from the conversations.
Insight No. 1: Asking if AI/ML Is Ethical Is Like Asking if Math Is Ethical
The 1975 book "Artificial Intelligence" noted that “if you asked physicists to offer a definition of their field, you would find substantial agreement. It is doubtful you would find such agreement if you asked scientists studying artificial intelligence.” Fifty years later, this statement is still true. This lack of a clear definition is one of AI/ML's core problems. People are wary of companies using methods they do not understand to make decisions that affect their lives, such as whether they get a job, and it is hard to understand something that has no clear definition. To make matters worse, many companies market AI/ML solutions as though they were endowed with mysterious, almost magical properties.
Because AI has been presented as a type of futuristic wizardry, members of the public are understandably anxious about its use. That anxiety is producing regulations that are well-intentioned but ambiguous. These regulations are difficult to comply with and could prevent companies, employees and candidates from benefiting from what is, when applied in the right way, a highly effective and valuable mathematical tool.
The book "Decoding Talent" discusses the use of AI/ML in human resources and defines it as “various types of advanced statistical analysis software that is especially good at processing complex and unstructured information.” AI/ML is neither artificial nor intelligent. It is just a complex form of applied mathematics that most people do not fully understand. There are countless examples of people trusting their wellbeing to technology that uses mathematical techniques they do not understand: smart phones, medical devices, online shopping, airplanes, elevators, the list is endless. People may not trust AI/ML because it sounds scary, but they are quite willing to trust complex math.
One wonders whether all the concern about AI/ML would have arisen if it were called something more boring but descriptive, like “iterative pattern recognition algorithms.”
Society is unlikely to agree on a definition of AI/ML any time soon. But what we can do is stop talking about AI/ML as though it were a mysterious method from the realms of science fiction. It is just math.
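To make that point concrete, below is a minimal sketch (in Python, using made-up numbers) of what machine "learning" typically amounts to: a loop that repeatedly measures prediction error and nudges a set of weights to reduce it. Nothing in it is mystical; the "intelligence" is repeated arithmetic.

```python
import numpy as np

# Toy "iterative pattern recognition": logistic regression fit by gradient descent.
# X holds two made-up candidate features; y is a made-up pass/fail label.
X = np.array([[0.2, 1.1], [0.9, 0.4], [1.5, 1.8], [0.3, 0.5]])
y = np.array([0, 0, 1, 0])

w = np.zeros(X.shape[1])  # the model's "knowledge" is just a weight vector
b = 0.0
lr = 0.1                  # step size for each adjustment

for _ in range(1000):                      # iterate...
    p = 1 / (1 + np.exp(-(X @ w + b)))     # predict with the current weights
    grad_w = X.T @ (p - y) / len(y)        # measure how wrong the predictions are
    grad_b = np.mean(p - y)
    w -= lr * grad_w                       # nudge the weights to reduce the error
    b -= lr * grad_b

print(w, b)  # the "learned" pattern: nothing more than fitted numbers
```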
Related Article: Why Enterprise AI Needs Human Intervention
Insight No. 2: AI/ML Isn't Perfect, But Is Often Far Better Than the Alternative
Criticisms of AI/ML highlight the imperfections of using mathematical algorithms to predict, measure or manage human behavior. Examples include algorithms that make hiring decisions biased against people from certain demographic groups, or that monitor employees using data in ways felt to be inappropriate or a violation of privacy. What these criticisms often fail to recognize is that the alternative methods used to address the same challenges may be far worse.
For example, it is true that AI/ML systems can display bias in hiring if they are not appropriately designed, but humans also show considerable bias when making the same decisions. Unlike human judgment, AI/ML algorithms can be proactively analyzed, and redesigned if needed, to ensure they do not promote biased hiring; a simple version of such an analysis is sketched below. The question we should ask is not “are AI/ML applications effective, fair and unbiased?” but “are AI/ML applications more effective, fair and unbiased than the alternative methods we might realistically use?” The answer is often “yes,” provided the algorithms are appropriately designed, validated and monitored.
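As one illustration of such a proactive analysis, the sketch below (Python, with hypothetical hiring data) computes an adverse impact ratio across demographic groups and flags values below 0.8, the "four-fifths" threshold commonly used as an initial screening guideline in US employment contexts. A real audit would be far more thorough, but the point is that the check is mechanical and repeatable, which cannot be said of auditing individual human judgments.

```python
from collections import Counter

def selection_rates(decisions):
    """decisions: list of (group, hired) pairs; returns hire rate per group."""
    hired, total = Counter(), Counter()
    for group, was_hired in decisions:
        total[group] += 1
        hired[group] += int(was_hired)
    return {g: hired[g] / total[g] for g in total}

def adverse_impact_ratio(decisions):
    """Ratio of the lowest group selection rate to the highest.
    Values below 0.8 are often flagged under the 'four-fifths' guideline."""
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical audit data: (demographic group, hired?)
decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", False)]

ratio = adverse_impact_ratio(decisions)
print(f"Adverse impact ratio: {ratio:.2f}", "FLAG" if ratio < 0.8 else "OK")
```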
Related Article: Interrupting Biases Beneath the Surface
Insight No. 3: AI/ML Solutions Can Cause Harm to Human Happiness and Well-Being
The book "Decoding Talent" observes that “many AI vendors talk about ‘trusted AI’ and ‘bias-free AI,’ but … any tool that is in any way opaque about how it operates — which a lot of AI technology is — can never be fully trusted or left to its own devices. Nor can we ever assume it is bias-free, even if an analysis at one point showed it was.” It is possible to create AI/ML algorithms whose use violates legal requirements guiding hiring decisions and treatment of employees. Because AI/ML algorithms are so complex, organizations may not even realize they are acting in an illegal manner.
The use of AI/ML in HR also raises concerns about people’s perceptions of procedural justice and fair treatment. People want to understand how decisions that affect their employment, pay and career development are made. Being told that “a machine decided you were not a good fit for the job” may strike some people as unfair. On the other hand, research suggests that in some employment contexts people trust AI/ML hiring algorithms more than human judgment.
The challenge facing companies is how to benefit from AI/ML while managing risks related to bias and fairness. Companies have addressed this challenge in several ways. First, create and publish a set of ethical guidelines governing the use of AI/ML techniques within the company; examples include the AI/ML guidelines published by UNESCO, SIOP, Modern Hire and SAP. Second, establish processes to review and analyze applications of AI/ML to ensure they do not violate ethical or legal guidelines. This is one of the more challenging steps because it requires the technical knowledge to determine whether AI/ML algorithms are predictively valid and unbiased (a minimal sketch of such a check follows this list of steps).
Third, be transparent with candidates and employees about what data AI/ML applications use, how it is used, and what steps are taken to ensure it is used appropriately. Some companies also enable candidates and employees to opt out of having their data included in AI/ML analyses if they feel it is inappropriate. Many vendors creating AI/ML solutions for HR follow these principles, but not all do, so be cautious.
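As a minimal sketch of the technical review mentioned in the second step above (Python, with hypothetical data; real validation studies are far more involved), the snippet below estimates predictive validity as the correlation between algorithm scores and later job performance. Repeating the same check within each demographic group is one simple probe for differential validity, i.e., a tool that predicts well for one group only.

```python
import numpy as np

def predictive_validity(scores, performance):
    """Pearson correlation between algorithm scores and later job performance.
    How high r must be to justify use is a judgment call that should be
    preregistered, not decided after the fact."""
    return float(np.corrcoef(scores, performance)[0, 1])

# Hypothetical validation data: selection scores vs. later performance ratings
scores      = np.array([62, 71, 55, 80, 90, 68, 74, 59])
performance = np.array([3.1, 3.4, 2.8, 4.0, 4.4, 3.0, 3.6, 2.9])

r = predictive_validity(scores, performance)
print(f"Predictive validity r = {r:.2f}")
# Run the same check within each demographic group to look for
# differential validity across groups.
```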
Related Article: Artificial Intelligence in HR Remains a Work in Progress
Insight No. 4: The Most Socially Harmful Uses of AI/ML Are Not in HR
Taking action to ensure the ethical use of AI/ML in HR is worthy of ongoing effort and resources. That said, applications of AI/ML in other areas of society arguably have a far greater negative impact on people’s well-being yet receive far less regulatory attention than those in HR. Examples include using AI/ML to guide credit ratings, monitor security, set insurance policies and generate advertising revenue. A particularly striking illustration is the use of AI/ML in social media applications to capture user attention. AI/ML applications in social media have been tied to rising rates of stress, loneliness and depression as well as to social divisiveness and civil unrest. It appears the mathematical techniques used to improve HR decisions through “artificial intelligence” can also be used to generate social media ad revenue by creating “artificial” fear, anxiety and anger.
One of the major differences between the use of AI/ML in HR and its use in areas such as social media is the presence of established employment laws and regulatory bodies dating back to the early 20th century (if not further). These create societal expectations and legal pressure to ensure applications of AI/ML in HR do not harm people's wellbeing. Similar laws, regulatory bodies and social expectations have not been established for applications of AI/ML in many other areas of society. This is particularly true for social media, which did not exist in any significant form prior to the 21st century.
We should not decrease our focus on ensuring AI/ML is applied ethically in HR. But we should be devoting far more attention to other applications of AI/ML than we currently do.
Guiding Our Future by Learning From Our Past
The advent of AI/ML-enabled technology is transforming many aspects of our lives and societies. These solutions are improving our ability to accomplish things we value. However, they also pose significant, frequently unintended and often highly complicated risks. Having some level of concern about the growing use of AI/ML is healthy. The problem is that few people understand how AI/ML works at a detailed enough technical level to critically evaluate whether AI/ML solutions are behaving in an ethically appropriate manner. We face the challenge of ensuring this sophisticated technology is used appropriately without overly restricting its use.
This is not unlike the situation societies faced at the turn of the 20th century, when scientific and technological advances were radically transforming the creation and processing of food products and pharmaceutical drugs. We now live in a world where billions of people readily ingest medicine and food created using biochemical and molecular biology concepts they don't understand. People do this because they entrust their safety to a highly developed system for ensuring food and drug safety that was established in the early 1900s.
Where we stand in the 21st century with AI/ML algorithms is not unlike where society stood with scientifically designed medicine and food in the early 20th century. The response we need may be similar to the one taken over 100 years ago, when people were faced with a valuable, powerful yet potentially dangerous new form of technology. It is likely to be a long, complicated journey. Where it takes us will depend far more on utilizing the organic intelligence of humans than the artificial intelligence of machines.