Artificial intelligence (AI) represents a whole new computing platform, and every enterprise software company will have to learn how to leverage it. But so will users of enterprise software, which includes human resources and learning and development (L&D) teams. What’s the best way to approach such a disruptive, even fear-generating, new paradigm?
Based on our research and discussions with dozens of large organizations, we can offer the following advice:
Decide What Problem You Want to Solve
Success with AI is all about focusing on the problem you want to solve. Vendors will soon be offering products with AI either added on, built in or built from the ground up. In the short term, HR teams can expect analytics-based HR tech to offer AI-based extensions, such as automatic job descriptions, messages to candidates and even customized interview scripts. Further out, plan for conversations with vendors about how machine learning is extending the functionality of their HCM systems, for example by recommending courses to individuals based on their job role, activities or specified skills. Down the line, truly second-generation platforms built on AI will offer recruitment tools that match all of your internal data with millions of external data points about pay and skills to optimize your talent management, among other uses.
While many of these new features will be valuable at the outset, where do you want to focus? One of our clients, a large financial firm, realized that its complex onboarding and compliance process was hurting the business, so it is using a set of AI-enabled tools to radically streamline and personalize this process. Once it's working, the firm plans on extending it to all employee transitions, so it becomes a platform for many more employee use cases.
Related Article: ChatGPT Opens the Floodgates of AI in HR
Understand the Difference Between a Traditional HR Tech System and an AI-Powered Platform
Whichever variety of AI HR/L&D services you will soon be using — whether add-ons to existing systems or platforms built on AI, and it'll probably be a combination of the two — it helps to understand how different this form of enterprise product really is in terms of how you use it. Transactional systems (e.g., payroll, learning management, etc.) are designed to capture data quickly, safely and with integrity. They are built on relational databases (rows and tables), which model typical business transactions.
That’s fine for many use cases we’ve had to manage until now, but AI systems (particularly large language models) have a fundamentally different approach to data (which, as we’ve just seen, is not just your data but lots of inputs). They don’t just store this data as-is; they try to interpret what the various bits may mean. For example, they may take a string of text, or even images, video and audio, and break it down into “tokens” (for text, short groups of characters) that cluster together. Once they look at these clusters of data, the AI figures out which clusters are similar, which go before or after others and more. Without knowing what the data really means, they’re “intelligently” interpreting what’s going on. In practical terms, that means previously hard-to-model concepts like salary, prior work history and employee sentiment can finally be analyzed in depth.
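To make the tokenization idea above concrete, here is a minimal, non-authoritative sketch. Real LLM tokenizers (such as byte-pair encoding) learn their sub-word vocabularies from enormous corpora; the tiny vocabulary and the greedy longest-match strategy below are invented purely for illustration.

```python
# Toy illustration of sub-word tokenization, the first step an LLM
# applies to text. The vocabulary here is hand-picked for the example;
# real tokenizers learn theirs from data.

VOCAB = ["on", "board", "ing", "employ", "ee", "s", "re", "view"]

def tokenize(word: str) -> list[str]:
    """Greedily split a word into the longest known sub-word tokens."""
    tokens = []
    i = 0
    while i < len(word):
        # Try the longest vocabulary entry that matches at position i.
        for size in range(len(word) - i, 0, -1):
            piece = word[i:i + size]
            if piece in VOCAB:
                tokens.append(piece)
                i += size
                break
        else:
            tokens.append(word[i])  # unknown character becomes its own token
            i += 1
    return tokens

print(tokenize("onboarding"))  # ['on', 'board', 'ing']
print(tokenize("employees"))   # ['employ', 'ee', 's']
```

Once text is reduced to tokens like these, the model can learn which tokens tend to appear near each other — the “clustering” described above.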
In your day-to-day, that means you will be able to take the data about turnover in your company, run your AI across it and see what it generates. You may be shown that some managers, job roles or locations have higher turnover than others. You may also be shown patterns based on real trends in your company. For example, people with certain college degrees at certain ages are more likely to leave than others; the machine will then show you the benchmarks for your industry. You may see that turnover in your sales function is actually lower than your competitors, but your marketing cohort’s turnover is higher. As a result, you now have real knowledge you can use to fix the problem.
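The grouped turnover comparison described above can be sketched in a few lines. The employee records and the industry benchmark figure below are invented for illustration; a real analysis would draw on your HRMS data and a vendor-supplied benchmark.

```python
# Illustrative sketch: turnover rate by department, compared against a
# hypothetical industry benchmark. All figures are made up.
from collections import defaultdict

records = [  # (department, left_this_year)
    ("sales", False), ("sales", False), ("sales", True), ("sales", False),
    ("marketing", True), ("marketing", True), ("marketing", False),
]
INDUSTRY_BENCHMARK = 0.30  # hypothetical industry-wide turnover rate

counts = defaultdict(lambda: [0, 0])  # dept -> [leavers, headcount]
for dept, left in records:
    counts[dept][0] += int(left)
    counts[dept][1] += 1

for dept, (leavers, total) in counts.items():
    rate = leavers / total
    flag = "above" if rate > INDUSTRY_BENCHMARK else "at or below"
    print(f"{dept}: {rate:.0%} turnover ({flag} benchmark)")
```

With this toy data the output mirrors the scenario in the text: sales turnover sits at or below the benchmark while marketing's is well above it, pointing at where to intervene.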
With more data and these new, powerful ways of assessing data, HR analysis and predictions become more accurate and more useful. What you will do with them is going to be your next problem.
Use AI-Powered Insights to Drive Decision-Making
Here’s an example of AI’s potential decision-making benefits. A few years back, Liberty Mutual conducted a study to identify the best candidates for auto insurance sales. After looking at college degrees, GPA, work experience and extracurricular activities, its model found that the most predictive factor in performance was “having worked in auto sales prior to coming to Liberty.”
This one insight let the company sharpen its recruitment and save significant time and resources.
AI is going to provide equivalent ways to build accurate predictive models that can cut to the heart of problems and identify HR or learning strategies that will work because they’re based on real evidence, objectively analyzed. Organizations will no longer have to rely on instinct, standard models and “how we do things round here.”
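The Liberty Mutual finding boils down to ranking candidate attributes by how strongly each one predicts performance. Here is a deliberately simple, stdlib-only sketch of that idea; the records, attribute names and association score are all invented for illustration, and a real model would use far more data and a proper statistical method.

```python
# Sketch of ranking candidate attributes by predictive strength,
# in the spirit of the Liberty Mutual study. All data is fabricated.

records = [
    # (has_degree, prior_auto_sales, high_gpa) -> top_performer
    ((1, 1, 0), 1), ((0, 1, 1), 1), ((1, 1, 1), 1), ((0, 1, 0), 1),
    ((1, 0, 1), 0), ((1, 0, 0), 0), ((0, 0, 1), 0), ((1, 0, 1), 1),
]
FEATURES = ["has_degree", "prior_auto_sales", "high_gpa"]

def assoc_score(idx: int) -> float:
    """Crude association: P(top performer | feature) - P(top | no feature)."""
    with_f = [y for (x, y) in records if x[idx] == 1]
    without = [y for (x, y) in records if x[idx] == 0]
    return sum(with_f) / len(with_f) - sum(without) / len(without)

ranked = sorted(FEATURES, key=lambda f: assoc_score(FEATURES.index(f)),
                reverse=True)
print(ranked)  # most predictive attribute listed first
```

In this toy dataset, prior auto sales experience dominates while degrees and GPA add almost nothing — the same shape of result the study reported.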
Related Article: AWS's Diya Wynn: Embed Responsible AI Into How We Work
Exercise Caution Over Training Data
Ultimately, AI is just math — in fact, mostly forms of math many of us learned in college: linear algebra and calculus. The algorithm is indifferent to its subject matter, be it women without degrees or disabled workers who haven’t had a chance at promotion.
But — you aren’t. That’s why HR must dedicate itself to the integrity of the data it provides these new decision support systems. When you talk with an AI vendor, whether they are offering an add-on or a truly advanced solution, one of your first questions should be, “How does your system train itself?” or “What data does it use for training?”
These fundamental questions are more important than you realize. The typical HR system of record, for example, does not have much data to interpret. In many cases, your HRMS stores employees’ names, ages, addresses, job history and sometimes, training data. If you want to use AI for skills assessment, succession management, pay analysis or other strategic purposes, you’re going to need more data.
But if the data you collect is biased or skewed (e.g., filled with lies, incomplete surveys that left out minorities, distorted population studies, etc.), the AI will accurately produce a biased result (possibly determining all high performers are Baby Boomer males, for example). Added to this, tools like OpenAI’s ChatGPT can also “hallucinate” and bluff answers with zero evidence.
These risks are why AI engineers are so focused on safety, ethics, bias reduction and explainability. They are also risks that HR teams need to be on top of, as an answer that might seem right based on unethically compiled data is not going to be right for your company’s purpose and place in society.
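A first-pass audit HR teams can run themselves is to compare favorable-outcome rates across groups, as in the “four-fifths rule” used in US hiring audits (a group's selection rate below 80% of the highest group's is a red flag). The predictions and group labels below are invented for the example.

```python
# Illustrative bias check on model outputs: compare selection rates
# across groups and flag any group below 80% of the highest rate
# (the "four-fifths rule"). Data here is fabricated.

predictions = [  # (group, model_recommended_hire)
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def selection_rates(preds):
    """Fraction of favorable outcomes per group."""
    rates = {}
    for group in {g for g, _ in preds}:
        outcomes = [hired for g, hired in preds if g == group]
        rates[group] = sum(outcomes) / len(outcomes)
    return rates

rates = selection_rates(predictions)
best = max(rates.values())
for group, rate in sorted(rates.items()):
    status = "FLAG" if rate / best < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%} ({status})")
```

A flag from a check like this doesn't prove the model is biased, but it tells you exactly where to ask the vendor the training-data questions raised above.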
HR is on the brink of an exciting era, filled with new advancements. But we also need to exercise responsibility when engaging with these new tools and technologies.
This article is based on Understanding AI in HR: A Deep Dive, which is available here.