What to Know About Regulation of AI at Work
When it comes to artificial intelligence, regulation is not a question of if, but when.
Maybe you’ve heard that song and dance before. And sure, the same could be said of almost any workplace regulation. AI seems different, though: it has been under nearly constant scrutiny of late.
In the US, which often lags behind its European counterparts in imposing limits on business, the proactive attention AI is receiving seems nearly unprecedented. Consider that it took more than three decades to go from the Civil Rights Act to the Americans with Disabilities Act. The government is not known for moving quickly, even when millions of people need protecting.
HR and technology leaders should expect action sooner rather than later.
An EEOC Commissioner Speaks Out About AI
Writing in a recent op-ed, EEOC Commissioner Keith Sonderling made the agency's expectations plain:

"Whether employers rely on algorithms, human HR professionals, or both, they must develop and implement policies to handle various, more nuanced employee situations. If an employer uses AI for reviewing performance and tracking productivity, the employer should ensure that their AI system allows for — and accounts for — reasonable accommodations related to disability, pregnancy and religious observance."
That’s likely the softest way to beg employers and technology providers to take action so that the EEOC doesn’t have to step in.
Sonderling didn’t write the op-ed on behalf of his fellow EEOC commissioners, but his views should concern anyone counting on a hands-off approach to AI regulation. Sonderling is a political appointee of regulation-averse Republican politicians, including both former President Trump and former Florida governor Rick Scott, and has historically contributed to conservative PACs and candidates.
In short, if this is what a conservative member of the commission is speaking out on, you know additional rule-making could follow. Organizational leaders should be prepared to act.
Related Article: Can Enterprise Workers Really Work Well With AI?
Not All AI Is Equally at Risk
Do organizations need to throw away any initiatives or programs that use AI to stay safe? No, of course not. There are hundreds of benign tools that range from displaying training and learning information in the flow of work to transcribing Zoom or Teams meetings.
Others deserve a little more caution. For example, AI tools that guide employee benefit decisions could pose a possible privacy or discrimination risk if implemented poorly. 401(k) plans that use AI-driven investing options should be clear with employees about the potential risks.
But some areas of talent management raise big red flags. Hiring, promotion, performance, compensation, discipline and employee relations are all areas where organizations need clarity about what the AI does and how it mitigates risk.
It’s also worth taking employee perceptions into account. Generally, employees accept technologies that directly help them succeed; they are skeptical of nearly everything else. Clear communication about the AI's purpose and how it works, along with offering alternatives where possible, is critical.
Related Article: Artificial Intelligence in HR Remains a Work in Progress
How to Plan for Uncertainty
There’s no telling when or how AI regulations will come and what impact they will have on organizations and technology providers.
Organizations that use opaque or untested AI-driven tools for critical areas like performance management or hiring should have re-evaluated those tools already, after EPIC lodged a complaint with the Federal Trade Commission against HireVue. That complaint will enter its third year in November.
Companies that use better-defined AI tools should still keep a close eye on changes in the regulatory environment. Depending on an overconfident solution provider to keep you in the clear is the last thing you want to do.
Unfortunately, government agencies and Congress have a less-than-stellar record of clear technology regulation. That might mean some of those benign tools get caught up in overzealous or murky rule-making.
In any case, it’s worth watching closely over the next few years.