Since ChatGPT put AI into the hands of anyone with a web browser, regulation of the technology has been squishy. Employers that adopted AI for recruiting, productivity and workforce planning assumed the technology would develop faster than any government oversight and dove in to boost efficiency and scale, usually without looking over their shoulders.
That’s changing. As we begin 2026, the regulation of AI is becoming real, and companies must think about how the technology fits with employment law, data governance and workforce strategy. Making the effort more challenging is the patchwork nature of regulation today. States and municipalities such as New York City are implementing rules while the federal government makes noise about a national strategy for regulation. AI regulation is a moving target.
The Complicated State of AI Regulation
Regulation is also happening globally, which adds to the complexity. Europe and the U.S. aren’t in sync. In 2024, the European Union finalized the EU Artificial Intelligence Act and began phasing in enforcement. The law, which classifies AI solutions by risk level, puts obligations on so-called “high-risk” uses, including hiring, promotion, performance evaluation and workforce management.
The act also requires employers to apply human oversight to hiring decisions, document the training and behavior of large language models, conduct risk assessments and maintain audit trails. This means HR can no longer treat AI as a black box. Employers must understand how their systems work, how they make decisions and how to identify and correct errors or bias.
The U.S. hasn’t reached that level. No federal statute governs AI in HR-related matters, leaving companies to deal with a patchwork of enforcement actions, court rulings and state- and city-level rules. For example, New York City’s Local Law 144 requires bias audits and candidate disclosures for automated employment decision tools.
While similar legislation is being considered in other jurisdictions, federal agencies and courts are applying existing laws to AI. For example, the U.S. Equal Employment Opportunity Commission has said algorithmic tools must comply with Title VII and the Americans with Disabilities Act. More recently, Eightfold.ai was sued over allegations the platform violates the Fair Credit Reporting Act by compiling reports on individuals without disclosure or an opportunity to correct errors.
The case illustrates that AI regulation doesn’t have to wait for new laws; existing ones already apply. Increasingly, courts and regulators treat AI as a new means of carrying out familiar employment practices rather than a new category that deserves special treatment. If the plaintiffs in Kistler v. Eightfold AI prevail, businesses could be required to provide disclosures and allow candidates to dispute reports.
HR Enters Unfamiliar Territory
Because so many AI tools are involved in hiring, performance management, scheduling and workforce planning, HR departments are the front line of compliance. Yet many employers lack an inventory of the AI systems they use, especially when the technology is embedded inside applicant tracking systems, human capital management (HCM) platforms or other software. In other cases, managers have adopted “shadow AI” tools without formal oversight, creating compliance issues that leadership may not even be aware of.
This forces HR leaders into unfamiliar territory, where understanding how algorithms function is as important as understanding employment law.
HR technology vendors have marketed AI’s autonomy, predictive abilities and speed, but those claims now face scrutiny. Regulatory risk does not disappear when technology is outsourced; an employer remains responsible for the results generated by a vendor’s platform.
In response, organizations are pressing vendors for clearer documentation, stronger audit support and contractual language that addresses bias, explainability and liability. At the same time, many contracts lack meaningful indemnification tied to regulatory enforcement, leaving employers exposed if a vendor’s tools fail to meet legal standards.
‘Decision Support,’ Not ‘Decision-Making’
To address such concerns, some technology vendors are repositioning AI as a decision-support rather than a decision-making tool, emphasizing the idea of keeping a “human in the loop.” But regulators have made clear that nominal human involvement is not enough: oversight must be real, informed and documented. Both U.S. and EU regulators have also made clear that ultimate responsibility for an AI solution’s performance lies with the employer.
Regulatory efforts are also complicated by the increasing use of agentic AI, or systems that take autonomous action rather than simply analyze data. Such capabilities are increasingly common in screening tools, interview schedulers and performance-management applications. These platforms blur the line between human judgment and automated decision-making, which puts employers in an awkward position: the more autonomy a system has, the harder it becomes to explain and defend its outcomes.
Regulators in both the U.S. and Europe highlight the need for transparency and explainability, particularly when a solution has minimal human oversight. Employers that use agentic tools may need to show not only that humans can intervene in a process, but also that they understand how the AI reaches its decisions and can override them when necessary.
In addition, many employers are investing in AI literacy for HR leaders and people managers, recognizing that governance depends on understanding, not just policy. Cross-functional AI councils that bring together HR, legal, IT and ethics stakeholders are becoming more common as organizations seek to manage risk across their operations.
AI Regulation Moves From Theory to Practice
All in all, regulation is pushing companies toward more disciplined, transparent and sustainable AI adoption. What makes 2026 an inflection point is not just the volume of new rules but how quickly they are evolving. Regulation is here to stay.
AI hype is giving way to scrutiny and accountability. For HR leaders, that means thinking more strategically so that AI’s use aligns with business goals as well as the legal and human realities of work.
The regulation of AI is moving from theory to practice, forcing employers to deal with how the technology intersects with employment law, data governance and workforce strategy. That task is complicated by a fragmented regulatory landscape, with states and cities often pressing ahead with their own rules even as federal officials signal interest in a broader national framework. For employers, the challenge is no longer whether AI will be regulated, but how to keep pace as the rules continue to evolve.
Editor's Note: Catch up on more takes on AI regulations below:
- Why AI Discrimination Lawsuits Are About to Explode — AI is reshaping hiring — and the courtroom. Job seekers are suing over biased screening tools, and experts say a wave of lawsuits is just beginning.
- When AI Discriminates, Who's to Blame? — Companies increasingly rely on third parties to automate one or more parts of their hiring practice. Who’s accountable when that process discriminates?
- Governing AI Amid Regulatory Uncertainty — The number of AI rules and regulations make it hard to understand what applies to your business. Start by mapping your AI use cases to the rules that apply.