News Analysis

Your AI Notetaker May Already Be Breaking the Law

By David Barry
AI meeting notetakers are standard workplace tools. A new lawsuit suggests they shouldn't be treated that way.

AI meeting notetakers have become as common a part of online meetings as someone saying, "You're on mute." The bot joins the call and a transcript appears. But how many people read the terms?

For employers, it’s a simple decision: productivity up, admin down. But a class action lawsuit filed against Otter.ai is about to complicate that.

The case, Brewer v. Otter.ai, filed Aug. 16, 2025, in the U.S. District Court for the Northern District of California, centers on plaintiff Justin Brewer, a California resident who does not have an Otter account. His February 2025 sales call was recorded, he alleges, because another participant had the tool running.

The complaint alleges violations of the federal Electronic Communications Privacy Act (ECPA), the Computer Fraud and Abuse Act, California's Invasion of Privacy Act and the state's Unfair Competition Law. It has since been consolidated into broader proceedings under the caption In re Otter.AI Privacy Litigation.

No ruling has been issued yet, but employment attorneys say the lawsuit has revealed a compliance gap that spans federal wiretap law, state biometric privacy statutes, GDPR and the incoming EU AI Act. HR teams are not prepared.

At the heart of the complaint is a design choice. Otter's notetaker seeks permission only from the meeting host, and even then, only if the host is not themselves an Otter user. Other participants cannot disable the tool. If the host has integrated their calendar with Otter, the bot joins and begins transcribing without any affirmative consent from anyone in the room. The lawsuit further alleges those recordings were used to train Otter's speech recognition models.

Otter has publicly maintained that its privacy policy discloses AI training to users who explicitly grant permission, and that responsibility for obtaining participant consent rests with the account holder. Courts may find that position difficult to sustain, given that Otter controls the recording infrastructure, benefits from the data and builds the tools that make consent bypass possible.

One AI Notetaker, a Dozen Consent Laws

"Even without a ruling, the concern is clear," said Steve Styczynski, VP of AI applications for North America at AudioCodes, an enterprise voice AI company that sells meeting intelligence tools. "In some cases, recordings are being stored externally or used beyond their original purpose,” he said. “The issue is not just the technology itself, but the gap between how it is being used and how it is being governed."

The consent problem is more complicated than most employers realize. Federal wiretap law and many state counterparts follow a one-party consent rule, but about a dozen states require all participants to consent to being recorded. In a virtual meeting, participants dial in from anywhere, and nobody is tracking where they are.

"Companies are not tracking participant location in real time or tailoring consent notices accordingly,” said Matthew Marks, partner at Ricotta & Marks, who advises employers on workplace liability.

The generic "this meeting may be recorded" disclaimer that most organizations rely on may not satisfy stricter state requirements, and employees need specific training to understand that recording a meeting may trigger legal obligations across multiple states.

"If you have a hiring call with a candidate in Illinois, your HR manager in California, and a hiring manager in Florida, you could be dealing with three different legal frameworks at the same time,” said Michael Goldfarb, founder and lead attorney at Guardian HR, which helps employers navigate employment law compliance. Those frameworks overlap in ways that compound exposure, and vendor terms of service frequently shift liability back onto the employer.

Employers deploying these tools tend to assume the vendor carries the risk; often it does not, said Anatoly Kvitnitsky, CEO and founder of AI or Not, which develops AI detection technology for enterprise environments.

When the Transcript Is Wrong

Discrimination risk is less discussed but potentially more damaging. AI transcription systems sometimes misidentify accents and speech impediments, producing transcripts that misrepresent what a speaker said. If those transcripts feed into hiring screenings, performance evaluations or disciplinary proceedings, legal exposure shifts from privacy to discrimination.

"Under Title VII and other employment discrimination statutes, you don't need discriminatory intent," Goldfarb said. "You only need discriminatory outcome. If your AI tool is systematically garbling the words of candidates with certain accents and that's in turn affecting their scores, you have a disparate impact problem."

Biometric privacy law adds another layer. Illinois' Biometric Information Privacy Act (BIPA) requires informed written consent before biometric identifiers are collected, and AI notetaking tools that use speaker identification are almost certainly capturing voiceprints. "BIPA has produced some of the largest class action settlements in employment law," Goldfarb said. "We're talking hundreds of millions of dollars." Exposure exists regardless of whether the tool was selected by IT, approved by HR or downloaded by the employee.

"You have to treat AI transcripts as unverified drafts that require human review,” Marks said. “Never rely on them as the sole basis for employment decisions." Few companies have implemented this as a compliance requirement, he said.

Laws Already on the Books

State-specific AI laws already in force are widely being ignored. New York’s Local Law 144 mandates bias audits before employers use automated tools in hiring, plus notice to candidates. Illinois has its own Artificial Intelligence Video Interview Act. California continues to expand its AI accountability framework.

The consequences are heavy. The federal ECPA permits statutory damages of up to $10,000 per violation. BIPA settlements have reached into the hundreds of millions; Clearview AI's BIPA settlement, approved in 2025, was valued at $51.75 million.

"Most aren't compliant yet, and many don't even know that these requirements exist,” Marks said. The scope of these laws is broad enough to capture AI notetakers that analyze voice data or feed indirectly into employment decisions, which means tools most companies regard as purely administrative may already be regulated.

Who Owns the Notetaker Decision?

Banning AI notetakers isn't realistic. "The more effective approach is to provide enterprise-approved solutions that operate within the organization's governance framework, where data processing, storage and usage remain fully controlled,” Styczynski said.

That means vetting the tool, disabling high-risk features such as voice recognition that risk biometric identification, setting short data retention windows and writing a policy defining when and where AI notetakers are permitted. Participants should also have a real-time mechanism to opt out — not a support ticket, but a button, Kvitnitsky said.


There’s a bigger risk, Kvitnitsky warned. "Deepfakes are now being used during live video interviews," he said. "Workers onboarded remotely may be using an AI-generated identity document."

Tools transcribing your hiring interviews are running in an environment where the candidate on the other end of the call may not be who they say they are.

AI notetakers are not a single-team decision anymore, Styczynski said. "IT may manage the platforms, legal defines the compliance boundaries and HR is often closest to how the tools impact employees. Organizations that manage this best are the ones treating it as a shared responsibility with clear governance, rather than a siloed decision."

Marks identifies the lack of that coordination as the most common problem he encounters. HR leadership needs to own the policy and compliance layer, with legal sign-off, and someone needs to be accountable when something goes wrong, Goldfarb said.

While the Otter.ai case is still pending, it has forced a conversation the digital workplace has so far avoided. AI notetakers are not passive utilities; they are data collection systems running inside employment relationships, across jurisdictions and in an evolving legal environment.

Employers who treat them as such are in a different legal position than those who look the other way and tell themselves the disclaimer is enough.


About the Author
David Barry

David is a Europe-based journalist with 35 years of experience, the last 15 of which he has spent following the development of workplace technologies, from the early days of document management, enterprise content management and content services. With the rise of remote and hybrid work models, he now covers the evolution of technologies that enable collaboration, communication and work, and has recently spent a great deal of time exploring AI, generative AI and artificial general intelligence.

Main image: lucas alexander | unsplash