Generative artificial intelligence may be in its infancy, but many employees are already using it as an aid at work. The emerging technology is also creating several issues in the workplace, from intellectual property (IP) and copyright to data privacy, bias and accuracy. These layered issues stem from both what employees feed into generative AI tools and what the tools produce.
A group of AI and HR execs shared details on each of the key issues with Reworked:
1. Disclosing Intellectual Property
Amy Casciotti, VP of HR at TechSmith, a maker of video and photo software, said when employees use generative AI technology, there must be an added layer of scrutiny due to “legal issues related to confidentiality and IP ownership rights still being figured out today.”
For instance, Casciotti said, if an employee “submits proprietary code to a generative AI application, who owns the rights to the additional content produced from that inquiry?”
Employees working with public generative AI tools, such as ChatGPT, “should be careful about releasing any IP to the open ether,” agreed Samta Kapoor, leader of AI energy and trusted AI at EY Americas.
Unless the generative AI instance is secure, IP-related inputs from an employee are “used to train models and can be made publicly available,” Kapoor said. As for the output in such cases, she added, there’s “no clarity on who owns the content that is created.”
Matt Casey, data science content lead at AI development firm Snorkel AI, added that “every time” an employee uses the public generative AI ChatGPT, the tool “learns from that interaction.” As a result, if they use the AI tool to “work directly on sensitive internal documents,” it will “internalize that information,” Casey said.
“The allure of generative AI is so strong, and the barrier to entry so low that many employees are unwittingly entering sensitive or proprietary information,” said Chris Hetner, chair of the Customer Security Advisory Council at cloud data management company Panzura.
“This is creating a raft of vulnerabilities that can lead to unauthorized access or the unintentional disclosure of confidential business information,” Hetner said. “There’s a very real possibility that intellectual property could be leaked.”
2. Exposing Private Data and Non-Compliance
TechSmith’s Casciotti said that because of employee and customer privacy concerns, there must be an extra level of scrutiny on generative AI.
“Generative AI is an emerging industry with lots of potential, but it can pose serious vulnerabilities and risks if left unchecked,” she said.
Hetner agreed: “As the corporate adoption of generative AI increases, it raises crucial questions about security, privacy, data handling and compliance,” he said. For example, employees using free generative AI tools can expose personally identifiable information (PII), leading to material compliance and financial exposures.
He believes companies should be implementing “data classification efforts to protect data from unintended breaches” and help manage the “tradeoffs between the potential value and inherent risks” of generative AI.
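Hetner didn’t point to specific tooling, but one lightweight version of that classification idea is to scrub obvious PII from prompts before they leave the company. A minimal sketch in Python, assuming simple regex-based detection (production data-classification pipelines use trained detectors and far broader pattern sets):

```python
import re

# Hypothetical, minimal redaction pass: real data-classification
# pipelines use trained detectors, not just regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace likely PII with placeholder tags before the prompt
    is sent to any external generative AI tool."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Follow up with jane.doe@acme.com at 555-867-5309."))
# -> "Follow up with [EMAIL] at [PHONE]."
```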
In his view, regulatory oversight of what he refers to as “next-generation” AI, such as generative AI, is “inevitable,” and organizations need to “start considering how it might change the rules around AI adoption before they become too dependent on it.”
Abhishek Shah, founder of Testlify, an AI talent assessment company, agreed that the use of generative AI in the workplace may fall under various regulations, depending on the industry and geographic location.
“Companies need to ensure compliance with relevant laws, such as GDPR, to protect both their employees' and customers' rights,” Shah said.
To secure corporate data tied to generative AI, Casey said the safest avenue for a company may be to build its own internal large language model (LLM). “This frees the company of additional data leakage opportunities and allows it to control the model's performance, instead of being at the whim of what an outside company wants to do with its core model,” Casey said, though he conceded that building and hosting an internal LLM can be “a lot of work.”
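Casey didn’t name a particular stack. As a rough illustration of what “internal” means in practice, here is a minimal sketch using the open-source Hugging Face transformers library to run an open-weights model on company hardware, so prompts never leave the company’s infrastructure; the specific model is an arbitrary choice, not a recommendation:

```python
# A rough sketch of in-house text generation: the model weights are
# downloaded once and run locally, so prompts never leave company
# infrastructure. The model named here is illustrative only.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # any open-weights model
    device_map="auto",  # use a local GPU if one is available
)

result = generator(
    "Summarize the attached internal memo in three bullet points:",
    max_new_tokens=200,
)
print(result[0]["generated_text"])
```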
Related Article: Generative AI in the Workplace Is Inevitable. Planning for It Should Be Too
3. Missing Built-In Bias
Justyna Dzikowska, head of marketing at Brand24, a brand monitoring company, said because generative AI models are trained on data, “if that data contains biases, the AI will inevitably learn and reproduce those biases.”
For example, Dzikowska’s company used a generative AI tool to help analyze social media sentiment about its brand. “We found that the AI was disproportionately categorizing neutral or even positive comments from certain demographics as negative,” Dzikowska said. “This was a clear indication of bias in the AI's training data.”
Companies using AI need to consider the source of those biases to try to catch them, she said, though “sometimes, this takes even longer than just creating something without AI.”
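Brand24 hasn’t published its methodology, but the pattern Dzikowska describes can be surfaced with a simple audit that compares the AI’s negative-label rate across demographic segments. A hedged sketch, assuming sentiment labels and segment tags are already collected (the data here is invented for illustration):

```python
from collections import Counter

# Hypothetical audit data: (demographic segment, AI-assigned label).
# In practice these would come from the tool's output plus metadata.
labels = [
    ("group_a", "positive"), ("group_a", "neutral"), ("group_a", "negative"),
    ("group_b", "negative"), ("group_b", "negative"), ("group_b", "neutral"),
]

totals, negatives = Counter(), Counter()
for segment, label in labels:
    totals[segment] += 1
    negatives[segment] += label == "negative"

for segment in sorted(totals):
    rate = negatives[segment] / totals[segment]
    print(f"{segment}: {rate:.0%} of comments labeled negative")
# group_a: 33%, group_b: 67%; a gap this large between otherwise
# similar segments is the kind of signal worth investigating.
```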
When training data in generative AI models is biased in any way, it can lead to the perpetuation of stereotypes or unfair representations of certain groups of people, including those with disabilities, said Seth Besse, CEO of Undivided, a support platform for parents of children with disabilities.
“This can inadvertently create content that may not be inclusive or sensitive to the diverse experiences of users,” Besse said. “It's best to continue testing and monitoring its output and offering feedback on how to improve it.”
Shah said when companies deploy a generative AI model with “inherent biases in the data,” they can inadvertently reinforce biased decision-making or content generation, leading to potential discrimination in hiring, communication or customer interactions.
Vikas Kaushik, CEO of TechAhead, a mobile app development firm, agreed that a biased generative AI model can unintentionally produce material or make judgments that reinforce pre-existing biases, such as discriminatory hiring. “This might result in the unfair treatment of some groups, which would undermine organizational efforts to promote diversity, equity and inclusion,” he said.
Related Article: Generative AI Writing Job Descriptions: Adult Supervision Required
4. Opening up Copyright Issues
Because generative AI tools are trained on vast amounts of internet data that include copyrighted material, companies that equip employees with these tools can face copyright infringement issues.
For example, an AI-generated article can accidentally plagiarize existing articles if the AI is not sufficiently trained to recognize plagiarism, said Trevor Bogan, regional director of the Americas at Top Employers Institute.
“Companies that use generative AI must ensure their AI is trained to follow copyright laws to avoid legal issues,” he said.
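Bogan didn’t describe a mechanism, and training a model to “recognize plagiarism” is its own research problem, but one cheap pre-publication screen is to measure verbatim n-gram overlap between an AI draft and known source texts. A minimal sketch; the five-word window and the 10% threshold are arbitrary assumptions:

```python
def ngrams(text: str, n: int = 5) -> set:
    """All n-word windows in the text, lowercased."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap(draft: str, source: str, n: int = 5) -> float:
    """Fraction of the draft's n-grams that appear verbatim in the source.
    A high value suggests the draft should be reviewed before publishing."""
    draft_grams = ngrams(draft, n)
    if not draft_grams:
        return 0.0
    return len(draft_grams & ngrams(source, n)) / len(draft_grams)

# Usage with invented example text:
ai_draft = "Generative AI can reshape how companies create marketing copy."
published = "Experts say generative AI can reshape how companies create content."
score = overlap(ai_draft, published)
if score > 0.1:
    print(f"Review needed: {score:.0%} verbatim 5-gram overlap")
```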
Eldad Postan-Koren, co-founder and CEO of WINN.AI, said generative AI tools often learn from resources that are not approved for commercial use, creating what he says are serious copyright infringement risks for organizations.
“The data utilized might include photos or other copyrighted content that the AI model learned from without obtaining the necessary permissions or licenses from the copyright holders,” Postan-Koren said.
Yet, employees everywhere are using generative AI models to produce content — including text, images and music — that “closely resembles existing copyrighted material,” Shah said. “If not appropriately supervised, this could lead to inadvertent copyright violations, resulting in legal disputes and financial penalties for the company.”
5. Accepting Inaccurate Outputs
Mark Berry, a senior HR specialist at HR services firm Insperity, said that while generative AI has the “capability of generating written content with the appearance of accuracy, employers need to be aware AI does not necessarily reflect reality.”
For instance, Berry said the AI tool can pull information from outdated sources, creating compliance and legal concerns, as well as hallucinate, a term for information the AI makes up but presents as fact, such as citing sources that don’t exist but seem credible at first glance.
“If no review process takes place, businesses may discover the errors too late after submitting the work to the client or publishing it,” Berry said.
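Berry’s point about invented sources suggests one concrete review step: before anything is published, check that every source the AI cites actually resolves. A minimal sketch, assuming the citations include URLs; note that a URL that exists still may not support the claim attached to it:

```python
import requests

def verify_urls(urls: list[str]) -> dict[str, bool]:
    """Check whether each cited URL actually resolves.
    A 404 or connection error is a strong hallucination signal."""
    results = {}
    for url in urls:
        try:
            resp = requests.head(url, allow_redirects=True, timeout=5)
            results[url] = resp.status_code < 400
        except requests.RequestException:
            results[url] = False
    return results

# Usage with placeholder URLs: flag citations the AI may have invented.
cited = ["https://example.com/real-report", "https://example.com/made-up"]
for url, ok in verify_urls(cited).items():
    if not ok:
        print(f"Possible hallucinated source: {url}")
```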
Employees at customer feedback platform provider Survicate are using generative AI to “transform the way we collect and analyze customer feedback,” said CEO Kamil Rejent. And one significant challenge they’ve encountered in the process, he said, is ensuring the accuracy of the AI's output.
For instance, in one of the company’s customer feedback analysis projects, Rejent said the generative AI misinterpreted some of the feedback due to nuances in language and context that it couldn't fully grasp. “This led to a skewed analysis, which could have resulted in misguided business decisions if not caught in time,” he said. “It's crucial to have human oversight to ensure its accuracy.”
David Janovic, founder and CEO of RJ Living, agreed that when employees turn to generative AI, accuracy can be a major issue, from retrieving out-of-date information to hallucinations, the tendency to embellish or simply make things up.
“To ensure that the materials produced by generative AI are factually correct, employees need to research and verify all facts within them,” Janovic said. “This can end up being a time-consuming process and takes away from some of the efficiency of using generative AI.”