Are You Giving Employees Guidelines on Generative AI Use? You Should Be
Since ChatGPT’s launch in November 2022, conversation about the future of the workplace has left little airtime for anything but the astonishing development of AI language tools. It’s clear that large language model (LLM) tools like OpenAI’s GPT-4 and Google’s Bard will change work as we know it. But no one can quite say how just yet. And given the rapid evolution of the technology, the answer is likely to remain a moving target.
"This is moving very, very fast," said Bruce Schneier, a prominent voice on security in technology and a fellow and lecturer at the Harvard Kennedy School. “Here’s my advice: Any advice you receive, assume it will be obsolete in two weeks.”
But concerns about how these tools could affect a company’s security and legal risk are here now. Business leaders can’t afford to wait and see before creating a plan to ensure these tools don’t compromise proprietary data or other sensitive information.
Educating Employees on Privacy Implications
Companies such as JPMorgan, Amazon and Accenture made headlines in the wake of ChatGPT’s release when they issued instructions or restrictions to employees on its use. While that might have sounded like a response to a specific security threat, in most cases the restrictions were part of standard protocols governing third-party apps or amounted to common-sense guidance, such as prohibitions on uploading customer data to any LLM platform.
The most prominent security risk at this point isn’t the AI tools themselves but the potential for misuse by employees. The call, so to speak, is coming from inside the house.
“The biggest risk currently is privacy risk,” said Sergey Shykevich, threat intelligence group manager at Check Point Research. “It’s not clear to me whether and how exactly OpenAI and other companies are using the data we input, how they store the data or how they might access it.”
Shykevich isn’t alone in his concern. Italy’s data protection authority issued what was effectively a country-wide ban on ChatGPT on March 31, citing concerns over how its developer, OpenAI, processes personal data in potential violation of the GDPR, as well as the platform’s lack of restrictions for minors.
The most immediate danger, said Shykevich, is employees uploading proprietary or sensitive data to an LLM platform. For example, a developer might upload source code to an LLM to find a vulnerability or syntax problem, or to improve the code’s quality.
If workers are doing things like this, he continued, “The big question is whether it’s possible that when my competitor asks for something similar, he will get my piece of code, especially if it’s something very specific. If it’s something specific, the danger is bigger.”
Another example is an employee in a customer support role uploading customer numbers, names, business details or other personal information as part of a data set without first anonymizing it.
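For teams that want to make that kind of anonymization concrete, a minimal sketch along these lines can help. It assumes a simple support-ticket record and a hypothetical list of sensitive fields; a real policy would be defined with legal and security teams, and production redaction would likely rely on dedicated PII-detection tooling rather than a hand-rolled script.

```python
import hashlib
import re

# Hypothetical list of fields that should never leave the company in an LLM prompt.
SENSITIVE_FIELDS = {"customer_id", "name", "email", "phone"}

# Rough pattern for email addresses that appear inside free-text fields.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def pseudonymize(record: dict) -> dict:
    """Replace sensitive values with stable, non-reversible placeholders."""
    cleaned = {}
    for key, value in record.items():
        if key in SENSITIVE_FIELDS:
            # Hash the value so related records stay linkable without exposing it.
            digest = hashlib.sha256(str(value).encode()).hexdigest()[:8]
            cleaned[key] = f"<{key}:{digest}>"
        else:
            # Scrub email addresses embedded in free text.
            cleaned[key] = EMAIL_RE.sub("<email>", str(value))
    return cleaned

record = {
    "customer_id": "C-10293",
    "name": "Jane Doe",
    "email": "jane@example.com",
    "ticket_text": "Order arrived damaged, please reply to jane@example.com.",
}

# Only the pseudonymized record goes into the prompt text.
prompt = f"Summarize this support ticket: {pseudonymize(record)}"
print(prompt)
```

Hashing identifiers rather than deleting them keeps related records distinguishable for follow-up while ensuring the raw values never reach the LLM.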
Shykevich emphasized that employees who know not to upload proprietary information to an open-source sharing site or personal data to social media may not have the same awareness about LLMs. “They might not realize that ChatGPT is just part of the internet,” he said. “They may think of it as a platform that’s somehow different.”
The most effective precaution against the privacy threat is for companies to institute robust employee training on LLMs. That training must emphasize that anything entered into an LLM interface should be considered publicly available. Guidance should be clear and specific, listing the types of data employees can and can’t include in LLM queries.
‘Treat It Like an Intern’
Many companies are putting LLMs to work for marketing purposes, using ChatGPT, GPT-4 or Bard to help quickly produce outlines, blog posts, social media language or other resources. The risks here have less to do with privacy and more to do with accuracy. As Schneier puts it, “The problem with ChatGPT is it randomly lies and doesn’t realize it.”
This means that companies that rely on LLMs need thorough checks in place to ensure anything produced by these tools is correct and properly sourced.
“Treat the large language model like an intern,” said Schneier. “Don’t believe anything they say. Verify everything before you use it.”
LLMs might also produce copy that’s materially similar to existing content, which is the primary potential liability Schneier sees in using these tools to produce marketing material.
“The only one I could think of is plagiarism risk,” he said. “And honestly, your intern has the same problem.”
At this stage, having humans fact- and plagiarism-check anything produced by LLMs is likely sufficient to keep risk low. But as Schneier emphasized, no one knows what’s coming next. He pointed to the coming Microsoft AI integrations, which will generate analyses and outputs based on a company’s proprietary data and provide a source for every assertion they make. “If it does half of what they say it does, it’s going to change the world,” he said. “It’s a Star Trek-level computer interface.”
The risks will surely evolve as fast as the tools themselves. Accordingly, Schneier added a final note of caution: everything in this article will be obsolete in a month.