Ready to Roll Out Generative AI at Work? Use These Tips to Reduce Risk
The buzz around generative AI right now is inescapable, and most executives will find it similarly impossible to avoid ideas from employees about how they might use it in the workplace. After Microsoft’s $10 billion investment in OpenAI, the company is moving quickly from its ChatGPT-powered Bing integration to incorporating GPT-4 into its Office suite and business applications. Google also recently announced plans to bring generative AI into its productivity suite, and a number of productivity app providers have already announced new generative AI capabilities.
But the buzz isn’t just about big tech vendor announcements: Generative AI tools are far more accessible and applicable to average users than the metaverse, cryptocurrency, NFTs or other recently hyped technologies, so it’s more likely they’re going to be around for a while. What’s more, many of these users are excited by what they’ve seen so far, whether they’re thinking in terms of saving time, sharing knowledge, boosting creativity or just having fun.
The stakes here are substantial. The already incredible rates of adoption and engagement will continue to grow as these models become more powerful and as more applications emerge. We’re likely to see notable changes in the way we interact with each other and with institutions because of generative AI. The key question is: How much effort are we willing to put in to make these changes responsibly? Can generative AI systems in fact be deployed responsibly in a workplace environment? Is it possible to mitigate the risks and harms enough to realize the productive benefits these tools promise?
As we often see in the tech industry, a host of hackers, journalists and skeptics are working to show how these generative AI tools can be manipulated and misused. These demonstrations have yielded valid concerns about inaccurate and inappropriate content, large-scale disinformation, intellectual property violations, privacy and security breaches and more. The rise of generative AI also opens broader questions about the role of such systems in the workplace and in society. Will generative AI drastically change the nature of certain jobs or indeed replace human workers in some fields altogether? While most experts agree anything like machine-based sentience is not a near-term possibility — if it’s even a possibility at all — the fact that the industry is pushing so earnestly toward software that can believably mimic our art and language warrants thoughtful consideration.
Because AI implementations are so different from one organization to the next, the best way to consider responsible implementation is through a detailed impact assessment. First, identify the likely implications of the particular use case in question. Then, decide whether the organization can implement proper controls so the system operates responsibly. Many of these controls will fall into familiar responsible tech categories, so you may be able to capitalize on current investments. Here are seven categories of controls that will likely be necessary for generative AI systems.
Ongoing, Risk-Based Oversight
Even though they’ve continued to improve, generative AI systems still regularly produce unreliable, biased or otherwise inappropriate content. In some cases, such as generating first drafts of internal emails, flawed output is easy to catch and fix before it causes harm. In other cases, such as a real-time, customer-facing chatbot, problematic content can reach people immediately and do significant damage.
To reduce this risk, you may need to monitor AI output and remove or filter out unwanted language and/or images before they’re published, especially for interactive or decision-support use cases. In cases where monitoring might be intrusive, you may want to give users an easy way to remove content themselves and report any concerns.
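To make that kind of control concrete, here is a minimal sketch in Python of a pre-publication review step. It assumes a simple in-house blocklist; the patterns, function name and routing decision are illustrative placeholders, and a real deployment would more likely rely on a vendor moderation service or a maintained policy list.

```python
import re

# Illustrative, hard-coded patterns; a production filter would more likely call a
# vendor moderation API or use a maintained policy list (assumption, not a spec).
BLOCKED_PATTERNS = [
    re.compile(r"\bconfidential\b", re.IGNORECASE),
    re.compile(r"\binternal use only\b", re.IGNORECASE),
]


def review_generated_text(text: str) -> bool:
    """Return True if AI-generated text passes the basic screen,
    False if it should be held for human review before publication."""
    return not any(pattern.search(text) for pattern in BLOCKED_PATTERNS)


draft = "Here is a first draft of the customer announcement. Internal use only."
if review_generated_text(draft):
    print("OK to publish after normal editing.")
else:
    print("Hold for human review.")
```

The specific patterns matter less than the workflow: flagged output goes to a person before it ever reaches an audience, and users retain a way to report anything the filter misses.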
Respect for Intellectual Property
The terms of use that cover ownership of generative AI inputs and outputs are different for each provider. Government authorities will also continue to weigh in as these technologies evolve — for example, the US Copyright Office recently issued guidance on the copyrightability of AI-generated content.
When in doubt, err on the side of showing respect for original content creators and avoid duplicating others’ work wherever possible. One way to thread this needle may be to guide employees to use generative AI output only as inspiration and never as a final product.
Information Security and Privacy
Some generative AI systems offer much greater control over data than others. Microsoft’s Azure OpenAI Service, for example, offers more advanced data protection capabilities than ChatGPT.
The best place to start here is to make sure employees follow existing corporate security and privacy policies, although you may also consider restricting or prohibiting certain tools for corporate use. If employees are using these tools, they should consider the risks and implications of sharing any sensitive personal or corporate data in any prompts. Beyond that, it’s best to review any output generated for sensitive data, and conduct a security assessment for any integration between generative AI and corporate systems, data or assets.
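As a rough illustration of that kind of pre-prompt review, the sketch below scans text for a few common sensitive-data patterns before it is sent to an external generative AI tool. The pattern names and regular expressions are simplified assumptions, not a complete data-loss-prevention policy.

```python
import re

# Minimal, illustrative patterns; a real policy would cover far more categories
# (API keys, customer identifiers, health data, internal project names, etc.).
SENSITIVE_PATTERNS = {
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "US Social Security number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "long digit run (possible card number)": re.compile(r"\b\d{13,16}\b"),
}


def screen_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]


prompt = "Summarize this complaint from jane.doe@example.com about invoice 4417."
findings = screen_prompt(prompt)
if findings:
    print("Do not send as written; prompt appears to contain:", ", ".join(findings))
else:
    print("Prompt passed the basic screen.")
```

A screen like this is a prompt-side complement to the output monitoring described earlier: it catches obvious leaks before data leaves the organization, while human review still covers what pattern matching cannot.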
Transparency
There’s a good chance some people will be upset to learn the “person” whose content they’ve been consuming, or who they’ve been interacting with, is really a machine. Make clear to users, customers and others when the content they’re interacting with was substantially AI-generated. Disclosures should be especially clear in cases where affected stakeholders would reasonably assume or expect they’re viewing human-created content or interacting directly with a human, as well as in situations that normally call for human-level empathy.
Sustainability and Responsible Sourcing
You should also consider the known, often harmful trade-offs related to the training and operation of these systems, such as environmental costs and human labor considerations. Organizations that value environmental responsibility should consider tracking and reporting on the carbon footprint of generative AI systems they have in place and discuss strategies to limit or offset related energy use. They should also monitor coverage about the labor that went into training these systems and talk with their providers about plans to address these practices going forward.
Human Flourishing
Executives should have very specific conversations about how generative AI has the potential to substantially enhance or detract from employee and customer experiences. Your approach could largely determine whether AI makes users’ and customers’ lives better or worse. You might start by creating guidelines for generative AI that align with your corporate purpose and values. You may also wish to talk about how to use generative AI systems to support employees’ creativity and work quality, and share lessons about how employees can use these models to strengthen their sense of belonging and their contribution to the workplace.
Social Benefit
Generative AI tools, applied broadly, are likely to affect not only organizations but society as well. We just don’t know yet whether that impact will be mostly positive or mostly negative. To do your part, help identify and mitigate widespread and long-term societal risks of generative AI systems, such as weaponized deepfakes and misinformation. This means helping monitor for potentially harmful behavior among your employees and also helping them spot and address misinformation. In addition, it’s worth discussing how your organization can use these systems to support social goods like education, financial opportunity, healthcare and mental health, rather than simply as tools that serve to increase digital noise.