Ready to Roll out Generative AI at Work? Use These Tips to Reduce Risk
The buzz around generative AI right now is inescapable, and most executives will find it similarly impossible to avoid ideas from employees about how they might use it in the workplace. After Microsoft’s $10 billion investment in OpenAI, the company is moving quickly from its ChatGPT Bing integration to incorporate GPT-4 into its Office suite and business applications. Google also recently announced plans to bring generative AI into its productivity suite, and a number of productivity app providers have already announced new generative AI capabilities.
But the buzz isn’t just about big tech vendor announcements: Generative AI tools are far more accessible and applicable to average users than the metaverse, cryptocurrency, NFTs or other recently hyped technologies, so it’s more likely they’re going to be around for a while. What’s more, many of these users are excited by what they’ve seen so far, whether they’re thinking in terms of saving time, sharing knowledge, boosting creativity or just having fun.
The stakes here are substantial. The already incredible rates of adoption and engagement will continue to grow as these models become more powerful and as more applications emerge. We’re likely to see notable changes in the way we interact with each other and with institutions because of generative AI. The key question is: How much effort are we willing to put in to make these changes responsibly? Can generative AI systems in fact be deployed responsibly in a workplace environment? Is it possible to mitigate the risks and harms enough to realize the productive benefits these tools promise?
As we often see in the tech industry, a host of hackers, journalists and skeptics are working to show how these generative AI tools can be manipulated and misused. These demonstrations have yielded valid concerns about inaccurate and inappropriate content, large-scale disinformation, intellectual property violations, privacy and security breaches and more. The rise of generative AI also opens broader questions about the role of such systems in the workplace and in society. Will generative AI drastically change the nature of certain jobs or indeed replace human workers in some fields altogether? While most experts agree anything like machine-based sentience is not a near-term possibility — if it’s even a possibility at all — the fact that the industry is pushing so earnestly toward software that can believably mimic our art and language warrants thoughtful consideration.
Because AI implementations are so different from one organization to the next, the best way to consider responsible implementation is through a detailed impact assessment. First, identify the likely implications of the particular use case in question. Then, decide whether the organization can implement proper controls so the system operates responsibly. Many of these controls will fall into familiar responsible tech categories, so you may be able to capitalize on current investments. Here are seven categories of controls that will likely be necessary for generative AI systems.
Related Article: Beyond ChatGPT: Generative AI, the New Workplace Productivity Tool?
Ongoing, Risk-Based Oversight
Even though they’ve continued to improve, generative AI systems still regularly produce unreliable, biased or otherwise inappropriate content. In some cases, such as generating first drafts of internal emails, this type of content is easy to catch and fix. In other cases, such as a real-time chatbot interacting directly with customers, problematic content can cause significant harm before anyone has a chance to intervene.
To reduce this risk, you may need to monitor AI output and remove or filter out unwanted language and/or images before they’re published, especially for interactive or decision-support use cases. In cases where monitoring might be intrusive, you may want to give users an easy way to remove content themselves and report any concerns.
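One way to picture this kind of oversight is a pre-publication review gate that holds flagged output for a human rather than publishing it automatically. The sketch below is purely illustrative: the blocklist terms and the `review_output` function are hypothetical, and a production system would typically call a dedicated moderation API and route flagged content to a reviewer queue.

```python
# Illustrative sketch of a pre-publication filter for generative AI output.
# The blocklist and redaction policy here are hypothetical placeholders;
# real deployments would use a moderation service plus human review.

BLOCKED_TERMS = {"confidential", "internal only"}  # example terms, not exhaustive


def review_output(text: str, blocked_terms=BLOCKED_TERMS):
    """Return (approved_text, flags).

    If any blocked term appears, the text is held back (None) and the
    matching terms are returned so a human reviewer can assess them.
    """
    flags = [term for term in blocked_terms if term.lower() in text.lower()]
    if flags:
        # Hold flagged content for human review instead of publishing it.
        return None, flags
    return text, []
```

A team could call `review_output(draft)` before a chatbot reply or generated document leaves the system, publishing only when the flag list comes back empty.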
Respect for Intellectual Property
When in doubt, err on the side of showing respect for original content creators and avoid duplicating others’ work wherever possible. One way to thread this needle may be to guide employees to use generative AI output only as inspiration and never as a final product.
Information Security and Privacy
Some generative AI systems offer much greater control over data than others. Microsoft’s Azure OpenAI Service, for example, offers more advanced data protection capabilities than ChatGPT.
The best place to start here is to make sure employees follow existing corporate security and privacy policies, although you may also consider restricting or prohibiting certain tools for corporate use. If employees are using these tools, they should weigh the risks and implications of including sensitive personal or corporate data in their prompts. Beyond that, it’s best to review generated output for sensitive data and to conduct a security assessment for any integration between generative AI and corporate systems, data or assets.
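A lightweight way to support that policy is a pre-flight scan that checks prompts for obviously sensitive patterns before they are sent to an external AI service. The patterns below (email addresses and US SSN-style numbers) and the function names are illustrative assumptions; an organization would normally rely on an approved data loss prevention tool with a far broader pattern set.

```python
import re

# Hypothetical pre-flight scan for sensitive data in prompts bound for an
# external generative AI service. The two patterns below are simple examples;
# real deployments would use an organization-approved DLP tool.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}


def scan_prompt(prompt: str):
    """Return the names of sensitive-data patterns detected in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]


def safe_to_send(prompt: str) -> bool:
    """True only when no sensitive pattern was detected."""
    return not scan_prompt(prompt)
```

Gating outbound prompts through `safe_to_send` gives employees immediate feedback when a draft prompt contains data that policy says should never leave the organization.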
Transparency and Disclosure
There’s a good chance some people will be upset to learn the “person” whose content they’ve been consuming, or who they’ve been interacting with, is really a machine. Make clear to users, customers and others when the content they’re interacting with was substantially AI-generated. Disclosures should be especially clear in cases where affected stakeholders would reasonably assume or expect they’re viewing human-created content or interacting directly with a human, as well as in situations that normally call for human-level empathy.
Related Article: Generative AI Results Should Come with a Warning Label
Sustainability and Responsible Sourcing
You should also consider the known, often harmful trade-offs related to the training and operation of these systems, such as environmental costs and human labor considerations. Organizations that value environmental responsibility should consider tracking and reporting on the carbon footprint of generative AI systems they have in place and discuss strategies to limit or offset related energy use. They should also monitor coverage about the labor that went into training these systems and talk with their providers about plans to address these practices going forward.
Employee and Customer Experience
Executives should have very specific conversations about how generative AI has the potential to substantially enhance or detract from employee and customer experiences. Your approach could largely determine whether AI makes users’ and customers’ lives better or worse. You might start by creating guidelines for generative AI that align with your corporate purpose and values. You may also wish to talk about how to use generative AI systems to support employees’ creativity and work quality, and share lessons about how employees can use these models to strengthen their sense of belonging and their contribution to the workplace.
Related Article: ChatGPT Opens the Floodgates for AI in HR
Societal Impact
Generative AI tools, applied broadly, are likely to affect not only organizations but society as well. We just don’t know yet whether that impact will be mostly positive or mostly negative. To do your part, help identify and mitigate widespread and long-term societal risks of generative AI systems, such as weaponized deepfakes and misinformation. This means helping monitor for potentially harmful behavior among your employees and also helping them spot and address misinformation. In addition, it’s worth discussing how your organization can use these systems to support social goods like education, financial opportunity, healthcare and mental health, rather than simply as tools that serve to increase digital noise.