Artificial intelligence (AI) is carving out an indispensable role across industries. Among AI's diverse array of capabilities, generative AI (GenAI) is currently in the spotlight, due to its ability to create new content from a minimal “prompt” by a human user. Whether it's generating human-like text, composing music, creating a new logo design or making a short movie scene, generative AI is making headlines.
While all of this is exciting, businesses need to be aware of the potential pitfalls that accompany the integration of generative AI into the workplace, and the issues that may arise in coming years in the absence of sufficient guardrails.
Generative AI and Ethical Concerns
The ethical considerations generative AI raises are among its most significant drawbacks. Because AI algorithms learn from existing data, they can inadvertently perpetuate biases inherent in that data. This “algorithmic bias” can lead to unfair outcomes or decisions when applied to areas like creditworthiness or recruitment. Organizations must ensure they have robust strategies in place to mitigate these biases, which is much easier said than done. Just ask a generative AI tool to show you pictures of a pilot or a nurse, and you’ll see how gender bias is already built into image generation technologies.
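One simple starting point for a mitigation strategy is auditing model outputs for skew. The toy sketch below counts gendered pronouns across a batch of generated descriptions; the sample outputs are hypothetical stand-ins for text a real model might return, and a production audit would need far more sophisticated analysis.

```python
from collections import Counter

# Hypothetical sample outputs standing in for real model responses
# to prompts like "describe a nurse" or "describe a pilot".
SAMPLE_OUTPUTS = [
    "She is a nurse who works the night shift.",
    "The nurse finished her rounds early.",
    "He is a pilot with twenty years of experience.",
    "The pilot checked his instruments before takeoff.",
]

# Map gendered pronouns to a coarse label for counting.
GENDERED = {"he": "male", "him": "male", "his": "male",
            "she": "female", "her": "female", "hers": "female"}

def pronoun_counts(texts):
    """Count gendered pronouns across all texts as a rough skew signal."""
    counts = Counter()
    for text in texts:
        for word in text.lower().replace(".", " ").split():
            if word in GENDERED:
                counts[GENDERED[word]] += 1
    return counts

print(pronoun_counts(SAMPLE_OUTPUTS))
```

A lopsided count for a role-neutral prompt is one cheap, repeatable signal that the underlying model has absorbed a stereotype, though it says nothing about why.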
Moreover, generative AI's ability to create realistic text or imagery raises questions about authenticity and truthfulness. For instance, the proliferation of 'deepfakes' — synthetic images, audio and videos that are indistinguishable from real ones — is a consequence of generative AI. This can lead to potential misuse in the workplace, creating confusion or misleading information.
In the UK, a television personality was recently impersonated by a deepfake promoting a fraudulent investment opportunity. To the knowing eye, the words and lip movements seemed off, but on a small screen those small differences could easily be missed by untrained viewers. The TV personality had not granted permission to use their voice and likeness, which raises further questions about ownership and liability. These are all real-life challenges enterprises need to think about before venturing into wide-scale use of GenAI.
GenAI, Data Privacy and Security
Data privacy is another critical concern with AI, especially when specific outcomes are expected. Generative AI requires massive amounts of data for training, yet for many people that training process is a complete black box. GPT-4, the model behind ChatGPT, was trained on vast quantities of data, including web content, textbooks, software manuals and code in multiple programming languages, but the only assurance enterprises have that the training material was safe and unbiased is the word of OpenAI. This poses risks around data privacy and security if a company plans to publish responses generated by ChatGPT in a public setting or domain without editing by employees.
Using personal or sensitive corporate data in training models without appropriate anonymization can also be problematic. Unbridled use of large language models (LLMs) by employees could inadvertently expose company IP to other users, as was the case with a small number of Samsung employees who leaked internal source code by uploading it to ChatGPT. Unrestricted versions of ChatGPT may use that data in future training, so the Samsung source code could end up in a response to a completely different user's prompt. The misuse of such internal data can lead to severe consequences, including regulatory penalties, reputational damage and a breach of trust among employees and customers. As of May 2023, Samsung has banned the use of third-party generative AI tools in the workplace.
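One lightweight guardrail against this kind of leak is redacting obvious identifiers before any text leaves the organization. The sketch below is a minimal illustration of that idea; the patterns and placeholder labels are assumptions for demonstration, not a complete anonymization solution, and real deployments typically rely on dedicated data-loss-prevention tooling.

```python
import re

# Illustrative patterns for common identifiers; a real guardrail would
# cover many more categories (names, account numbers, addresses, code).
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "[SSN]": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(prompt: str) -> str:
    """Replace each matched identifier with its placeholder label
    before the prompt is sent to an external AI service."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(label, prompt)
    return prompt

print(redact("Contact jane.doe@example.com or 555-867-5309 re: SSN 123-45-6789"))
```

Filtering at the boundary like this reduces, but does not eliminate, the risk that sensitive material ends up in a third party's training data; policy and access controls still matter.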
Dependence and Overreliance
While generative AI can automate many tasks, overreliance on it can lead to skill atrophy among employees. Companies must maintain a balance between AI integration and human skill development, ensuring that employees remain actively engaged and continuously develop their abilities.
If generative AI is used for simpler tasks, how is the workforce of tomorrow going to learn to master complex ones? Software developers are a great example. Many junior developers cut their teeth on simpler tasks, which coding assistants like GitHub Copilot or Amazon CodeWhisperer are now able to complete. Can these tools replace software developers entirely? No. But could they be used for simpler tasks instead of junior developers? Yes, even if not quite yet. If the simple tasks are automated, how will people develop the skills to become senior software developers? And what steps should we be taking now to ensure generative AI doesn’t erode the future of developer talent by eliminating the stepping-stone tasks that start a developer's early career?
Technical Challenges
Lastly, despite its advanced capabilities, generative AI still has its technical limitations. These AI models are often complex, requiring substantial computational resources and specialized expertise to manage. Issues arise with the reliability and predictability of AI-generated content, and troubleshooting these problems can be challenging, even for experts.
Generative AI can also produce outputs that are difficult to interpret or explain, a limitation known as the black box problem. This lack of transparency makes it hard for businesses to understand how the AI is making decisions or predictions, limiting its utility and potentially leading to mistrust among users and, worse still, industry regulators.
Conclusion
These are still the early days for generative AI. And while its use in the workplace can bring numerous benefits, it is not without its pitfalls. From ethical dilemmas and data privacy concerns to regulatory issues and technical challenges, businesses should be aware of these potential drawbacks. AI implementation should be approached thoughtfully, with robust strategies for mitigating risks, promoting transparency and maintaining an appropriate balance between AI and human involvement. By doing so, organizations can leverage the power of generative AI while minimizing its potential downsides, thus driving innovation and growth in a responsible manner. The world has woken up to the potential of generative AI. Now it’s time we understand the risks associated with using this transformational technology.