Get Reworked Podcast: Why Your Workplace Needs a Generative AI Use Policy
Generative AI is the latest shiny toy in the workplace technology toolbox. The difference in this case is the bar to use it is very low. ChatGPT and the like are freely available to anyone with access to a web browser and an internet connection.
In this episode of Get Reworked, employment attorney Peter Rahbar of The Rahbar Group discusses the potential risks of generative AI to the workplace and why organizations need to create guardrails around employee use.
Listen: Get Reworked Full Episode List
"I think it's very important in this moment, where we have a potential transformative technology, for the company to really take the lead on how it's introduced and used in the workplace. So I think an effective policy would not only describe what platforms and technologies are being governed here, but what type of information should and shouldn't be used with these platforms," said Peter.
Highlights of the conversation include:
- The risks — and rewards — generative AI creates for employers and employees alike.
- How generative AI adds to existing employee fears of being replaced.
- Why companies should create policies on generative AI use now, not later.
- Why we need a debate on the use of AI in HR.
- On where we are and where we need to be with AI regulation.
Plus, host Siobhan Fagan talks with Peter about the importance of transparency around AI use, the different effects of internal vs. external use, and whether or not he is indeed Peter Rahbar. Listen in for more.
Have a suggestion, comment or topic for a future episode? Send it to [email protected].
Tune-in Here
Show Notes
- Peter on LinkedIn
- The Rahbar Group
Episode Transcript
Note: This transcript has been edited for space and clarity.
Peter Rahbar: I don't want to be the typical lawyer who shoots down every idea. I think there could be some wonderful uses for AI in the workplace. And certainly, if there are ways to help employees do their jobs more efficiently or better, or from different locations, or that give employees access to more information or resources than they currently have at their disposal to help them do their jobs, I think those things would be welcome.
But if it's being used solely as a tool for replacing employees, I think, obviously, that's not gonna be a fun place to work.
Siobhan Fagan: You just heard from Peter Rahbar. We brought Peter here today to talk about something that a lot of people are excited about, namely generative AI.
And we also brought him here to talk about something that not so many people are excited about. And that is governance.
He is here to make the case about why you should be creating policies for your workplace today, before your employees start using generative AI.
Peter is an employment attorney, a workplace issues expert, and he is the founder of the Rahbar Group. Let's Get Reworked!
Welcome to the podcast, Peter.
Talking Generative AI and Governance With a Real Human
Peter: Thank you. It's great to be here. And I just want to assure you that this is really Peter, a human being, and not an AI bot speaking to you today.
Siobhan: I appreciate that, because we did indeed bring you here to talk about generative AI, and one of the complexities it raises is that people don't know what's real anymore. So I appreciate that we are indeed talking to the real Peter.
I also feel, before I jump in, Peter, that I should acknowledge the day we are recording this, which is April 10, because we will be running this on May 9. And while that isn't too far away, in generative AI years it can feel like years, because of the speed at which all of this is progressing. So I'll just put in that little disclaimer up front.
Your background is that you are a workforce lawyer, you have worked as a practitioner within a media company, and you now have your own practice. So you come at this from the point of view of both employer and employee, is that correct?
Peter: Yes, absolutely. In my practice now I represent a lot of executives, in fact, I would say that's the biggest part of my practice. But I also work with some smaller companies and I used to work with some very large companies. As you mentioned, I was leading a team at a major media company and I worked at an outside law firm for a long time.
So I like to think I see these issues from all angles. And this is one where I would say that both employers and employees really need to tread carefully in the coming days and months and years as we figure out what the right role is for AI in the workplace.
Siobhan: Great. So when you think about this, I mean, obviously, it's really hard at this point to kind of separate the hype from the reality. And again, it is all changing so quickly. So it is very hard to sort of say, oh, this is what the impact is going to be.
But in your opinion, can you see some sort of comparison from previous technological waves that maybe are comparable?
Peter: I would say there have been a lot of moments in the past decade in the workplace where we thought some group of employees was going to be expendable, with various different forms of automation technology. Is there one in particular? No, because I don't think we've ever talked about or had a technology that really compares in its potential capability.
But I can tell you for a fact that I've been told personally, for maybe 10-15 years now, that my job could be replaced by AI. And that hasn't happened yet, so I'm hoping it doesn't for at least another good portion of time.
But I think when new technologies are introduced, whatever they are, there's usually enthusiasm and there's usually fear. That's happened many, many times, and it's happening again.
GPT-4 Passes the Bar, Congrats
Siobhan: So I should point out that GPT-4 did score in the 90th percentile on the bar exam. Is that sort of a grudge match with you right now, Peter?
Peter: You know, when I saw that, I tried to remember what I scored on the bar exam and what percentile I came in. And then I realized all that matters is that you pass. So congrats to GPT for doing so well, but all you need to do is pass, so maybe it was overprepared for the exam.
Preparing Your Workplace for Generative AI
Siobhan: So I want to jump into some of the implications of generative AI in the workplace. And again, we don't know all the ways that it's going to play out. But I think that we can all agree that there are certain best practices that should be in place before a company introduces it into its organization.
So before we go into those best practices, let's talk about why they would want to introduce them. What are some of the legal risks that an organization could potentially open itself up to by using these tools?
Peter: There are many, and I think the big headline is that there's a lot to be concerned about on the employer end. First of all, we have the issue of employees potentially inputting a company's proprietary or confidential information in order to generate results, and then having these platforms either reuse or spit out that information, or someone even attacking them to find out its origin. And there are definitely examples of that happening.
But you also have issues with respect to the copyrights and intellectual property rights of others. As we know, these platforms are basically reusing information that's inputted. And so some of the results, while celebrated and fawned over, are direct uses of, and violations of, other people's copyrights. So organizations need to be concerned about that.
They also need to be concerned about agreements they may have with vendors, how their information is used and reused, and who exactly is doing that. So there are a number of legal levels for a company to be concerned about. And that's why I think, as a starting point, if you're a company right now and you don't have a policy in place regulating use of these platforms, you absolutely should contact your lawyer and get one going.
Siobhan: So you touched on a few. We honestly could probably devote the entire half hour to all of the different implications, all the potential legal problems. That would be a very long and very dull podcast, I think.
Peter: We wouldn't want to do that to anybody.
Siobhan: So if an organization were to want to create a policy, I mean, isn't this just basic, good governance? Are there any specific things that they should include in a policy for this kind of technology that is outside of their normal, good governance?
Peter: Yeah, I think that's a really good framing of it, actually, because so many of the things I would probably say about this are covered by other policies.
But the key here is that you have something new, something very attractive, and something that people are talking about. And employees are going to be inclined to sort of dabble and experiment and see what's going on, and sometimes that could have drastic consequences for a company.
But I think it's very important in this moment, where we have a potential transformative technology, for the company to really take the lead on how it's introduced and used in the workplace. So I think an effective policy would not only describe what platforms and technologies are being governed here, but what type of information should and shouldn't be used with these platforms. What type of approvals are necessary. What purposes they can be used for, you know, is it for internal facing purposes? Is it for client purposes? What type of agreements need to be in place? You know, should the organization have their own agreement in place with a particular platform?
So I think a lot of those questions need to be covered. And I think it would be really smart, as it is in most cases, for a company to designate either one person or a small group of people to answer all questions and be a resource that an employee can turn to immediately if there's a concern or any confusion.
Siobhan: I like that you're distinguishing the use of these applications between internal and external, because internally, the potential liabilities are not as huge as they would be if it's external.
And I'm thinking now, with my editor's hat on, about how if I were to put an article out on the site without anybody looking at it in advance, just writing it and putting it out there, that would just be a bad practice. Whereas I suspect there might be some of that going on with some of the text coming out of these generative AI applications. We're seeing that they're prone to hallucinations, but because the information comes out with such great confidence, people are taking it at face value.
Peter: Yeah, and when you're reading about uses and implications, one of the things that's always at the top is that the output is not always accurate. And sometimes it doesn't make sense. So there's a level of human review that's really important.
And yeah, as you say, if that's an internal document, it's much different than if it's something that you're sending to a client or publishing. But let's also not forget that just because something runs internally doesn't mean you're not potentially violating somebody's IP or other rights.
So there are implications. Are they as great as if it's external, published to the world? Usually not. But there are still implications there.
Siobhan: Are there any organizations that you would point to that you know of offhand, who are actually doing a fairly good job of this so far? Are we all just kind of figuring it out?
Peter: I think we're definitely in the figuring-it-out phase. In fact, in following the stories over the past few weeks, one of the amazing things I'm seeing, and there's a lot of surveying going on on this topic, is that the employee use numbers are going up. I think the last one I saw reported that 40% to 45% of employees are using ChatGPT or similar applications for workplace functions. But only 15% or 16% of employers are implementing policies.
I'm a little bit at a loss for why that is. I have sort of two theories on that. One is, I think a lot of companies are really in a position of policy fatigue. They've just been making what seems like endless policies, especially if you're a New York City employer, with the number of workplace laws that have come into play over the last few years. I think there's some fatigue around that.
But also, I think some employers are just not realistic about the impact on their workplace. They may think, well, I'm a nonprofit, or a company that doesn't really dabble in this technology, so I don't need a policy just yet. Yet their employees are using it every day.
So I think it's important for employers to really, as I said earlier, take the first steps, so they're in control of the narrative on how and when and why it's used in their workplaces.
Siobhan: Okay, Peter, I'm gonna put you on the spot. Do you have a policy for your workplace?
Peter: Do I? Well, I'm a solo practitioner, so I don't have a policy for myself.
Siobhan: Have you tried these tools?
Peter: I have tried them. And I think it's important for every corporate leader to try them and understand what they are, because this is very top of mind for employees, and for my clients as well. So I have tried it, I have seen the output, and it's impressive. I think it's real, and it's gonna stick. It's just a question of how far it's gonna go. I don't know. I think we have a lot more to play out and see where it goes.
But one of the areas, in fact, where it's being used a lot in the workplace is HR. So that's very near and dear to my heart, and also to a lot of the work that you do. And I think it's important to understand the implications there.
Siobhan: Yeah, I want to definitely touch on AI in HR, because that has a longer history that precedes the generative AI questions.
But I did want to just stick with the policies for just one minute because there are some proprietary or company specific applications that are coming out through some of the larger organizations.
So Microsoft, for example, through its partnership with OpenAI, has been introducing a lot of OpenAI-driven tools into the Microsoft 365 suite. So in a case like that, do we know yet if the same policies apply? Or is it a slightly different policy, because this is being trained on internal data?
Peter: Well, I think it's going to depend on what the applications are. And of course, if Microsoft's introducing a new function and there's already a license in place, the company will have to agree to use that particular application and function.
And I think if that's happening, it's in the company's best interest to be very open about it with its employees and explain it and put some parameters around it. And if it's a function that they don't want in their workplace, then they shouldn't accept that as part of their package.
But I think with respect to policies in general, we're really talking about what employees can do on their own, and what direction they can take the usage outside of officially sanctioned applications.
Legal Can of Worms, Using Generative AI in HR
Siobhan: I think the disparity you raised between the number of employees using it and the number of employers with policies might have a lot to do with that. I mean, we've seen shadow IT before, where tools ended up being introduced to the company because employees had sort of adopted them before the company officially sanctioned them. But this one is opening up a whole different can of worms.
I think that's a legal term, right?
Peter: Oh, absolutely. We use it all the time.
Siobhan: Legal can of worms. So, I do want to return to the question of AI and HR and particularly generative AI in HR. And so again, we're still exploring, we're still seeing, but what would you recommend specifically in the HR function, should companies do or not do with these tools?
Peter: Well, I have to be honest, this is the application I like the least, and kind of scares me the most.
It's not just because I have a lot of friends in HR, or because they practice in this area, but it's an area where, I think most people would agree, the human touch is really important.
And also, I think the impact is disproportionately felt at lower levels of the workforce and workplace. So I feel very concerned about it.
And this is an area governments are really focusing in on. In fact, we have a law that's going to come into effect in New York City within the next week, which is really focused on bias in these recruitment tools and the use of AI in the recruitment space.
So I really worry about the use. But this is one of the areas where the use is kind of exploding. And one of the other things we saw, in all the tech layoffs that occurred at the end of last year and the beginning of this year, is that HR was a real target for layoffs. And I think the use of AI is part of the reason why.
Siobhan: When you say that AI was part of the reason why, is that because part of their roles is being automated, or is it something else?
Peter: Yes, exactly. I mean, what we're seeing is the use of applications to screen resumes, to conduct video interviews with candidates, and to provide other assessments in the recruitment process that would traditionally be done by humans. And frankly, I believe should be done by humans.
I mean, is there a very basic level of resume review that can be done by AI? You know, many companies want to move in that direction. But many, many government agencies and municipalities are saying, wait a second, that's the most dangerous application of it.
So I think there's gonna be a pretty robust debate about what the proper role for AI in HR is, based on these very fundamental concepts.
Siobhan: Yeah, it's interesting. There was an article in MIT Technology Review not too long ago by David Rotman, and he had a line in it where he was basically arguing that right now is when we get to decide how AI is used in workplaces, and that it's not a given that it has to automate.
And his quote was, "Companies can decide to use ChatGPT to give workers more abilities, or to simply cut jobs and trim costs." And I thought that was just a very nice, compact way of seeing it. Because I think so often we are pushed towards automatically seeing this as automation, just to sort of cut costs. But in theory, a conscious effort could be made to use it to help people, as opposed to supplant them.
Peter: Yeah, I think that's a really important concept that he mentioned there. And that's a concern. I mean, anyone who knows me and has worked with me knows that I talk a lot about corporate responsibility, and you know, responsibilities that companies have that are broader than their own profit interests.
And here, I'm certainly not on the side of the ledger that says, let's automate everything we can. There are important roles for humans to play, and I think recruiting workers is an essential one. As I mentioned, I think the inclination will be to use AI for entry-level roles, or perhaps more manual roles, because there's maybe a belief that there's not a lot that differentiates employees. But you're not going to do an executive search with AI. You're not going to pick your next CEO with AI, right?
So, again, we're talking about subjecting entry-level employees to technology in a way that's potentially not advantageous to them. And so I think corporate leaders really need to be careful about that, acknowledge their responsibility, and engage in these processes carefully and in a compassionate manner.
Aspire to Regulate AI, to Keep From Going off the Rails
Siobhan: I want to go back to what you mentioned a moment ago with the New York City regulations that are coming out on AI and bias. And this is another quote, this one from OpenAI founder Sam Altman. It's not a direct quote, but he basically suggested that we need an international body to regulate AI. Do you think that's realistic?
Peter: Probably not. But I think it's something we should aspire to, at least as a concept. We need some type of regulation, some safeguards, some guardrails here. And this is the moment to do it, as people are introducing, and companies are starting to utilize, these technologies, before they become too entrenched and before they go off the rails, which is also a legal term, by the way.
But if you have one of the most important people in the development of this technology saying that, I think it's something worth listening to. And certainly I would hope that he dedicates some effort to further encouraging the development of that regulation, whether it's on a local level or an international level.
Siobhan: Well, I'm thinking now about whether even a federal policy on regulating AI is feasible. We haven't managed to create a federal policy on privacy, so...
Peter: Yeah, well, these things all start in the cities and the states, and they'll build momentum. There has been federal legislation introduced, but it hasn't really gone anywhere. There's been guidance from the EEOC on potential disability bias in using AI in recruiting. So there is some movement there. And obviously, in the agencies, there's a pretty pro-worker tilt right now, and that could change potentially in a year or two.
But for now, this environment does exist, though it takes time. And if we look at the states and cities that have passed laws, we're still at the very early stages. So rather than expect the federal government to jump in this early, I think they're going to wait and see how things play out a little before they jump in.
And lastly, on that, I will say the business lobby is already firmly against regulation; they don't like regulation of anything. So just add this to the list.
Siobhan: That always is the case, definitely.
So if I'm an organization, and I want to start exploring generative AI and its applications in my workplace, because I see it as inevitable, part of the future, etc., what would you recommend I do before introducing it to my workplace? What kind of language can I use? What can I do to make sure that the introduction of this tool does not cause panic or hurt morale?
Peter: I think the most important thing is just to be really transparent about what you're doing and why you're doing it, and what the progress is along the way.
So I think it's important for companies to share with their employees what their initiative is going to be, who's working on it, and what the purpose is. To the extent you can, reassure employees this is not an effort to replace them in any way. Whether it's to improve efficiency or serve another mission, that should be explained to employees clearly.
I think it's also very important for employers not to be sloppy with communications about how this could potentially replace employees. That's not likely to come from an official company communication, but managers should be trained on how to communicate with their employees about this and should know how to speak responsibly about it. And it should not be an effort that's accompanied by threats or jokes about people's jobs.
So good communication helps everything. And thorough communication doesn't just have to be about good developments; it could be about difficult issues, or ones that are potentially worrisome to employees. But the fact that you're talking with your employees about it, providing them with a forum to come speak with you and ask questions, and being transparent about developments goes a long way in building trust and faith in whatever the initiative is.
Can AI Improve the Workplace?
Siobhan: So, in a best-case-scenario world, and again, we're going back to speculation here, but I want to end on a positive note if possible: in a best-case-scenario world, how do you see these tools improving the workplace?
Peter: Well, I mean, I don't want to be the typical lawyer who shoots down every idea. I think there could be some wonderful uses for AI in the workplace. And certainly, if there are ways to help employees do their jobs more efficiently or better, or from different locations, or that give employees access to more information or resources than they currently have at their disposal to help them do their jobs, I think those things would be welcome.
And I think there's a lot of communication that doesn't happen in companies that could be improved with the use of, you know, AI platforms. Even on basic things, like asking questions about benefits or company policies, there's plenty of internal-facing communications and functions that could be improved.
But also, I think, if it's being used in a way to improve the employee experience, improve the customer experience, then I think it's obviously going to be positive all around.
If it's being used solely as a tool for replacing employees, I think, obviously, that's not gonna be a fun place to work. Probably not a lot of job security for those who remain.
Siobhan: Yeah, I want to ask one last question, because I'm curious: for companies that are introducing this, you said potentially one person or a group of people should oversee AI in the workplace. Which part of the company should that come from? Do you think it would be legal? Would it be a cross-functional team? Where do you see that sitting in an org?
Peter: I think it would have to be cross-functional. And to be clear, it's not just oversight; it's serving as point people for communication and questions. But there are legal aspects that are important, there are obviously technology aspects that are important, and there are HR aspects that are potentially touched. So I would ideally see a cross-functional team overseeing this, if I were the boss.
Siobhan: Well, you are.
Peter: Yeah, that's true. I am.
Siobhan: So Peter, thank you so much for joining me. Is there anything that we didn't touch on that you wished I had raised?
Peter: No. I mean, there's so much we could talk about, and we'll see where this goes. I'm just hoping that what we say doesn't get rewritten or overtaken in the next couple of weeks, because the developments are coming so fast and furious. If I need to come back on, I'm available.
Siobhan: Exactly. We'll have to do it in real time. Well, if anybody wants to find you online, where's the best place for them to look?
Peter: There are two places they could look: my website, which is therahbargroup.com, or my LinkedIn page, which has all of my thoughts and musings and other things.
Siobhan: Excellent. Well, thank you so much for jumping in on this conversation with me, Peter. You know, we're all working it out together. So I appreciate you helping me muddle through.
Peter: I really enjoyed it. Thank you so much for having me.
Siobhan: If you have a suggestion or a topic for a future conversation, I'm all ears. Please drop me a line at [email protected]. Additionally, if you liked what you heard, post a review on Apple Podcasts or wherever you may be listening. Please share Get Reworked with anyone you think might benefit from these types of conversations. Find us at reworked.co. And finally, follow us at Get Reworked on Twitter as well. Thank you again for exploring the revolution of work with me, and I'll see you next time.
About the Author
Siobhan is the editor in chief of Reworked, where she leads the site's content strategy, with a focus on the transformation of the workplace. Prior to joining Reworked, Siobhan was managing editor of Reworked's sister site, CMSWire, where she directed day-to-day operations as well as cultivated and built its contributor community.
Connect with Siobhan Fagan: