SHRM All Things Work

Avi Gesser on How ChatGPT Can Best Serve Workers

Episode Summary

Advocates of AI chatbots, such as ChatGPT, point to their potential to boost employee efficiency and productivity, but skeptics express concern about their reliability and accuracy. In this episode of All Things Work, host Tony Lee speaks with Avi Gesser, partner at Debevoise & Plimpton and co-chair of the firm’s Cybersecurity, Privacy and Artificial Intelligence Practice Group, about where AI tools excel and where they falter.

Episode Notes

This episode of All Things Work is sponsored by UKG.

Episode Transcription

Speaker 1: This episode is sponsored by UKG. UKG offers HR, payroll and workforce management solutions that support your employees to make your fairytale workplace a reality.

Tony Lee: Welcome to All Things Work, a podcast from the Society for Human Resource Management. I'm your host, Tony Lee, head of content here at SHRM. Thank you for joining us. All Things Work is an audio adventure where we talk with thought leaders and tastemakers to bring you an insider's perspective on all things work. Today we're discussing generative AI and ChatGPT, which uses artificial intelligence to comb through vast data sets to create content and have human-like conversations with users. Of course, ChatGPT is only as good as the data it's been given, and the work produced by ChatGPT and other generative AI tools can be incorrect, biased, racist, proprietary, or copyrighted. While many employers are adopting the technology in their workplaces, others are calling for a pause in its development. Joining us today to talk about this issue is Avi Gesser, a partner at the law firm Debevoise & Plimpton in New York. Avi is co-chair of the firm's Data Strategy and Security Group, which focuses on cybersecurity, privacy, and AI. Avi, welcome to All Things Work.

Avi Gesser: Thanks for having me. Great to be here.

Tony Lee: Well, we really appreciate you taking the time. Let's start by defining the issue, because this is fairly new for a lot of folks. So advocates of AI usage in the workplace have been pointing to case studies and research showing that ChatGPT can boost employee efficiency and productivity pretty dramatically. Now, on the other hand, detractors have said that since ChatGPT works on probabilities and models its output on whatever content it uncovers on the internet, it learns to become, quote unquote, "an extremely sophisticated BS artist," and that much of what it provides isn't correct or accurate. So given these competing views, how should employers view ChatGPT usage in the workplace?

Avi Gesser: It's a great question. I think ChatGPT and other generative AI tools are tools, right? So they're not magic, and I think there are things that they're really good at and there are things that they're not good at. We've worked with a lot of clients who have found low-risk, high-value use cases, but it often takes some time and some trial and error to get there. And just because they're good at one thing, it doesn't mean that they're good at other things. And I think it's helpful to think of these tools like magicians. They can do amazing tricks, but you don't want to be fooled into thinking that they can do actual magic. They're not wizards. And so just because they can pull a rabbit out of a hat, it doesn't mean they can also pull a gopher out of a hat. And so you have to play around with them to find out which tricks they're really good at, and be able to use those tricks for what you want to use them for, but also to figure out what they're not good at.

Tony Lee: Right. And we're going to talk in a bit about creating policies around them, but some of the low-hanging fruit that employers seem to be having success with is in customer service with chatbots. I've heard some say that they're great for collections, going after accounts due. Any other low-hanging fruit where ChatGPT seems to really be making a difference pretty quickly?

Avi Gesser: So it's excellent at translating, summarizing, cleaning up grammar. I use it when I've got a very complicated long article that someone wants me to read that I don't have time to read, and I just have it give me the five important bullet-point takeaways. If I've got a very long, complicated, run-on paragraph that I want to clean up, it's very good at that. It's good for tech support, right? If you have a problem and you can't figure out how to do something in a piece of software, it's very good for that. Just brainstorming, idea generation, first drafts of speeches, first drafts of reference letters, job postings: anything where there's a lot of content on the internet that it could be trained on, that is somewhat routine, where you want a first draft and then you're going to build on it, and you know where it's missing stuff or where it might get stuff wrong, so you can fix that and build on top of what it's put forward. Those are all very good use cases.

Tony Lee: Okay, great examples. And one other thing we should probably say, because we forget that this is less than a year old, so many people are still getting caught up: the idea of how ChatGPT gets smarter. It's a model that can be trained by giving it links to articles it should read, links to examples of what you're looking for. Any other thoughts on how ChatGPT gets smarter in an employer environment?

Avi Gesser: Well, I think you have to remember that it's not iterative. And so normally when you talk to a human and you say, "Draft me a press release for a data breach," right? A PR firm would ask you, "Okay, what's the company? What data was taken? Where are you located? What industry are you in? Who's the target audience?" Really a whole bunch of follow-up questions. These tools won't do that. And so to get better results out of them, you have to anticipate all the follow-up questions you'd normally get from a human and put them into the initial query. So part of it is about learning what is called prompt architecture, which is developing very detailed prompts with all the caveats and details that you want all in the first draft, the first question, rather than how you would normally develop ideas and iterate with a human.
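To make that concrete, here is a minimal sketch of what front-loading those follow-up questions into a single prompt might look like in practice. It assumes the openai Python SDK and an OPENAI_API_KEY environment variable; the model name and the data-breach details are illustrative, not from the conversation.

# A minimal sketch of "prompt architecture": front-loading into one prompt
# the follow-up questions a human (here, a PR firm) would normally ask.
# Assumes the openai Python SDK and an OPENAI_API_KEY environment variable;
# the model name and scenario details are illustrative.
from openai import OpenAI

client = OpenAI()

# The vague version: the model won't ask clarifying questions,
# so the output will be generic.
vague = "Draft me a press release for a data breach."

# The detailed version anticipates the follow-ups up front:
# company, data taken, industry, audience, tone, length.
detailed = (
    "Draft a press release for a data breach. The company is a mid-size "
    "US retailer; the data taken was customer names and email addresses, "
    "but no payment card numbers; the audience is existing customers; "
    "the tone should be factual and reassuring; keep it under 300 words."
)

for prompt in (vague, detailed):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative; any chat-capable model works
        messages=[{"role": "user", "content": prompt}],
    )
    print(response.choices[0].message.content, "\n---")

Comparing the two outputs side by side is a quick way to see why anticipating the follow-up questions in the first query, rather than iterating as you would with a human, tends to produce a much more usable first draft.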

Tony Lee: Yeah, good advice. All right. So let's focus on compliance a bit. So very recently, four federal agencies, the EEOC, the Department of Justice, the Consumer Financial Protection Bureau, and the Federal Trade Commission, held a press conference highlighting their commitment to enforcing existing civil rights and consumer protection laws as they apply to AI in the workplace. So what does it mean that these agencies are taking this so seriously, so quickly?

Avi Gesser: Well, I think it means that there are concerns about this technology, which is being widely adopted very quickly, and whenever that happens, there are risks. And I think they want to signal that just because there aren't generative AI laws on the books, it doesn't mean there's no legal risk in how these tools are used. And so existing rules are going to apply to the use of these tools in certain contexts. So if you lie to customers about what these tools can do, if you oversell, or if you tell customers, "This work was done by one of our expert humans," when it was actually created by a generative AI tool, or if you use these tools in a way that discriminates against protected classes, there are going to be laws already on the books that might apply to those situations. And so people need to be aware of that.

Tony Lee: Right. Okay. So we've heard of some companies, maybe smaller, more high-tech companies, that are basically saying to employees, experiment, use ChatGPT and generative AI anywhere you like, let all the flowers bloom, right? But the question is, should they put a hold on those efforts until they develop a policy on the usage? And what are the legal risks if they don't?

Avi Gesser: Well, as we were just discussing, existing laws apply to these tools, and I think companies already have lots of policies that apply to generative AI tools, even if those policies aren't specifically directed at them. So confidentiality of client information, being honest with customers: all these things are going to apply. So some companies are just providing employees with a reminder about existing policies and how they might apply to generative AI tools. Some companies are banning the use of open versions of the tools, the versions where individuals sign up in their personal capacity, because of some of the risks associated with confidentiality and privacy: what data gets shared with the provider and how the provider might use that data for future training.

And those companies are pursuing licensed, enterprise-wide versions of the software, where the contract is not with the individuals but with the enterprise, and they have more control over what happens to the data that's input, confidentiality over that data, and whether it gets used in training sets. But it really depends on the company, what it does, the culture, the size, its risk tolerance, and so forth.

I think allowing all employees to use any generative AI tool for any work-related task does carry some significant risk: confidentiality risk, privacy risk, contractual risk, cyber risk, intellectual property risk, transparency risk, quality control risk. There are a lot of risks, and I think it's hard to say how those risks apply without knowing the specific use case, the specific company, and so forth. And I think what's been surprising to me is just how varied the policies are. They range from everybody can use them for anything, to nobody can use them for anything.

Tony Lee: Wow. So we're all feeling our way here, but you have come up with three steps that you would recommend to companies when they're trying to create an AI policy for employees to follow. So if you don't mind, let's walk through those three. So number one, you talk about outlining prohibited uses. Can you share more about what that would include?

Avi Gesser: Sure. So I think everyone would agree that prohibited uses should include anything that violates company policy, anything that's illegal, anything that would be highly damaging to the company's reputation: using it to generate phishing emails, for example, or to defraud people, anything like that. And then, to the extent you're using a version that doesn't have strict confidentiality controls in place, you want to avoid using the tool in a way that inputs very confidential company information or client information, or you require people to change the settings and opt-outs in such a way as to limit that confidentiality risk. And then you've got to stay away from uses that could be caught by other regulatory regimes. So for example, in New York, there's a rule coming into effect in July that limits your ability to use AI tools for hiring decisions or promotion decisions. So you want to stay away from using these tools for those kinds of decisions, because they would then be subject to this other regulatory regime, which has a lot of onerous requirements that these tools probably wouldn't be able to comply with.

Tony Lee: Okay. All right. So we've defined what's prohibited. Your step two is defining what's not prohibited, basically describing what the company is comfortable allowing employees to do, right?

Avi Gesser: Sure. And that could be everything else. So you could just have two buckets, right? There's what's specifically prohibited, and then everything else is allowed, perhaps with some illustrative examples: some of the things we talked about, like translating non-sensitive documents, summarizing documents, IT support, generating ideas, things like that. Or it could be an exhaustive list: these are the prohibited uses and these are the allowed uses, which would then mean there's some other bucket in between that is presumptively not allowed, but people could apply to have those undefined use cases permitted. And then you'd need some mechanism by which you would test those use cases, evaluate them, and either put them on the allowed list or put them on the prohibited list.

Tony Lee: Right. And that you define as step three, and you actually suggest that perhaps there's some type of council or group that evaluates these things for employees, right?

Avi Gesser: Yeah. And that's a complicated process. You're balancing the short-term and long-term benefit to the business from a particular use case against the downside risks, which are the ones we mentioned before: privacy, confidentiality, IP risks, and so forth. And so whether that's one person or a group of people, I think you want to pick the right person or the right group, because that's going to be a controversial decision in some circumstances. There are a lot of factors to consider, and there will be people who are disappointed with certain decisions. And so you want to have a good process for that.

Tony Lee: Well, at most companies HR owns employee communications and typically employee policies. But again, this is so new. Are you seeing examples? Is it HR, is it legal, is it IT? Who is making these calls?

Avi Gesser: I think there's no one right way to do it. It's a little bit dependent on the person and whether they're flexible and thoughtful and have the bandwidth to do this and have the right place in the organization to carry it out. But we've seen committees that have all those people and more. I think the more voices that opine on these issues, the better the process and the better the results and the more credibility the decision making has, and therefore the more likely it's going to be effective. But certainly HR is a big part of it.

Tony Lee: Yeah. So you mentioned the New York law limiting the use of AI in recruiting. Is this the tip of the iceberg? Do you think we're going to see more states and maybe localities getting involved in legislating how AI can be used in employment?

Avi Gesser: Yeah, and I think we're already seeing it. I'm not a huge fan of the New York AI hiring law, at least as it's drafted now. I think the testing regime is too onerous and prescriptive, but the idea of regulating AI in hiring, in the use of facial recognition, in lending and insurance, I think that's all coming. And unfortunately, it's not going to be harmonized at a federal, national level the way it looks like it's going to happen in Europe. It's going to be a patchwork of state law, in some cases municipal law, and maybe eventually some federal law as well. But it's going to be messy.

Tony Lee: And you make reference to it, that some European countries have just basically outlawed the use of ChatGPT, correct? Do you foresee the rest of the globe taking a stricter approach to this?

Avi Gesser: I think Italy has now reversed that decision, at least partially, but for sure, you're going to see a lot of movement back and forth, where the tools will be augmented to meet regulatory requirements, and then the regulators will reevaluate and make decisions. And there may be events that result in immediate bans for some period of time to assess risk. So I think this is going to be a fairly chaotic process for a while. And I think what you see reflected in Europe is how focused the European regulators are on privacy issues and the privacy risks that these tools pose. Whereas in the United States, you may see the focus more on the cybersecurity risks posed by some of these tools. Different countries will have different priorities that they care about, and to the extent that the tools demonstrate a weakness in a particular priority, you may see some very aggressive action, at least in the short term, before the companies are able to address the regulators' concerns.

Tony Lee: Yeah, a lot of trial and error. Let's focus for a second on HR specifically. We've seen examples of HR leveraging AI to handle some fairly basic tasks: writing job descriptions, setting up chatbots to answer employee questions. Does that seem like a good use? Is there any downside to that? Do you see it expanding?

Avi Gesser: Those are good uses. And the one obvious downside risk is quality control. People need to be able to know when the output from the tool has a mistake in it or is missing something that's important. But I think the less obvious downside risk is the loss of training opportunities. Sometimes, to get a really good document, it's better to start from scratch than it is to start with an okay document, and people learn a lot from the drafting process. So I think you have to decide for yourself: is the value, the speed, the efficiency worth it if what we're losing is an opportunity for people to get better at writing and editing, and at the thinking and critical analysis that come from drafting? And that is really going to depend a lot on what you're drafting, who's drafting it, and what your business is.

Tony Lee: It's such a great point, and it's one that's being raised in education: creating a term paper not from scratch but from drafts isn't the same learning experience. So education and many other fields are wrestling with this too, I imagine.

Avi Gesser: Yeah, and I think some businesses are very much about quality, and I think these products produce good quality, but not necessarily great quality outputs. And so you have to balance between speed and cost and efficiency and quality. And I think different businesses are going to make different choices. And there's the short-term value and the long-term value. And so if long-term the impact is going to be that you're not going to have people at your company who are very good at drafting and writing and so forth, because so much of it is being done by these tools, you just have to decide for yourself whether that's a good business strategy for you.

Tony Lee: Well, that is going to do it for today's episode of All Things Work. A big thank you to Avi Gesser for sharing his insights on ChatGPT and the future of AI in the workplace. Before we get out of here, I want to encourage everyone to follow All Things Work wherever you listen to podcasts. Also, listener reviews have a real impact on a podcast's visibility, so if you enjoyed today's episode, please take a moment to leave a review and help others find the show. Finally, you can find all of our episodes on our website at shrm.org/podcast. Thanks for listening, and we'll catch you next time on All Things Work.

Speaker 1: Every leader wants their employees to live and work happily ever after. Thankfully, you don't need a magic wand or a fairy godmother to make that dream come true. HR, payroll and workforce management solutions from UKG give you the tools you need to support and celebrate all of your people. Now you can make your fairytale workplace a reality with UKG.