SHRM All Things Work

Merve Hickok on Using AI Responsibly at Work

Episode Summary

In this episode of All Things Work, host Tony Lee is joined by Merve Hickok, founder of AI Ethicist.org, a website focused on the ethically responsible development and governance of AI, to discuss how organizations can ensure that they use AI both responsibly and ethically in their business operations.

Episode Notes

With its promise of making work both easier and more efficient, AI adoption continues to expand across every industry. Hiring and talent acquisition professionals, in particular, increasingly rely on AI to help address persistent talent shortages. However, reports indicate AI can produce biased hiring decisions and other problematic outcomes at work. In this episode of All Things Work, host Tony Lee is joined by Merve Hickok, founder of AI Ethicist.org, a website focused on ethically responsible development and governance of AI, to discuss how organizations can ensure they use AI both responsibly and ethically in their business operations.

Follow All Things Work wherever you listen to podcasts; rate and review on Apple Podcasts.

This episode of All Things Work is sponsored by ADP.

Music courtesy of bensound.

Episode Transcription

Speaker 1:

Business success requires thinking beyond today. That's why ADP uses data-driven insights to design HR solutions to help your business have more success tomorrow. ADP. Always designing for HR, talent, time, benefits, payroll and people.

Tony Lee:

Welcome to All Things Work, a podcast from the Society for Human Resource Management. I'm your host, Tony Lee, Head of Content here at SHRM. Thank you for joining us. All Things Work is an audio adventure where we talk with thought leaders and tastemakers to bring you an insider's perspective on All Things Work.

Today, we'll be exploring the use of artificial intelligence in the workplace and whether employers, and the technology vendors they rely on, are using those AI tools responsibly. A growing wave of regulation could force organizations that develop and use AI to implement what are called responsible use policies, and employers that don't already have a process for ensuring the responsible use of AI need to create one. My guest today, joining me to discuss the use of AI in the workplace, is Merve Hickok. Merve is the founder of AI Ethicist.org, a website focused on the ethical and responsible development and governance of AI, and is a lecturer at the University of Michigan in Ann Arbor. Merve, welcome to All Things Work.

Merve Hickok:

Thank you so much. Thanks for having me.

Tony Lee:

Well, no, it's our pleasure to have you here. So let's start with, you know, there's a new survey of executives that found that nearly 92% said they are increasing investments in AI, but only 44% said their organization had established policies to support the responsible use of AI. Now, why do you think there's that disconnect?

Merve Hickok:

I think there's a lot of hype about how AI systems can help your business do things better, more efficiently, at a cheaper cost. When you look at the market predictions and surveys, the market growth numbers are in the trillions. So a lot of companies feel like they might be lagging, missing a huge opportunity, and decide to invest in AI, whether they're developing it themselves or deploying AI systems. It has become this [inaudible 00:02:15] thing that promises to solve all your problems, right? But most organizations don't fully understand what that means for the organization, their employees, their customers, and, in the context of HR, their candidates.

So investing in AI or deploying AI is the easiest part of the journey. The harder part is designing and deploying it in responsible ways, making it work for all the other parts of your organization. So some of it is, you don't know what you don't know. You come into this, you're looking to invest, and you are at the beginning of your journey. I do a lot of consulting and training for private organizations, starting at the C-level and board level. A lot of the time, once we start talking, it's, oh, we didn't realize this, or, we didn't know about this, or, we need to look further into this. So some of it is the investment it requires and looking at it holistically for your organization, because this is not a one-off thing. You need to understand the risks and harms this product might create or amplify. Do you have the means to find out if you have certain harms or risks or biases? Do you have the means to govern? Are you okay with those risks and harms? Who gets to decide for different stakeholders? Who's accountable?

Like I said, one part is you don't know what you don't know, and then you might start investing in responsible AI as well. I also see a lot of companies, unfortunately, who pay less attention to it because they equate ethics with compliance. And since regulation and policy on AI is still a new field, still growing, these companies think they don't have to do anything yet.

Tony Lee:

So when you look at AI adoption, I mean, it's been especially high in recruitment and hiring. The talent shortage has made that such a priority. But I often hear criticism, and we also report on criticism, that AI is leading to biased decisions in the hiring process. So I guess the question is, how can companies make sure that the AI products they're using aren't biased? And to put a fine point on it, New York City is one of many cities that have passed laws regulating that use. So, how can we make sure that those products aren't creating biased decisions?

Merve Hickok:

That's a great question. There's such a wide spectrum of products out there. Some are really, really good and deliver what they promise to do. And some of them are literally snake oil. I think the work for companies, especially for employers, starts before you even start using the product. It's crucial, and I cannot stress this enough, that companies first establish their needs and objectives and then try to find the vendors that might provide solutions for those needs.

A lot of what's happening right now is the other way around. You have vendors marketing their product as the solution to all your HR problems: better, faster, cheaper. It looks great, promises to bring new insights, reach more diverse candidates, run a less biased process, get better hires, et cetera, and you can do it at scale. You feel like you need this product. What is important for employers is doing in-depth due diligence on these products before they make a decision, and building the capacity internally to ask the right questions for that due diligence, or getting independent, external help.

But whatever you do, make sure you understand how these products work. What are their potential benefits as well as their limitations? Does it really do what it says it does? How will it affect your candidate experience? How will your team understand and use this product? And what does that mean for the whole hiring pipeline and process? You're bringing in this product and plugging it in somewhere in the process, but what are your controls and safeguards around that?

So, long story short, there's a lot that employers can do, but I would say, start with your objectives first. Build capacity to ask the right questions at the beginning, and as long as you're using the system, it becomes this life cycle governance. And ensure that you have those controls at your end as well, so you're not solely dependent on what the vendor is saying about the product.
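
One concrete version of the in-house control Hickok describes is an adverse-impact check run on your own screening outcomes, independent of whatever the vendor reports. Below is a minimal sketch in Python using the EEOC's four-fifths rule of thumb; the candidate data, group labels, and the assumption that you can export outcomes from your ATS are all hypothetical.

```python
# Minimal sketch of an in-house adverse-impact check, assuming you can
# export each candidate's demographic group and screening outcome from
# your ATS. The group names and data below are hypothetical.
from collections import Counter

# (group, passed_ai_screen) pairs exported from the ATS -- hypothetical data
candidates = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

applied = Counter(group for group, _ in candidates)
passed = Counter(group for group, ok in candidates if ok)

# Selection rate per group: the share of that group's applicants who passed.
rates = {g: passed[g] / applied[g] for g in applied}
benchmark = max(rates.values())

# EEOC four-fifths rule of thumb: a group's selection rate below 80% of the
# highest group's rate is evidence of adverse impact worth investigating.
for group, rate in sorted(rates.items()):
    ratio = rate / benchmark
    flag = "REVIEW" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f} -> {flag}")
```

On this toy data, group_b's impact ratio falls below 0.8, which is exactly the kind of signal an employer would take back to the vendor and investigate rather than relying solely on the vendor's own claims.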

Tony Lee:

Right. So let's talk more about vendors, because HR professionals are doing their jobs every day, and they tend not to be HR technology experts; they rely on their vendors for that. I hear from HR professionals who say they want to regulate how AI is being used. They want to make sure it's not biased and that it's being used responsibly, but then they'll read materials from their HRIS provider or their ATS provider or another vendor that say, oh, well, we've added AI to the product and now it's better. And they have no idea what was added and how it's better, right? So, how can they make sure it's being used responsibly if they're not communicating with their vendor, or their vendor is not communicating with them?

Merve Hickok:

I think responsible use is about ensuring that your actions for more efficiency and less [inaudible 00:07:35] do not end up harming others. It's about going beyond compliance and investing in capacity building to ask the questions. I keep underlining that, but it is for a reason. And both on the vendor side and on the client side, the employer side, it's about giving people the space to raise concerns, and establishing the trust that what they flag as a concern will not hurt them later. It's aligning your practices with your organization's values and then walking the talk. Consider this in developing the product, for example: your project team, your developers, are saying, hey, this is going to impact this group negatively. And you might say, I'm okay with that. Or, we need to hit the market as quickly as possible, et cetera, we can deal with that later. Or you might actually sit down and discuss this, mitigate those risks, et cetera.

Same thing on the employer side. Some of your employees might realize, after reading these materials or coming into contact with the HRIS, et cetera, that, hey, we've got some concerns. Instead of just pushing and go, go, go, it's about slowing down and understanding that responsible development and use matter for the sustainability and resiliency of your business as well.

Tony Lee:

You know, the converse is true as well, because you've got a lot of vendors who say, we're leveraging AI and we're doing it for your benefit. And then, when pushed a little by HR to say, well, show me how you're leveraging AI, it turns out it's not really AI. I'm not sure what it is, but it's not truly AI. So I guess your advice on just being on top of what your vendor's doing and asking good questions is the best way to ferret all of that out, right?

Merve Hickok:

It is. And I see what you just mentioned a lot. Very, very often, unfortunately. These systems are still relatively new in the field, and knowledge about them has not become mainstream yet. And to your point, a lot of practitioners, and this is not only for HR, by the way, I see this across different industries, when they hear AI, AI systems, deep learning, et cetera, they're intimidated by that technology and feel like, if they're not computer engineers or developers, they couldn't possibly ask the right questions.

So you have this vendor who leverages AI and says, hey, we're using AI, but we cannot tell you how it works in too much detail because it's proprietary information. Then it becomes really hard for smaller clients to figure out what's actually happening. Or, vice versa, a vendor that's not using AI but says it does; it might be just simple data analytics, decision trees, or keyword matches, et cetera, and they're just riding that hype. But what's important, especially for the employer-side clients, is that ultimately they carry most of the risk on this.

So they definitely need to use their strength to establish better governance and communication structures with those vendors. And that kind of relationship raises the bar for everyone, right? While you're waiting for legislation requiring better oversight and transparency, it really, really comes down to this kind of relationship and responsible approaches from both sides.

Tony Lee:

Now, how important is it for HR professionals to know how the AI is working? For example, should HR be able to explain how an algorithmic decision is being made? I mean, is that just good practice or is that actually asking for more than what's reasonable?

Merve Hickok:

I don't think it's just good practice. I think it's absolutely crucial for the organization. If you don't understand what kinds of risks some of these systems bring and what that means for your organization, then you're in serious trouble. And again, I don't just say this for HR professionals. This is really about business practice, you know, a holistic approach. We talk a lot about HR being the strategic partner, being at the table with business functions, contributing toward the organization's strategy. So understanding algorithmic decisions is one of those areas where HR absolutely needs to focus as a strategic business partner.

Who are you hiring into the organization? Who are you not hiring? How does that change your organizational culture and workplace environment? How do you make decisions about who learns what, who gets promoted? How much does it cost you when you make wrong decisions? Are you setting your organization up for success or slowly paving the way for implosion? If you're losing candidates who are also your clients, how is that impacting your organization's bottom line in terms of profits, et cetera? It's not only the cost of it; you're also impacting the business side.

So if you don't understand how these decisions are made, how will you ensure the resiliency of your organization? So I will take it further than good practice and say absolutely necessary.

Tony Lee:

Yeah, that makes perfect sense. All right, so let's take it one more step then. It sounds like employers need to have some type of AI risk and governance strategy and structure in place, including identifying the person who's responsible for it. I think the challenge there for a lot of HR professionals is that AI touches so many different departments, from technology to risk and compliance, contracts, data privacy, and obviously recruitment and hiring, to name a few. I think a lot of organizations struggle over where that governance responsibility belongs. What would you advise?

Merve Hickok:

I think that's the beauty of it. I would take that as something good about AI: the multidisciplinary effort that responsible AI and governance require. And that goes for both its development and its use. It cannot be a single person's or a single department's job. For example, if you're just beginning to use these systems, you should definitely have a dedicated role who would drive the governance strategy and training, et cetera, across your organization. But you definitely need to involve all departments, with clear-cut responsibilities, to govern the systems properly. You pointed out a great number of stakeholders that should understand these systems, be involved in the conversations, flag their concerns, but also understand the overall implications for the organization.

The thing with AI, as opposed to other technologies or other things you might be bringing into the organization, is that you change the nature of how things are done. Some practices get reshaped around the capabilities of the technology, or what the technology provides as outcomes. So you need all these departments to look at how the systems interact with their environments and different users, candidates, employees, or those [inaudible 00:14:56] governance, et cetera, and then see how this shapes new ways of doing things.

For example, if you have brought in an AI system to predict performance, how do people understand the system? How do they change their behavior according to the system? What does the system, that AI-powered performance management system, incentivize? And if people are changing their behavior around that system, what does that mean for your business and the governance you built around it? So it's definitely multidisciplinary. It is an ongoing thing, not a one-off, and it definitely takes intentional effort.

Tony Lee:

Yeah. But it does sound like you're suggesting something like an AI steering committee, as opposed to putting it all on the shoulders of one person.

Merve Hickok:

Absolutely. Absolutely. Like I said, that dedicated person is crucial. If this hasn't been established across your organization, in different departments, you do want to bring in someone like a chief ethics officer who would execute the decided strategy on AI governance, get the organizational training going, and build relationships internally and externally with different stakeholders, et cetera. You want a dedicated role to get things rolling, but ultimately what you want is established committees, like you said, that are multidisciplinary and specifically looking at how the systems are working or not working for the organization.

Tony Lee:

So we touched on regulations. HR tends to be very compliance-oriented; we want to make sure that we're not doing anything that gets the company in trouble, right? Whether it's with a federal or a state or a local entity, or in any legal way. So what should HR expect to see next in terms of AI regulations? I mean, do you think we have a lot more coming down the pike, or are we about as advanced as we're going to get in the near future?

Merve Hickok:

No, I definitely think there's more coming. One of the hats I'm wearing is that I'm also the Research Director at the Center for AI and Digital Policy. And because of my HR background, I'm extra interested in the policy and regulation changes around the world, as well as in the US. You mentioned the New York City regulation. It has a lot of vague definitions, and it misses certain things that HR professionals or companies should look for, but it's definitely a step in the right direction with its audit requirements and transparency, et cetera.

We are seeing this already. You know, we have the Illinois Biometric Information Privacy Act, and there is another one coming out of Washington, DC, from the attorney general, that is going to be similar to the New York City one, wider than HR but definitely including HR practices. The European AI Act, in its draft form, requires HR software vendors to go through a number of obligations: to conduct conformity assessments, document their practices, and submit those to a European database.

So, for example, if you're a global company providing services as a vendor in Europe, you have to go through that. Now, whether you provide that information to your clients outside Europe is another thing. Or if you are, for example, a global company hiring into New York or DC or California or Illinois, there are things you need to be careful of. And on top of that, we see the EEOC and FTC looking at different angles of this software. The EEOC has a couple of initiatives focusing on AI and trying to expand that, providing guidelines and guidance to employers as well as vendors. So they're trying to expand their [inaudible 00:19:03] as well. So there's definitely more in the pipeline for sure.

Tony Lee:

Absolutely. And it sounds like if HR is going to stay on top of it, they should probably talk to not only their vendors, but also their employment attorneys and make sure they know what's coming down the pike.

That's going to do it for today's episode of All Things Work. A big thank you to Merve Hickok for joining me to discuss the evolving use and oversight of AI in the workplace. And before we get out of here, I want to encourage everyone to follow All Things Work wherever you listen to podcasts. Also, listener reviews have a real impact on podcast visibility, so if you enjoyed today's episode, please take a moment to leave a review and help others find the show. Finally, you can find all of our episodes on our website at SHRM.org/ATW podcasts. Thanks for listening, and we'll catch you next time on All Things Work.

Speaker 1:

Business success requires thinking beyond today. That's why ADP uses data-driven insights to design HR solutions to help your business have more success tomorrow. ADP. Always designing for HR, talent, time, benefits, payroll and people.