Q&A: Mercer’s Jason Averbook on HR’s new role with AI 

Digital HR strategist Jason Averbook says talent leaders who experiment with AI will help their organizations gain a huge advantage over those that wait.

Artificial intelligence is reshaping the workplace in many ways. While it offers benefits and opportunities for businesses to grow, AI has also raised concerns among employees about issues such as fairness at work and job security. To ease those anxieties, talent leaders play an essential role in creating psychological safety and trust in the workplace amid the discourse surrounding AI. Jason Averbook, senior partner and global leader in digital HR strategy at Mercer, discusses how HR, talent leaders and employees can effectively navigate the workplace as AI adoption continues. 

Q. What do you predict the impact of AI will be on HR moving forward? 

Averbook: AI will have the largest impact on HR of any technology ever. You could argue that the computer, quote-unquote, had a larger impact, but I truly believe that AI will: a) change the function, b) change the role of people in the function and c) change the way that the function has an opportunity to recast itself to be focused on the employee. AI and generative AI will change HR more than any technology innovation ever has.

Q. What is the importance of psychological safety in the workplace, and what is HR’s role in creating psychological safety while implementing AI? 

Averbook: HR’s job, along with managers and executives of the company, is to make sure that the workplace has a culture of trust. When we think about a culture of trust, trust is designed intentionally and executed with safety in mind. When it comes to trust, the role of HR, and everything that it does, and the role of business leaders, and everything that they do, is to create a workplace where employees feel that they can trust their employer. I wouldn’t necessarily use that term, “psychological safety.” I would use the term trust. To me, every time we start to use the word safety, it’s very — this is gonna sound stupid — but it’s very dangerous. You know, “Am I safe at work?” And I think that right now, there’s a bit of artificial hype around the idea that artificial intelligence is somehow not going to keep you safe. It’s very important to understand that artificial intelligence done right — and I say done right on purpose — is the safest thing in the world. Artificial intelligence done wrong or done in a sloppy way, which includes data sources that are incorrect or not validated and trusted, has a huge potential to create problems, just like copying machines did when we used them the wrong way and phones did when we used them the wrong way. So, I think it’s really important that HR doesn’t avoid AI because it’s worried about safety. Just like any powerful tool, it has to be used with governance in place.

Q. What communication strategies can talent leaders use when discussing the use of AI with employees?

Averbook: The best use of AI within organizations is the use of AI that benefits and provides tools and resources for employees to get their jobs done behind the scenes. As far as how questions are answered, the best communication a leader can make to employees about AI would be no communication. AI should be embedded in the way work gets done; we shouldn’t have to announce it. Just imagine this for a second. Imagine if I call the call center in India to get an answer to a question, and the person says, “I just have to warn you, I’ve only been here two weeks.” That would be the equivalent of saying, every time I’m using AI, “Hey, I have to warn you. This is AI.” It’s important to be transparent about how employees and managers are served, but as far as disclaimers and the concept of trying to do change management around AI, you want to change the way work gets done and improve the accuracy of answers; you don’t want to have to train people on “hey, guess what, now you have to use AI.”

Q. To what degree can AI emulate emotional intelligence and how can talent leaders support employees as they navigate this new terrain?

Averbook: AI has an amazing opportunity to serve as triage for employees and managers in a way that humans can’t. So I can quickly ask a series of two or three questions of an employee or a manager group and understand that these employees, quote-unquote, are not having a good day, not mentally well, not enjoying their job, whatever the pattern is that I’m, quote-unquote, looking for, whatever the challenge is that I’m looking for. And then be able to take that data and deliver it to a human, so the human can actually do the emotional work that you’re talking about. When you think about artificial intelligence, it’s really broken down into three different types of work: “hands work,” “heads work” and “hearts work.” Artificial intelligence is really good at “hands work,” you know, pushing out a lot of surveys. It’s good at “heads work,” being able to look at data and notify humans of potential problems, challenges, etc. But when it comes to “hearts work,” that’s where we need people, or humans in the loop, involved. So it’s really, really important that we don’t think artificial intelligence is going to provide emotional support. Artificial intelligence is going to provide alerts that emotional support is needed by a human in XYZ scenario.

Q. What barriers will talent leaders likely face when addressing AI in the workplace and how can they overcome these barriers?

Averbook: The first barrier is seeing AI tools as shiny objects. I use this term shiny object syndrome — SOS — where everyone gets excited about a new piece of technology. They think they can just put in a new piece of technology and it will automatically fix problems. Technology doesn’t fix problems; people fix problems by leveraging tools. So the biggest fear I have is that people are going to go out and say, “Hey, guess what? I want to put a piece of artificial intelligence technology on top of that data.” And when you put AI on top of bad data, you get bad answers and you get a bunch of people who don’t trust it. And you know what else you get? You get people who are never going to use the tools again. So the biggest fear or biggest obstacle is making sure that organizations are ready, that they’ve got the data fitness and that they’re ready to activate artificial intelligence in a way the organization can benefit from, instead of just buying a piece of technology called artificial intelligence and thinking that it’s going to magically change HR, because it won’t. Everything that I said about the potential of AI will be squandered if all we do is focus on the technology.

Q. How can talent leaders nurture a culture of adaptability when it comes to implementing AI in the workplace? 

Averbook: To me, the word adaptable means unlearn. And if I’m thinking about unlearning, basically what I need to do is go through intentional design as to where I embed AI into how people work. So to build a culture of unlearning, it means that I’m making something easier, or making something more valuable to the employee, so that they are willing to unlearn and do something in a different way. That’s the biggest reason why these tools fail: I don’t actually design them with the employee and manager in mind; I design them with me as HR in mind, and then hope that the employees somehow fall in line. The employees aren’t going to fall in line; employees are not going to be adaptable if they don’t get value. It’s very important that AI is infused into the way people work in a way that adds value to how they work. Value is the most important lever in driving unlearning. Most people don’t like to unlearn unless there’s something in it for them.

Q. What final piece of advice would you give to talent leaders and HR professionals as AI becomes more ingrained in workplace practices and cultures?

Averbook: Start experimenting. Organizations that experiment and start to learn what true artificial intelligence is, and what generative AI is, will have a huge advantage over those organizations that sit and wait for someone to come to them and say, “Hey, guess what? Now it’s time to start implementing AI.” Every human in HR today should be actively experimenting with and learning what generative AI is and how it’s going to change their role, but most importantly, how it’s going to help them be a better resource for humans going forward.