“As we implement new AI tools in operations and customer service, I’ve noticed that some of my team members seem reluctant to accept the changes that are coming down. We’ve done our best to develop and implement some new training, but what else can we do?”
With the rapid changes surrounding the implementation of AI and worries about the future of their jobs, it’s no wonder that some team members feel unenthused or resistant. Changes like these can leave people feeling stressed, worried, confused, and/or overwhelmed. And, in some cases, people will reevaluate or change their careers entirely.
Training is another matter. The race to adopt and provide training on these new AI tools continues at an unrelenting pace, one that many employees find overwhelming and exhausting. A recent piece in Forbes pointed to “AI fatigue” — burnout caused by the constant pressure to learn and utilize new AI tools.
Despite extensive training, reservations about the use of AI persist. “Some 27-year-old co-workers of mine are very against AI because it takes away people’s ability to think, which I agree that it can,” wrote one employee on Reddit. “But if you use it properly, it’s an amazing, timesaving, job-saving tool.”
Another Reddit user was more negative: “I’m very uncomfortable with AI. I haven’t ever used it in my personal life, and I do not plan on using it ever,” the worker wrote. “I’m skeptical about what it is being used for now and what it can be used for in the future.”
Recent studies show some of the obstacles surrounding acceptance. A survey released earlier this year by the research and advisory firm Workplace Intelligence and Writer, the generative AI platform, explored the experiences of knowledge workers and executives with AI in the workplace. Two out of three executives reported that the adoption of generative AI has resulted in tension and division in their company, and some 42 percent said it “is tearing their company apart.”
The survey also showed that AI has caused tension between IT teams and other lines of business, and some 71 percent of C-suite executives reported that AI applications are created in organizational silos.
Another survey, of 500 U.S. business leaders in retail, CPG and manufacturing, reported in a whitepaper by the data management firm Stibo Systems, also revealed problems. Some 49 percent of business leaders admitted they are not prepared to use AI in their organizations responsibly. More than half of the organizations reported using AI inconsistently across departments, “resulting in confusion and resistance.”
Still, some 89 percent of business leaders said they were eager to rely on this emerging technology for critical decision-making, and 86 percent said they want more training on how to use AI responsibly.
The whitepaper also recommends investing in AI literacy and upskilling, and improving buy-in by having leaders commit to security, ethics and bias mitigation.
Besides investments in upskilling and ethics, leading large teams through AI-related changes requires a different mindset that might not have been needed for other initiatives, says Chris Trout, founder and principal consultant at Donlon Consulting in Rochester, a business and workforce consulting firm.
“It requires recognizing from the start that people will come at it from very different places,” he says. “Some will already be experimenting, others may fear it threatens their jobs, and some will simply be intimidated by the technology.”
“That mix is normal and with a large team, it’s amplified at scale.”
For managers, this requires proactive, ongoing communication, he says: being clear about goals, creating space for employees to talk with each other, and using what they share to identify and spread new practices.
“It also means staying close to the ground, interacting with teams, picking up on what’s working or where there’s concern and amplifying those insights broadly, so everyone sees progress.”
It takes consistent communication, repetition and visible engagement by leaders to help large teams build trust, Trout says.
The Bonadio Group, a CPA firm in Rochester, has worked hard on communications, including forming an AI committee that includes all staff levels across all service lines. Committee members are involved in pilot programs and have become the company’s AI champions, says John Roman Jr., the firm’s chief information officer, who participated in the recent “Data & AI: Accelerating Adoption and Impact” event at the Genesee Valley Club.
“We explain how AI supports the company’s goals (AI is included in the firm’s 5-year plan) and what’s in it for employees/clients – including time savings, reduced repetitive tasks, or opportunities for more strategic work.”
In the process, the company has learned that it’s vital to address major concerns head-on, he says. “We continually emphasize the opportunities to use AI as a virtual assistant, not that it will replace people. People are very worried about job security and it’s important to acknowledge these concerns.”
Indeed, job security is a top priority, says Adina Dinu, founder of Trauma at Work, a UK firm devoted to helping people impacted by adversity thrive at work. “Address the elephant(s) in the room: is this about job safety?
“AI may feel particularly threatening in this regard. Speak openly about those implications as much as possible. What is the level of skill of the people who struggle with the changes, and what are their circumstances?”
As you talk with your people, be sure to prioritize psychological safety, she says. “Allow people (within reason) to voice their concerns, skepticism or outright opposition. Create a confidential anonymous channel for people to provide feedback,” she says.
And remember that you really can’t overcommunicate or overtrain people at work. “Sometimes people just need more support to change the way they’re doing things. Keep clarifying and upskilling.”
To improve acceptance, sometimes pragmatic approaches are the best. Vincent Schmalbach, a full-stack developer and AI engineer in Germany, offered a few practical tips.
One strategy, he says, is to select a boring job that everyone hates, like hand-sorting customer tickets or migrating data from one system to another. “Let the group see how AI can handle that one job so they can work on more interesting ones. Let them see how much time they can save in their own day before moving on to something bigger.”
Another idea: Work directly with the skeptics.
One of Schmalbach’s clients had a team member who was very worried about the project she was on. “After we had her help design how the AI would handle her most boring daily reports, she became the biggest supporter. She went from saying, ‘This will never work,’ to teaching others how to use it because she had an effect on how it worked,” Schmalbach says.
He suggests pairing people who are having trouble adjusting with those who are early adopters. “But don’t make it look like they’re getting extra help,” he says. “Call it a ‘power user session’ and let them talk about what is and isn’t working. People who are having trouble often see problems that tech-savvy people don’t.”
And when it comes to the elephant in the room — the loss of jobs — be sure to promise that no one will lose their job to AI and keep that promise, he says. “Show them how AI can help them do their job better instead of taking it away. People quickly stop resisting AI when they see it as a tool that makes their work life easier instead of a threat to their jobs.”
Managers at Work is a monthly column exploring the issues and challenges facing managers. Contact Kathleen Driscoll with questions or comments by email at [email protected]