OpenAI CEO Sam Altman has said he is “confident” that AI will replace customer service jobs first as it reshapes the job market. Speaking on a recent episode of “The Tucker Carlson Show”, Altman said, “I’m confident that a lot of current customer support that happens over a phone or computer, those people will lose their jobs, and that’ll be better done by an AI.”

Beyond customer service, Altman suggested that programmers could be next. “Someone told me recently that the historical average is about 50 per cent of jobs significantly change… every 75 years, on average. My controversial take would be that this is going to be like a punctuated equilibria moment where a lot of that will happen in a short period of time,” he explained.

Altman now believes the staffing structure of contact centres will change rapidly rather than vanish, a softening of his earlier prediction that human customer service would disappear entirely. He added that roles requiring human connection, such as nursing, are unlikely to go away.
“No matter how good the advice of the AI is or the robot, you’ll really want that,” he explained. Human support agents are also able to offer reassurance, especially when dealing with vulnerable customers.

Still, Altman’s view is not a new one. Oracle announced last year its ambition to automate “all” of customer support, while Salesforce CEO Marc Benioff recently highlighted the cutting of 4,000 live agents from the company’s support team.

Yet many industry professionals have greeted the OpenAI CEO’s latest predictions with scepticism. With Gartner forecasting that by 2027 half of companies will walk back plans to shrink their customer support headcount, that caution seems especially relevant.
What is giving Sam Altman sleepless nights
In the same interview, Altman admitted that he “doesn’t sleep that well at night,” pointing to the ethical and moral responsibility of leading a company whose AI chatbot is used by hundreds of millions of people each day. He explained that his biggest concern lies in the small decisions around model behaviour that can carry significant real-world consequences.

“Look, I don’t sleep that well at night. There’s a lot of stuff that I feel a lot of weight on, but probably nothing more than the fact that every day, hundreds of millions of people talk to our model. I don’t actually worry about us getting the big moral decisions wrong, and maybe we will get those wrong too,” Altman added.