Bringing AI to customer service teams isn’t just a matter of saving costs by employing fewer agents — it’s also about raising their game. Following the launch of its AI analytics tool last month, we spoke to Intercom co-founder Des Traynor about the changes AI is bringing to the Customer Service (CS) role and to customer experience.
This final addition to the Fin AI line-up is one I’d been waiting to find out more about, having had a preview last year when I spoke to Paul Adams, Intercom’s Chief Product Officer. At the time, the customer service and support vendor was introducing Fin AI Copilot, an intelligent assistant for CS agents. This was the second phase in its AI strategy, following on from Fin AI Agent, an automated intelligent agent that some customers had found could handle as many as 70% of incoming queries without having to hand over to a human colleague. The Copilot was designed to help those human agents resolve the remaining queries. But what really intrigued me was the final element in the plan, which has now been launched — Fin Insights.
This new functionality provides AI analysis of the responses that have been given, which means that organizations can now identify and fix gaps in the answers they’re providing. One aspect of this is called topic suggestions, where the AI analyzes how human agents have resolved issues it couldn’t handle automatically. If it notices a recurring pattern, it can create new documentation based on that, offer it for approval, and then once approved, refer to it on future occasions. Traynor says that AI company Anthropic, an early user of the feature, has seen it drive a 5% improvement in the number of cases being resolved by the AI agent. He goes on:
They’ve done that entirely by Fin pointing out, effectively, gaps in its documentation. That is this really beautiful flywheel. The way we frame it a lot is like, your humans now answer questions for the first time, but hopefully the last time. So once they answer, Fin picks up the answer and goes, ‘If that’s true, I’ll use that information from here onwards.’
Overcoming the flaws in CSAT
Another aspect of the AI analysis is based on an automated mechanism for measuring satisfaction when customers interact with the CS function. The traditional way of measuring Customer Satisfaction, or CSAT, has always been flawed, first because it depends on customers actively choosing to respond to a survey, which skews the results by excluding those who can’t be bothered to respond — according to a recent survey of Intercom customers, as few as one in 12 actually complete surveys. Second, and more importantly when it comes to CS, consumers generally see it as an evaluation of the support person they spoke to rather than of the company or service itself. Traynor comments:
We’ve noticed that CSAT, generally speaking, is, people tend to rate the agent despite expressing a lot of anger in the conversation. So it will go something like this, ‘I hate your company. I hate your refund policy, but it’s not your fault, Phil, four out of five for you. But I’m very frustrated.’
We often wonder, what are we actually measuring here? I think what we’re measuring is, did Phil do an okay job executing a customer-hostile policy? The answer might be yes, but actually, we shouldn’t call that customer satisfaction, right? Because it’s quite different in a sense. Is the customer satisfied? Absolutely not. Is it Phil’s fault? Also no.
Using AI makes it possible to analyze every aspect of the customer’s CS interaction in context and measure a whole variety of signals to produce what Intercom calls a CX score. This takes into account whether the customer’s issue was actually resolved — or all of them, if they had multiple issues — what customer sentiment was detected, and the quality of service, across factors such as tone, knowledge and timeliness. The approach has thrown up some notable findings. Looking across a range of customers in different industries, Intercom has found that human agents score 10% lower than they do with traditional survey-based CSAT, while AI agents do 14% better. The company points out that a big part of this difference is most likely because human agents are now dealing with the tougher issues that AI agents can’t solve, but the data also firmly supports the view that CSAT responses have always had a skew towards happier customers.
Now that organizations can review a comprehensive score across all interactions, it becomes possible to identify those areas where satisfaction is neutral or declining — an early warning signal that something’s awry. Traynor goes on:
Our businesses have never really been able to listen to every spoken or written word of every customer that they’ve ever had. Now they can, because of AI, and we can now grade them and show them where the frustration is. What we see a lot is that companies might be like, ‘Hey, our customer satisfaction, we always thought it was fine. What you guys have shown us is… with billing questions, it’s extremely low.’
That’s then giving them the ability to dig in and be like, ‘Why?’ We will then surface the things that caused the low score. Customers are frustrated. Customer didn’t expect a delay. Customers wonder why you can’t refund immediately. Whatever.
Sometimes, of course, the cause of customer dissatisfaction may be beyond the control of the CS team. An airline’s support agents can’t fix flight delays, for example. But what the analysis lets them do is identify where they can make a difference. He adds:
Being able to find the ‘addressable market’, if you like, within all of the customer pain, is actually really valuable.
How CS roles will change
Adding analysis that checks up on agent performance and can suggest improvements closes the circle on the AI story, so that it’s not just automating existing processes but also enhancing them and making them work better. This in turn changes the nature of the work done by people in CS teams. Traynor comments:
There’s been a lot of time working on building the agent. The next challenge is now building the tooling around the agent. How do you manage the agent? How do you improve the agent? …
Ultimately, whoever manages the agent, it’s like they’re now accountable for at least half of the work. So you want to make sure that you’re giving them the right software to do that.
While he believes AI will ultimately reduce the number of staff companies employ in frontline customer support roles — the people who directly answer customers’ questions on the phone or via chat — he sees changes rather than an overall reduction in the work that CS teams do. He goes on:
Most support teams in any at-scale company would describe themselves as being snowed under, too much pressure, not enough time, queues too long, can’t spend time on important issues, that type of thing…
What we see initially is people turn on Fin, and there’s much more of a reaction of, ‘My God, we can finally get our head above water!’ What that looks like is, now that we can get rid of the growth of the queue, let’s make sure that we’re spending enough time on the really important, sensitive issues. And then maybe, let’s update our documentation. And then, let’s start with reporting the real issues over to the product team. So the work finds a new home, initially at least.
Rather than laying off staff, there’s also the option of finding new tasks for them to do, such as making proactive outbound calls to customers about upgrades and renewals, helping to lift revenue. Over time, attrition will likely shrink headcount in most CS teams anyway, without the need for forced layoffs. Those who remain will have more responsibility for managing the AI agents and keeping documentation and processes up to date. He goes on:
What you end up with is a team, a smaller team, ultimately, of support specialists, I would say. People who know the product very well, so they can handle the really complicated things. If there is a real weird edge-case bug, they know how to diagnose it…
I really believe, in a matter of one year, there’ll be roles called AI Support Leader, AI Support Specialist, and people who claim expertise in AI for removing support issues, etc. That career path didn’t exist. If you’re a CS frontline worker, best case, you might be like, managing three workers in a year. But you were never getting out of the call center, in a sense. Whereas this, I think, offers a genuine, highly skilled career path that’s going to be very relevant in the future.
My take
One of my worries about the use of agents in roles such as customer support is that they’ll simply automate the existing workload while making no improvements to the underlying processes. The worry is particularly relevant when vendors such as Intercom charge per ticket handled or resolved, which potentially creates a disincentive for the vendor to help its customers fix the product issues that generate extra calls to their CS teams. When I put this exact point to Traynor, he told me that he’d much rather have customers who not only want to fix their current offerings but also go on to create new products that generate new issues for CS to deal with, which I thought was a good way of looking at it.
So I’m glad to see the addition of these analytics tools, which customers will be able to use not only to automate more of the existing workload, but also to identify areas where better documentation or a change to the product or proposition will stop issues from arising in the first place. That’s using the power of AI to go beyond routine automation and actually build in feedback loops that continue to improve how the business operates over time.