This blog was written by Adrian Swinscoe, Customer Experience Advisor, Author, Speaker, Workshop Leader and Aspirant Punk at Punk CX.
We’re seeing a surge of activity — experimentation, piloting and implementation — in the customer service space regarding new artificial intelligence (AI)-enabled tools, particularly those aimed at helping customers more easily self-serve and improving agent efficiency for live interactions.
And generative AI is powering many of these new initiatives, particularly those focused on agent enablement. However, given the inherent risk of hallucinations within generative AI technologies, these new tools tend to come with a “Human in the Loop” (HITL) facility built in.
HITL machine learning is a type of human-machine collaboration that involves people actively participating in the use, evaluation and training of AI models and systems. In a customer service setting, agents provide feedback on the system’s predictions, suggestions and annotations. This helps the system’s models learn and improve.
Imagine, for example, that a customer sends an email regarding a problem they’re having. The system would automatically “read” the customer’s query and based on that, would recommend a knowledge base article or a particular solution that will help solve the customer’s issue.
The agent will then — based on their own training, experience and understanding of the problem — assess whether that article or proposed solution will solve the customer’s problem, decide whether to use that recommendation or not, and at the same time, provide feedback to the model regarding the accuracy and appropriateness of its suggestion.
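The loop described above can be sketched in a few lines of code. This is a purely illustrative toy, not any vendor’s implementation: the keyword-based “model,” the article IDs and all the names below are hypothetical, standing in for a real recommendation system, an agent’s accept/reject decision, and the feedback log that later retraining would draw on.

```python
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    query: str
    article_id: str
    confidence: float

@dataclass
class FeedbackLog:
    """Stores agent verdicts so the model can later be retrained on them."""
    records: list = field(default_factory=list)

    def record(self, rec: Recommendation, accepted: bool) -> None:
        self.records.append(
            {"query": rec.query, "article_id": rec.article_id, "accepted": accepted}
        )

def recommend(query: str) -> Recommendation:
    # Toy "model": keyword match against a tiny hypothetical knowledge base.
    kb = {"billing": "KB-101", "password": "KB-202"}
    for keyword, article in kb.items():
        if keyword in query.lower():
            return Recommendation(query, article, confidence=0.9)
    return Recommendation(query, "KB-000", confidence=0.2)  # low-confidence fallback

def handle_ticket(query, agent_accepts, log: FeedbackLog) -> Recommendation:
    # 1. System "reads" the query and proposes an article.
    rec = recommend(query)
    # 2. The human in the loop judges the suggestion; their verdict is logged
    #    as training signal, whether or not they use the recommendation.
    log.record(rec, agent_accepts(rec))
    return rec

log = FeedbackLog()
# Here the "agent" is simulated as accepting any confident suggestion.
rec = handle_ticket("I forgot my password", lambda r: r.confidence > 0.5, log)
```

The important design point is that the agent’s decision is captured as data either way: rejected suggestions are exactly the examples a future model version most needs to learn from.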
Now, that’s all very well, very sensible and pretty exciting when you consider that accessing the right type of information quickly is one of the biggest bugbears for both agents and customers alike. It’s also a great example of leveraging the Voice of the Employee (VoE) and seeing agent intelligence in action.
But stepping back and reflecting on this does raise some questions. Not questions about the advent of HITL practices in these applications — I applaud the use of HITL for monitoring, moderating and training AI-based systems. Rather, it leaves me wondering why we aren’t leveraging the insights, experience and expertise of agents in other areas of our business where they could help improve performance and efficiency, particularly across the many ways that we interact with customers and the tools that we and they use.
Take chatbots, for example. Whether they’re rudimentary rules-based bots, conversational AI bots, or, more recently, bots powered by generative AI, chatbots are and will continue to be central to many brands’ service strategies. This is evidenced by the Deloitte Digital Global Contact Center Survey, which found that nine out of 10 global service leaders expect to invest in additional self-service capabilities in the next two years — with virtual agents and/or chatbots playing a crucial part in that mix. Moreover, McKinsey found that customer self-service bots will top the AI investment priority list for customer service leaders in the next 24 months.
The challenge is that, despite the passage of time and new technological advancements, many customers still don’t really like bots. “The State of Customer Experience” report from Genesys found that satisfaction with chatbots has declined in recent years. In 2017, 35% of consumers said they were highly satisfied with chatbots; in 2022, just 21% said the same.
They aren’t alone. According to Harvard Business Review, when a team of Oxford University academics conducted a study on customer service chatbot interactions at a telecommunications company — analyzing 35,000 chatbot interactions — they found that 66% received a 1 out of 5 satisfaction rating.
Some of the main reasons customers cite for not liking chatbots include a lack of personalization, an inability to resolve issues that are anything other than simple and straightforward, an inability to escalate to live web chat, dead ends in conversations, and a general lack of trust in the technology along with concerns about data and privacy.
However, the Oxford University academics’ research concluded that if firms are to deliver better outcomes and higher satisfaction rates for customer service chatbot interactions, “…it is important to both carefully design chatbots and consider the emotional context in which they are used, particularly in customer service interactions that involve resolving problems or handling complaints.”
I believe that one of the biggest reasons this emotional context deficit exists is that companies don’t include real customer insight and conversation experts in chatbot conception and design. Let me illustrate how this often manifests itself in practice with a couple of questions: Who in your organization talks with your customers the most? And who understands their problems, language and frustrations best?

The answer to both questions, more often than not, is your customer service team.
Now, ask yourself this: how many of those people are involved in shaping the conversations your chatbots have with your customers?

I would wager that very few, if any at all, are involved in the conception and design of the conversations that you would like your chatbots to have with your customers.
That’s a mistake.
But it’s not surprising, particularly when you consider that organizations traditionally don’t involve or consult their employees when it comes to technology decisions that affect them or their customers.
For example, a PwC global survey of about 12,000 people throughout Canada, China, Hong Kong, Germany, India, Mexico, the UK and the US found that 90% of C-suite executives believe their company pays attention to people’s needs when introducing new technology, but only about half (53%) of staff say the same. Moreover, 73% of the people surveyed report that they know of systems that would help them produce higher quality work, but many executives and leaders aren’t tapping into the collective intelligence of their employees.
This must change. And not just in the domain of chatbot conception, design and effectiveness.
To be successful — and to drive better outcomes for our customers, employees and businesses — we need to go further and be more intentional about how we leverage the collective intelligence, experience and expertise of the network of biological supercomputers that already reside within our organizations, i.e., our people and particularly our customer service agents.
That could come through enhanced and extended use of techniques like VoE initiatives to better gather their valuable input and insights, or, my preferred option, through the proactive involvement of customer service agents in the conception and design of the many ways that we interact with customers and the tools that we and they use.
Only then will we be able to harness the power of both types of AI: artificial intelligence and agent intelligence.
To learn more, check out Adrian’s blog, download a podcast episode or email him.