Everyone is talking about AI. And I mean everyone. At a brunch attended by a few octogenarians (and me) in the past few weeks, one guest, who admittedly was not quite with it, started ranting about AI. It was clear that he didn’t understand its capabilities and vulnerabilities because he kept calling it “A1.” As tempted as I was to tell him that it’s not a storied steak sauce, I held my tongue. It was just easier (and more polite) than telling him he was wrong.
This blog was written by Martha Buyer, Principal and Owner of Law Offices of Martha Buyer, PLLC.
Artificial intelligence has earned — and will continue to earn — its place in our lives, whether that’s personal, governmental or enterprise. But while AI offers great promise and capabilities, it’s not without significant vulnerabilities. Before diving into its place in the world of customer experience (CX), please consider a few basic strengths and weaknesses. This list is far from exhaustive, but in the context of CX, these are the ones that jump out most readily.
While I am hardly an Eeyore who doesn’t believe in deploying developing technologies, I remain cautious because the opportunities for misuse of AI-generated data create significant risks. My concern centers on the fact that if the underlying data isn’t good (for any number of reasons), the output is likely to be flawed at best and — at worst — dangerous. (Read: It could result in costly litigation in terms of time, money and, of course, aggravation.) And these vulnerabilities grow not only as the number of variables increases, but also as reliance on tech-driven metrics increases.
From the earliest days of contact centers (even back when they were called “call centers”), managers have relied on statistics to manage and monitor agent output and productivity. My first job out of college was in one, and I learned in a big hurry how the system could be gamed.
At the end of each month, the agents were ranked. Everyone in our group knew that the agent who finished second every month was smarter than everyone else in the group. But while the agent who finished first was rewarded with bonuses and other opportunities (until she went to jail, but that’s another story), the hero was really the agent who finished second.
How did this happen? As it turned out, the agent who finished first routinely hung up on complex calls so that she could generate the largest number of calls answered. This left callers who had a problem extra annoyed because they had to call back to get the assistance they needed. The “top” agent wasn’t providing quality customer service; she was simply building her numbers at the expense of everyone else. It wasn’t until a manager heard her do this that she was found out and summarily fired. But because management was focused on the numbers — not the quality of service being provided — she got away with it for a long time.
My takeaway from this experience (aside from desperately wanting a different job) is that if you’re not measuring the right stuff, the statistics are meaningless. Now that algorithms are so much more complex than they were in days of yore, the quality of the data collected matters even more.
There are no right answers here, but thoughtful consideration is required.
There’s also an important distinction between using AI tools to manage customer interactions (outward facing) vs. using them to manage employee interactions (inward facing). Many employers see that AI-based tools and other automated functions can handle the often-cumbersome tasks of employee scheduling, timekeeping, tracking employee location and calculating wages. And while such tools may be useful for measuring employee productivity, as soon as the employer makes the leap to replace and/or reinforce judgment-based tools with AI tools, ethical issues can come into play in very meaningful — and potentially litigious — ways.
It’s always important to remember that AI tools have no judgment or common sense. As such, employers must recognize that using measurements like keystrokes, mouse clicks or employee presence in front of a camera do not necessarily reflect or provide accurate measurements of productivity. To judge employee performance based on these metrics alone is beyond risky. It’s reckless.
Further, the US Department of Labor’s Wage and Hour Division has recently issued guidelines for AI use in the workplace.[1] This document contains critical guidance for the employer, particularly with respect to managing the gap between what new technologies offer and what the law requires.
According to the document, “As new technological innovations emerge, the federal laws administered and enforced by W[age and] H[our] D[ivision] continue to apply, and employees are entitled to the protections these laws provide, regardless of the tools and systems used in their workplaces.”
On the outward-facing side, it’s important to ask: Are the efficiencies that AI tools provide coming at the cost of “good” customer service? It’s difficult to imagine that any customer ending up at a contact center is happy about it.
To me, contact centers often exist to insulate companies from their customers rather than to actually serve them. Given this skepticism, the more customer “un-friendly” a contact center is, the less likely I am to want to do business with that entity. In most cases, however, there’s no choice.
While AI tools may streamline service delivery and provide potentially useful metrics (the keyword again is potentially), AI systems aren’t yet sophisticated enough to replace the value and accuracy of human interactions. That’s not to say that AI tools don’t have a role in the contact center. But that role must be in conjunction with complementary tools and processes.
Blair Pleasant, President and Principal Analyst at COMMfusion, recently commented, “With time, the technology will certainly continue to evolve and become increasingly sophisticated, [but] at this point, generative AI may be too unreliable and potentially inaccurate to use as the primary customer interface. Without sufficiently strong and appropriate guardrails, including properly trained models, as an example, there’s a great risk that the AI output will provide misinformation, which can be damaging to the brand and the customer relationship. If you’re eager to start using AI in your customer-facing tech, the best solution for now is to use a combination of AI technologies to get the benefits of generative AI while minimizing the risk.”
AI tools are precisely that — tools. Reliance on them can certainly aid in the handling of some complex number-crunching analytical functions within the enterprise. But even when used to perform simple tasks, absolute vigilance is required.
To learn more about AI use in the enterprise, visit Martha Buyer online or contact her directly via email.
[1] Field Assistance Bulletin No. 2024-1, “Artificial Intelligence and Automated Systems in the Workplace under the Fair Labor Standards Act and Other Federal Labor Standards.”