This blog was written by Martha Buyer, Principal and Owner of Law Offices of Martha Buyer, PLLC.

Everyone is talking about AI. And I mean everyone. At a brunch attended by a few octogenarians (and me) in the past few weeks, one guest, who admittedly was not quite with it, started ranting about AI. It was clear that he didn’t understand its capabilities and vulnerabilities because he kept calling it “A1.” As tempted as I was to tell him that it’s not a storied steak sauce, I held my tongue. It was just easier (and more polite) than telling him he was wrong.

Artificial intelligence has earned — and will continue to earn — its place in our lives, whether that’s personal, governmental or enterprise. But while AI offers great promise and capabilities, it’s not without significant vulnerabilities. Before diving into its place in the world of customer experience (CX), please consider a few basic strengths and weaknesses. This list is far from exhaustive, but in the context of CX, these are the ones that jump out most readily.

Strengths of AI in Customer Experience

  1. AI tools can crunch huge volumes of data with relative ease. They can be used to power through complex algorithms to yield results that might be very useful in managing processes and outcomes.
    When properly deployed, monitored and managed, AI tools might add efficiencies to contact center operations. However, the keyword here is might, and it comes with a giant disclaimer: the results may not be the appropriate ones. To clarify, it’s critical to validate that the tools being used are generating the right outcomes. Read on to see how easily this can go off the rails.
  2. According to Forrester Research, generative AI can be used to supplement existing data with “synthetic equivalents while preserving confidentiality and privacy.” The keyword here is supplement, and I can’t stress enough how important this word is in this context.

Vulnerabilities of AI in Customer Experience

While I am hardly an Eeyore who doesn’t believe in deploying developing technologies, I remain cautious because the opportunities for misuse of AI-generated data create significant risks. Those risks stem primarily from the fact that if the underlying data isn’t good (for any number of reasons), the output is also likely to be flawed at best and, at worst, dangerous. (Read: It could result in litigation that is costly in terms of time, money and, of course, aggravation.) And vulnerabilities increase not only as more variables are introduced, but also with increased reliance on tech-driven metrics.

  1. The more complex the algorithm, the greater the likelihood of chinks in the armor, allowing malicious actors access to information that should be — and is expected to be — confidential. The larger the dataset, the greater the danger from attacks, because such attacks can wreak havoc on a much larger scale. As such, extra vigilance is required.
  2. As news of more attacks reaches more people, those people, both inside and outside the enterprise, will have decreased trust in the process (whatever the process is), even when AI tools did not actually contribute to the problems.
  3. Without proper guardrails (and who knows if such tools, including written guidelines or policies, even exist), copyright and other intellectual property rights can and will be violated, subjecting the entity relying on the AI output to litigation. I suspect this is less of an issue in the contact center context, but it’s worth considering.
  4. The presence of AI tools in the contact center also creates additional opportunities for the rumor mill to run wild. Talk of employees or contractors being replaced by AI can create opportunities for improved productivity while also creating additional stress and challenges to employee morale. It’s not hard to imagine how such speculation could make people fear for their livelihoods, whether their concerns are valid or not.
  5. Although there hasn’t been any well-publicized litigation to date as the result of AI system failures, or of human failures driven by reliance on AI outcomes, it’s surely coming. Humans make plenty of errors on our own, but we also rely, to varying degrees, on systems built on historic data. After all, AI systems can only use data that was collected in the past, which makes reliance on their output ripe for legal challenges and sizable jury verdicts.

The Role of AI Tools in the Contact Center

From the earliest days of contact centers (even back when they were called “call centers”), managers have relied on statistics to manage and monitor agent output and productivity. My first job out of college was in one, and I learned in a big hurry how the system could be gamed.

At the end of each month, the agents were ranked. Everyone in our group knew that the agent who finished second every month was smarter than everyone else in the group. But while the agent who finished first was rewarded with bonuses and other opportunities (until she went to jail, but that’s another story), the hero was really the agent who finished second.

How did this happen? As it turned out, the agent who finished first routinely hung up on complex calls so that she could generate the largest number of calls answered. This left callers whose problems went unresolved extra annoyed, because they had to call back to get the assistance they needed. The “top” agent wasn’t providing quality customer service; she was simply building her numbers at the expense of everyone else. It wasn’t until a manager heard her do this that she was found out and summarily fired. But because management was focused on the numbers, not the quality of service being provided, she got away with it for a long time.

My takeaway from this experience (aside from desperately wanting a different job) is that if you’re not measuring the right stuff, the statistics are meaningless. Now that algorithms are so much more complex than they were in days of yore, the quality of the data collected is even more important.

  • Are the right questions being asked?
  • How old is the data that’s being used to potentially project outcomes?
  • How are various factors in the algorithm weighted to yield an outcome?
  • Are the questions loaded to guarantee a particular result, or do they provide information useful for improving processes?

There are no right answers here, but thoughtful consideration is required.
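
To make that concrete, here is a minimal, purely hypothetical sketch in Python. The agent names, call counts and penalty weight are all invented for illustration; the point is only that the same monthly data can produce opposite rankings depending on what gets measured and how it is weighted.

    # Hypothetical monthly stats for two agents; every number here is invented for illustration.
    # "callbacks" counts callers who had to phone again because their issue wasn't resolved.
    agents = [
        {"name": "Agent A", "calls_handled": 1200, "callbacks": 400},  # games the system by dumping hard calls
        {"name": "Agent B", "calls_handled": 900, "callbacks": 45},    # resolves issues the first time
    ]

    def raw_score(agent):
        # Naive metric: reward sheer call volume.
        return agent["calls_handled"]

    def quality_weighted_score(agent, callback_penalty=2.0):
        # Quality-adjusted metric: count only first-contact resolutions and
        # penalize callbacks, which represent unresolved (and annoyed) customers.
        resolved_first_time = agent["calls_handled"] - agent["callbacks"]
        return resolved_first_time - callback_penalty * agent["callbacks"]

    for score_fn in (raw_score, quality_weighted_score):
        ranking = sorted(agents, key=score_fn, reverse=True)
        print(score_fn.__name__, "->", [a["name"] for a in ranking])

    # raw_score -> ['Agent A', 'Agent B']                  (the metric-gamer "wins")
    # quality_weighted_score -> ['Agent B', 'Agent A']     (the better agent wins)

The callback_penalty weight is exactly the kind of judgment call the weighting question above is getting at: pick it carelessly, or fail to track callbacks at all, and the “top” agent from my old call center wins every month.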

There’s also an important distinction between using AI tools to manage customer interactions (outward facing) vs. using them to manage employee interactions (inward facing). Many employers see that AI-based tools and other automated functions can handle the often-cumbersome tasks of employee scheduling, timekeeping, tracking employee location and calculating wages. And while such tools may be useful for measuring employee productivity, as soon as the employer makes the leap to replace and/or reinforce judgment-based tools with AI tools, ethical issues can come into play in very meaningful — and potentially litigious — ways.

It’s always important to remember that AI tools have no judgment or common sense. As such, employers must recognize that measurements like keystrokes, mouse clicks or employee presence in front of a camera do not necessarily reflect or provide accurate measures of productivity. To judge employee performance based on these metrics alone is beyond risky. It’s reckless.

Further, the US Department of Labor’s Wage and Hour Division has recently issued guidelines for AI use in the workplace.1 This document contains critical guidance for the employer, particularly with respect to managing the gap between what new technologies offer and what the law requires.

According to the document, “As new technological innovations emerge, the federal laws administered and enforced by W[age and] H[our] D[ivision] continue to apply, and employees are entitled to the protections these laws provide, regardless of the tools and systems used in their workplaces.”

On the outward-facing side, it’s important to ask: Are the efficiencies that AI tools provide coming at the cost of “good” customer service? It’s difficult to imagine that any customer ending up at a contact center is happy about it.

To me, contact centers often exist to insulate companies from their customers rather than to actually serve them. Given that skepticism, the more customer “un-friendly” a contact center is, the less likely I am to want to do business with that entity. In most cases, however, there’s no choice.

While AI tools may streamline service delivery and provide potentially useful metrics (the keyword again is potentially), AI systems aren’t yet sophisticated enough to replace the value and accuracy of human interactions. That’s not to say that AI tools don’t have a role in the contact center. But that role must be played in conjunction with complementary tools and processes.

Blair Pleasant, President and Principal Analyst at COMMfusion, recently commented, “With time, the technology will certainly continue to evolve and become increasingly sophisticated, but at this point, generative AI may be too unreliable and potentially inaccurate to use as the primary customer interface. Without sufficiently strong and appropriate guardrails, including properly trained models, as an example, there’s a great risk that the AI output will provide misinformation, which can be damaging to the brand and the customer relationship. If you’re eager to start using AI in your customer-facing tech, the best solution for now is to use a combination of AI technologies to get the benefits of generative AI while minimizing the risk.”

AI tools are precisely that — tools. Reliance on them can certainly aid in the handling of some complex number-crunching analytical functions within the enterprise. But even when used to perform simple tasks, absolute vigilance is required.

To learn more about AI use in the enterprise, visit Martha Buyer online or contact her directly via email.

1 Field Assistance Bulletin No. 2024-1, “Artificial Intelligence and Automated Systems in the Workplace under the Fair Labor Standards Act and Other Federal Labor Standards.”