Chatbots are becoming more conversational, and it’s not always clear to consumers whether they’re interacting with a machine or a human. The orchestration of complex customer relationships, such as triaging incoming calls or assigning personalized offers, increasingly relies on artificial intelligence (AI). And that raises questions about the fairness of the decisions machines make.
Most AI-powered programs operate inside so-called “black boxes.” This lack of transparency can erode consumer trust and damage your brand.
The First Step in Transparency
GDPR privacy regulations in the European Union are much more stringent than those in the US, and many thought leaders are pushing for US practices to meet that GDPR level of privacy. In July 2019, the state of California will implement a new bot transparency law, an approach that likely will spread nationally. The law is designed to protect consumers and voters from deception by bots posing as real people. For example, laws like this can stop malicious bots from being used to sway elections through abuse of social media platforms.
Bot transparency laws may do little to slow the rapid deployment of bots in business. But they could affect the effectiveness of bots used for sales or marketing, especially over the phone. For example, a person who receives a call from a bot that says upfront “I’m a robot” might be inclined to hang up, or be more resistant to the content of the message. In the long run, “bot warnings” could become the new software “terms and conditions”: legal language that few people actually read.
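To make the idea concrete, here is a minimal sketch of what an up-front disclosure might look like in a bot’s first turn. The wording and the placeholder reply logic are purely illustrative assumptions, not any particular framework.

```python
# A minimal sketch of up-front bot disclosure. The disclosure wording
# and the canned reply below are illustrative assumptions only.
DISCLOSURE = "Hi! I'm an automated assistant, not a human."

def first_reply(user_message: str) -> str:
    """Prefix the opening turn of every conversation with the disclosure."""
    answer = "I can help with billing, orders, and returns."  # placeholder logic
    return f"{DISCLOSURE} {answer}"

print(first_reply("I need help with my bill"))
```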
When More AI Transparency Leads to Less Trust
The Harvard Business Review cited a study revealing that, while users won’t accept blanket black-box models, they don’t need or want extremely high levels of transparency either. In fact, too much transparency can backfire and erode trust. In the study, students received exam grades from peer assessment: some were simply told they’d received a grade from their peers, while others were given greater transparency, including details of how the grades were calculated and adjusted for peer bias. All were then asked to rate their trust in the grading process.
The results showed that when grade expectations were met, transparency had no impact. But when expectations were violated, the students looked to the algorithm’s inner workings to explain the grade. The study confirmed a human tendency to expect a clear explanation of an outcome when expectations are violated. Yet if people believe the underlying process is fair, the impact on trust is reduced.
White Box Versus Black Box
Decisions that machines make should be explainable when required. Think of AI-powered medical imaging in healthcare, or self-driving cars in the automotive industry. Explaining how decisions are made is increasingly important in every industry where AI is proving effective, including customer experience.
“Explainable AI” is becoming one of the major technological challenges of the ongoing AI revolution. The US Defense Advanced Research Projects Agency (DARPA) and other research institutes are investing heavily in ways for algorithms to help humans interpret AI-powered decisions. It’s a paradox: modern machine learning systems, such as deep neural networks, are fed a profusion of data, draw on enormous compute power, and deliver unprecedented results. Yet it has never been more difficult to explain how they arrive at their decisions.
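As a rough illustration of the paradox, here is a minimal sketch in Python using scikit-learn on synthetic data (all names and numbers are invented): the network predicts well, yet its learned parameters offer a human no direct explanation of any individual decision.

```python
# A minimal sketch of the "black box" problem, on purely synthetic data.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))                 # 1,000 interactions, 20 features
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)   # the hidden "true" rule

model = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500,
                      random_state=0).fit(X, y)

# The model predicts well...
print("accuracy:", model.score(X, y))

# ...but its parameters are just matrices of numbers. Nothing here tells
# a human *why* a given interaction was classified the way it was.
print("parameters:", sum(w.size for w in model.coefs_))
```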
For instance, seamless collaboration between bots and humans requires complex predictive routing algorithms that match each customer with the best representative, based on thousands of criteria. Genesys uses machine learning to make these routing decisions where rules fall short. The business performance of such an AI-powered system is what matters most. But many of our customers have administrators who want to know how that routing happens; they want transparency into the factors used to make those decisions. It’s a question of white box, which is total transparency, versus black box, which reveals nothing about how the algorithm works.
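The actual Genesys algorithms are proprietary; as a hedged sketch of the general pattern, the following trains a model on synthetic interaction history and scores each available agent for an incoming customer. The feature names, model choice, and data are assumptions for illustration only.

```python
# Sketch of ML-based routing: score every available agent for an incoming
# interaction and pick the best predicted outcome. Not the Genesys code.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(1)

# Synthetic history: columns are customer_tenure, issue_complexity,
# agent_tenure, agent_skill; the label is "was the issue resolved?"
X_hist = rng.uniform(0, 1, size=(5000, 4))
y_hist = (X_hist[:, 3] - 0.7 * X_hist[:, 1]
          + rng.normal(0, 0.1, 5000) > 0).astype(int)

model = GradientBoostingClassifier(random_state=1).fit(X_hist, y_hist)

def best_agent(customer, agents):
    """Score every available agent and return the best predicted match."""
    rows = np.array([[customer["tenure"], customer["complexity"],
                      a["tenure"], a["skill"]] for a in agents])
    p_resolve = model.predict_proba(rows)[:, 1]   # P(resolution) per agent
    i = int(np.argmax(p_resolve))
    return agents[i]["id"], float(p_resolve[i])

customer = {"tenure": 0.2, "complexity": 0.9}
agents = [{"id": "a1", "tenure": 0.5, "skill": 0.4},
          {"id": "a2", "tenure": 0.8, "skill": 0.9}]
print(best_agent(customer, agents))  # expect a2: higher skill for a hard issue
```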
These expectations encouraged us to build a good level of transparency into our predictive routing. This transparency explains which factors play a role in optimal matching decisions, and to what degree. With that insight, businesses can improve employee training or add constraints to the system to avoid potential biases.
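One standard, model-agnostic way to produce that kind of factor-level report is permutation importance: shuffle one factor at a time and measure how much the model’s accuracy suffers. This is a sketch of the general technique, not necessarily the mechanism Genesys uses, and it reuses model, X_hist, and y_hist from the routing sketch above.

```python
# Rank factors by how much shuffling each one hurts the model.
# Reuses model, X_hist, and y_hist from the routing sketch above.
from sklearn.inspection import permutation_importance

feature_names = ["customer_tenure", "issue_complexity",
                 "agent_tenure", "agent_skill"]

result = permutation_importance(model, X_hist, y_hist,
                                n_repeats=10, random_state=1)

for name, score in sorted(zip(feature_names, result.importances_mean),
                          key=lambda t: -t[1]):
    print(f"{name:17s} {score:.3f}")
# A report like this lets an administrator see, for example, that
# agent skill drives matches far more than customer tenure does.
```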
Find the Right Level of AI Transparency
Ensuring that customers know when they’re conversing with a bot is an important step in integrating transparency into your AI strategy. But there are broader issues as well. A white box approach that reveals all data won’t necessarily give customers usable information, yet a black box is typically not acceptable either. A better approach may be to provide customers with basic insight into the factors that drive algorithmic decisions, and how that input is analyzed.
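As one hypothetical way to strike that balance, a system could surface only the few factors that weighed most in a decision, in plain language. The factor names and weights below are invented for illustration.

```python
# A sketch of the middle ground: show the top factors behind a decision
# in plain language, without exposing the full model internals.
def explain_decision(factor_weights: dict[str, float], top_k: int = 3) -> str:
    top = sorted(factor_weights.items(), key=lambda kv: -abs(kv[1]))[:top_k]
    parts = [f"{name} ({weight:+.0%})" for name, weight in top]
    return "Main factors in this decision: " + ", ".join(parts)

# Invented example weights for illustration only.
print(explain_decision({"agent skill match": 0.42, "wait time": 0.18,
                        "customer history": 0.07, "time of day": 0.02}))
```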
The challenges of transparency make it incumbent upon businesses to look at AI ethics holistically—from the point of view of customer rights and expectations.
This blog is part of an ongoing series that explores issues in AI ethics. Recent posts include The Social Benefits of Artificial Intelligence and Grappling With Fairness in AI Predictive Models.