Fairness is one of the pillars of the artificial intelligence (AI) ethics guidelines at Genesys. Often discussed under the banner of “moral AI,” fairness is akin to the theories of the 18th-century German philosopher Immanuel Kant. He defined moral laws as testable absolutes that are true for all and whose validity doesn’t depend on any ulterior motive or outcome. “Thou shalt not lie” is a good example of a moral rule, or maxim, and it’s a good place to start this discussion.
The first blog in this series on AI ethics discussed the social benefits of AI and how it is enabling life-changing improvements worldwide. In this second article in the series, we examine fairness in AI ethics and the problem of bias.
Fairness of Opportunity and Fairness of Outcome
One way to look at AI and fairness is by the type of fairness being addressed: fairness of opportunity or fairness of outcome. Both types have their place in society.
Fairness of opportunity is found in sports, where everyone plays by the rules. The outcome is not predetermined, but everyone accepts the result, win or lose, based on the fairness of the game.
Fairness of outcome is found at the family dinner table. When the food is served, everyone gets a fair share. In other words, regardless of how each family member participated in preparing the meal, everyone gets an equal or fair outcome.
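In machine learning terms, these two notions correspond loosely to common group fairness metrics: fairness of outcome resembles comparing decision rates across groups, while fairness of opportunity resembles comparing true positive rates among qualified candidates. The following is a minimal illustrative sketch of that mapping, not a Genesys implementation; the toy data, group labels, and function names are all assumptions.

```python
# Illustrative only: mapping the two fairness notions onto common
# group-fairness metrics for a binary classifier.
# y_true = actual outcomes, y_pred = model decisions, group = protected attribute.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # two hypothetical groups

def selection_rate(pred, mask):
    # "Fairness of outcome": share of positive decisions in a group.
    return pred[mask].mean()

def true_positive_rate(true, pred, mask):
    # "Fairness of opportunity": among qualified members, share approved.
    qualified = mask & (true == 1)
    return pred[qualified].mean()

for g in (0, 1):
    mask = group == g
    print(f"group {g}: outcome rate = {selection_rate(y_pred, mask):.2f}, "
          f"opportunity (TPR) = {true_positive_rate(y_true, y_pred, mask):.2f}")
```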
A predictive model, such as a machine learning model, will produce a fair outcome, assuming fair opportunity based on:
- The data it learns from
- The questions it is asked to answer
- The priorities or values of the people who build and tune it
The key question is not “Is there bias?” It’s whether the bias created by the data, questions, and priorities or values is a morally neutral bias. If so, then the outcomes are fair. It’s also important to distinguish socially unjust bias from the technical weights and biases within the domain of AI.
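To make that second distinction concrete: in the technical sense, a “bias” is simply a learned parameter, such as the intercept of a linear model, and carries no social meaning on its own. A minimal sketch, with all numbers illustrative:

```python
# "Bias" in the technical ML sense: the intercept term b in a linear model,
# y_hat = w . x + b -- a tunable parameter, not a social judgment.
import numpy as np

w = np.array([0.4, -1.2])   # learned weights
b = 0.7                     # learned bias (intercept)
x = np.array([1.0, 0.5])
y_hat = w @ x + b           # 0.4*1.0 - 1.2*0.5 + 0.7 = 0.5
print(y_hat)
```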
In practice, then, tuning an AI predictive model to achieve favorable outcomes is explicitly executed in three ways (sketched in code after this list):
- Curating the data the model learns from
- Framing the questions the model is asked to answer
- Setting the priorities or values, such as decision thresholds, applied to its output
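Here is a minimal sketch of those three levers in code. It is illustrative only, assuming scikit-learn and synthetic data; the sample weights and the threshold are hypothetical choices, not recommendations.

```python
# Hypothetical sketch of the three tuning levers named above; the dataset,
# labels, weights, and threshold are all assumptions for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))                   # candidate features

# Lever 2) Question: choose what the model is asked to predict (a derived label).
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

# Lever 1) Data: reweight examples to correct an assumed sampling skew.
sample_weight = np.where(y == 1, 1.0, 2.0)

model = LogisticRegression().fit(X, y, sample_weight=sample_weight)

# Lever 3) Priorities/values: the decision threshold encodes our trade-off.
threshold = 0.6                                 # stricter than the default 0.5
decisions = model.predict_proba(X)[:, 1] >= threshold
print(f"approval rate at threshold {threshold}: {decisions.mean():.2f}")
```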
Where Good Intentions Go Wrong
Even when using this methodology, tuning AI to eliminate harmful social bias involves a gray area that can be problematic. Pre-ordaining an outcome because you think it’s fair, and engineering the bias toward it, creates a self-fulfilling prophecy. This can have very negative implications: it suggests that heavy-handedly “gaming the system” or “cheating” in the name of justice is justified. And if we load systems with biased data, they will be biased.
Google’s AI chief, John Giannandrea, also sees the dangers in algorithms that are used to make millions of decisions every minute. Cloud-based machine learning systems are designed to be much easier to use than the underlying algorithms. While this makes the technology more accessible, it could also make it easier for harmful bias to creep in. For example, news feed algorithms can influence public perceptions. Other algorithms can distort the kind of medical care a person receives or how they are treated in the criminal justice system.
Facebook has been criticized in recent years for technology that allowed landlords to discriminate on the basis of race and employers to discriminate on the basis of age. The technology also let employers exclude female candidates from recruiting campaigns. All of these areas are protected by law in many countries. There are also global issues to consider: an MIT Media Lab experiment analyzed 40 million decisions on moral preferences, revealing how widely ethics diverge across cultures.
Some suggest that the way to eliminate bias is to publish details of the data or the algorithm used. But many of the most powerful emerging machine learning techniques are so complex and opaque in how they work that they defy careful examination. In our opinion, the best place to start is at the beginning, with a strong nod to Kant’s centuries-old moral imperatives.
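One thing that remains possible even when a model’s internals are opaque is auditing its outcomes. Below is a hypothetical black-box audit sketch; the model stub, applicant data, and group labels are invented for illustration.

```python
# Hypothetical black-box audit: we can't read the model's internals,
# but we can still measure how its decisions differ across groups.
import numpy as np

def opaque_model(x):
    # Stand-in for a deployed model we can only query, not inspect.
    return int(x.sum() > 1.0)

rng = np.random.default_rng(1)
applicants = rng.normal(size=(500, 4))
groups = rng.integers(0, 2, size=500)   # assumed binary protected attribute

decisions = np.array([opaque_model(x) for x in applicants])
rates = [decisions[groups == g].mean() for g in (0, 1)]
print(f"approval rates by group: {rates[0]:.2f} vs {rates[1]:.2f}")
# A large gap flags potentially unjust bias -- without opening the black box.
```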
When AI Leads to Fair Outcomes
We’re living in the experience age, when how things happen matters as much as what is happening. With more discussion and understanding of predictive modeling and machine learning, businesses can better foresee potential issues and build fairness into their AI predictive models. After all, a positive customer experience depends on fair outcomes.
Follow our upcoming blog series as we continue to explore AI ethics topics. Join the discussion on AI ethics.
Subscribe to our free newsletter and get blog updates in your inbox.