Welcome to the age of the discrimination engine. Recently, the US federal government drafted a bill that would allow lawmakers to scrutinize algorithmic bias. Some are in favor of this, since we’ve already seen artificial intelligence (AI) enable discrimination against women and minorities. Others, like Daniel Castro, a spokesperson for the Information Technology and Innovation Foundation, believe the law only stigmatizes AI and discourages its use.
But this is exactly what machine learning algorithms do: they apply weights and biases to certain features to spot patterns and make predictions. Algorithmic analysis of weighted and biased feature sets can attempt to predict outcomes, but it won’t predict them perfectly. It doesn’t have to; it only needs to predict outcomes with enough accuracy and confidence to significantly improve business outcomes.
This could mean the algorithms only have to be right four out of five times. But what about the fifth case? What does that mean for the individuals implicated in it? Were they victims of erroneous bias or simply treated unfairly? Simply put, they were wrongly discriminated against.
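To make that concrete, here is a minimal, purely illustrative sketch of a scoring model that applies learned weights to features. The feature names, weights, and threshold are invented for the example and are not drawn from any real system.

```python
import math

# Hypothetical learned weights; every name and number here is invented
# for illustration, not taken from any real scoring system.
WEIGHTS = {
    "payment_history": 2.0,
    "reported_income": 1.5,
    "age_bucket": -0.8,  # a demographic feature the model has learned to penalize
}
BIAS = -1.0
THRESHOLD = 0.5

def score(applicant: dict) -> float:
    """Weighted sum of features, squashed into a probability-like score."""
    z = BIAS + sum(WEIGHTS[name] * applicant[name] for name in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def approve(applicant: dict) -> bool:
    """A hard yes/no decision from the score. The model only needs to be right
    often enough (say, four times out of five) to improve a business metric;
    the individuals in the remaining cases are simply misclassified."""
    return score(applicant) >= THRESHOLD

applicant = {"payment_history": 0.9, "reported_income": 0.4, "age_bucket": 1.0}
print(round(score(applicant), 3), approve(applicant))
```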
This isn’t new; it’s been happening since long before the explosion of AI solutions in the workplace. When you buy car insurance, for example, your age is a heavily weighted feature that determines the price you pay — even if you’re the safest driver in the world. High school test scores are another example; they’re used as predictors of university success and are heavily weighted and biased within admissions models. In fact, a growing body of evidence suggests early grades don’t predict success in university — or in life. Both examples show that systemic, unfair bias is nothing new; it’s prevalent and shows no signs of disappearing.
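As a rough illustration of how a heavily weighted age feature can swamp individual merit, consider a toy premium calculation. Every figure below is made up for the example and bears no relation to any real insurer’s pricing.

```python
# Toy premium model: illustrative numbers only, not a real actuarial formula.
BASE_PREMIUM = 600.0
AGE_SURCHARGE = {"under_25": 900.0, "25_to_65": 0.0, "over_65": 300.0}
CLAIM_SURCHARGE_PER_CLAIM = 150.0
SAFE_DRIVER_DISCOUNT = 100.0

def annual_premium(age_bucket: str, claims: int, safe_driver: bool) -> float:
    premium = BASE_PREMIUM + AGE_SURCHARGE[age_bucket]
    premium += CLAIM_SURCHARGE_PER_CLAIM * claims
    if safe_driver:
        premium -= SAFE_DRIVER_DISCOUNT
    return premium

# The safest possible young driver still pays more than an older driver with claims.
print(annual_premium("under_25", claims=0, safe_driver=True))   # 1400.0
print(annual_premium("25_to_65", claims=2, safe_driver=False))  # 900.0
```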
But not all examples of unfair bias are the same. As a society, we’ve decided that some features shouldn’t be used when we make discriminating choices. We feel so strongly about the ethical implications that we have written them into our laws. The US Civil Rights Act of 1964 provides protections against discrimination based on race, color, religion, sex or national origin. This same law ensures protections in the areas of voting, education, employment and public accommodations.
Why single out certain features as protected and not others? Note that discrimination based on age or IQ scores isn’t explicitly included in this list. Some say these protected features are called out by law because they’re not predictive; therefore, it’s unethical to discriminate based on them.
But there’s a stronger argument for these laws, and it has nothing to do with any potentially predictive nature of a feature. I’m not arguing that these features are, or are not, predictive. Basing predictions on them should be unethical, as well as illegal, because we’ve seen the historic harm and abuse these types of erroneous or unfair biases can cause. That harm far outweighs any good that could possibly come from their potentially predictive nature.
Discrimination and How You Use Data
As AI engines consume more data in more sophisticated ways, we need to approach discrimination differently. As deep learning systems evolve, it’s becoming ever harder for human beings to determine, at a granular level, whether these engines reached their conclusions in an inappropriately biased way. The sheer volume of data and the number of hidden layers could make this nearly impossible.
One litmus test would be to ask whether the actions triggered by these engines could be interpreted as exclusionary or punitive. This consequential approach helps us pierce the veil of these increasingly complex and opaque systems.
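One way to apply that litmus test in practice, without opening the black box, is to audit the actions the engine actually triggers and compare how often each group receives the exclusionary or punitive outcome. The sketch below uses the common four-fifths rule of thumb as a flag for review; the group labels, decision records, and threshold are all hypothetical.

```python
from collections import defaultdict

def adverse_impact_ratio(decisions, group_key="group", denied_value="deny"):
    """Return the ratio of the lowest to the highest approval rate across groups,
    plus the per-group rates. Decisions are dicts like {"group": ..., "action": ...}."""
    totals, approved = defaultdict(int), defaultdict(int)
    for d in decisions:
        totals[d[group_key]] += 1
        if d["action"] != denied_value:
            approved[d[group_key]] += 1
    rates = {g: approved[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values()), rates

# Hypothetical decision log produced by an opaque engine.
decisions = [
    {"group": "A", "action": "approve"}, {"group": "A", "action": "approve"},
    {"group": "A", "action": "deny"},
    {"group": "B", "action": "approve"}, {"group": "B", "action": "deny"},
    {"group": "B", "action": "deny"},
]
ratio, rates = adverse_impact_ratio(decisions)
print(rates)                            # per-group approval rates
print("flag for review:", ratio < 0.8)  # rule-of-thumb threshold, not legal advice
```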
For example, I was born in a small town on the northern coast of Nova Scotia. I’m sure this minuscule bit of information is tucked away in myriad databases accessible to many companies and agencies. It’s a feature that can be weighted and biased, and an opportunity for me to be discriminated against. But what could those companies and agencies do with this type of discrimination? If the ensuing action were exclusionary or punitive, I would say that I was a victim of unfair and unethical discrimination.
On the other hand, if this discriminating feature was used to play on-hold music that reminded me of my childhood, I might become more loyal to this company’s brand and service offerings. In both cases, a machine learning engine — a discrimination engine — used the same feature to trigger a course of action. It’s not necessarily the bias that’s unethical — it’s the action that the bias triggers.
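A small, contrived sketch of that contrast: the same invented birthplace feature drives a harmless personalization in one function and an exclusionary routing decision in the other. Both functions, and their inputs, are hypothetical.

```python
def choose_hold_music(birthplace_region: str) -> str:
    """Benign use of the bias: nudge the customer experience."""
    playlists = {"nova_scotia_coast": "maritime_folk", "default": "soft_jazz"}
    return playlists.get(birthplace_region, playlists["default"])

def route_credit_application(birthplace_region: str, base_score: float) -> str:
    """Punitive use of the same feature: the bias now excludes the customer."""
    penalty = 0.2 if birthplace_region == "nova_scotia_coast" else 0.0
    return "manual_review" if base_score - penalty < 0.5 else "auto_approve"

print(choose_hold_music("nova_scotia_coast"))              # maritime_folk
print(route_credit_application("nova_scotia_coast", 0.6))  # manual_review
```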
Even seemingly innocuous uses of bias should be suspect and subject to scrutiny. The most well-intentioned companies should exercise due diligence when basing actions on the output of machine learning algorithms. At the same time, we can’t bury our heads in the sand; we can’t hold back advancements of technology. What we can do is face the future with eyes wide open and engage in a well-informed and, hopefully, unbiased debate as we enter the age of the discrimination engine.
Stay up to date on our AI ethics blog series and join the conversation on AI ethics online.