The General Data Protection Regulation (GDPR) was recently introduced in Europe and has changed the landscape of data privacy. With a focus on an individual’s right to privacy, the EU has taken the global lead in protecting individual data. This post isn’t a comprehensive guide to GDPR; rather, it’s an overview of some specific GDPR concepts that should be considered regarding AI and its ethical implications.
AI and GDPR Moving Forward Together
GDPR appears to create a large hurdle for AI implementation, but it’s also an opportunity to ensure that your AI maximizes the privacy of the individuals affected. If it’s embraced by the companies that process data, GDPR will eventually build public trust in AI. Transparent use of AI, with a focus on privacy, will make it much more palatable to a general public that has grown wary of data breaches and AI. Data privacy regulations continue to evolve, but GDPR is here to stay and will be regulating AI for a long time to come.
In the MIT Review article Humans + Bots, Djamel Mostafa from Orange Bank noted the difficulty that GDPR’s data collection regulations pose for AI tools meant to elevate the bank’s customer experience. “These are complex systems based on deep learning and artificial neural networks, and are still something of a black box,” Mostafa said, “which does not always comply with the ‘explainability’ requirement in the banking sector for instance.”
More data means not just collecting it in more places, but holding it for longer periods of time. With GDPR stipulating that data not be held longer than needed for its stated purpose, it is unclear whether innovation is being stifled. And if this data does improve the customer experience, at what cost to a user’s privacy does that improvement come? User data is essential to improving AI accuracy in personalization for the customer experience, and it can take years to accrue enough data for credible insights into the customer journey. An expiration date on the data could compromise the effectiveness of the tool. As the MIT Review noted, there is a fine line in using AI for a level of customer personalization that could be considered invasive.
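In practice, a retention limit like this is often enforced by purging records once they age past the period justified by the stated purpose. The sketch below illustrates the idea; the record structure and the one-year retention period are illustrative assumptions, not requirements taken from GDPR itself.

```python
from datetime import datetime, timedelta, timezone

# Assumed retention period for the stated purpose (illustrative only).
RETENTION = timedelta(days=365)

def purge_expired(records, now=None):
    """Keep only records still within the retention window."""
    now = now or datetime.now(timezone.utc)
    return [r for r in records if now - r["collected_at"] <= RETENTION]

# Hypothetical interaction history: each record carries a collection timestamp.
history = [
    {"customer": "A", "collected_at": datetime.now(timezone.utc) - timedelta(days=30)},
    {"customer": "B", "collected_at": datetime.now(timezone.utc) - timedelta(days=400)},
]
print([r["customer"] for r in purge_expired(history)])  # only "A" survives
```

The tension the MIT Review describes is visible even in this toy: customer B’s 400-day-old data might still have been useful for training a personalization model, but it falls outside the retention window.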
Automated Decision Making and Profiling
A set of specific GDPR provisions is targeted at AI-based decisions, specifically concerning automated decision making (“ADM”) and profiling. Under GDPR, profiling means any form of automated processing of personal data that uses personal data to evaluate certain personal aspects relating to a natural person. This may include analyzing online behavior for targeted marketing/advertisements, analyzing credit history to create a credit profile of individuals, or analyzing qualifications and online presence to assess a candidate’s skill set.
ADM is defined as the process of deciding by automated means without any human involvement. This can be anything from an algorithm that decides whether a person gets a loan, to one that decides whether a candidate should get a job interview, or which call-center agent is best suited to answer a customer’s concern.
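To make the “no human involvement” aspect concrete, here is a minimal sketch of the call-routing example: a rule picks an agent automatically, end to end. The agent names, skill tags, and queue-length tiebreaker are all illustrative assumptions, not a description of any real routing product.

```python
# Hypothetical agent pool: skills and current queue length are made up.
AGENTS = {
    "agent_1": {"skills": {"billing", "refunds"}, "queue_length": 3},
    "agent_2": {"skills": {"technical", "billing"}, "queue_length": 1},
}

def route_call(topic):
    """Automated decision: pick the least-loaded agent whose skills cover the topic."""
    eligible = [name for name, info in AGENTS.items() if topic in info["skills"]]
    if not eligible:
        return None  # no match; fall back to a human dispatcher
    return min(eligible, key=lambda name: AGENTS[name]["queue_length"])

print(route_call("billing"))  # both agents qualify; agent_2 has the shorter queue
```

Even a toy like this makes a decision about a person with no human in the loop, which is exactly what brings it within the scope of the ADM provisions discussed below.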
Your organization should keep the following topics in mind when utilizing ADM and profiling.
Information Gathering.
The first step is to ensure your organization has a clear understanding of the personal data being collected and how it’s processed. Document what data is collected, from whom, and through what channels. It’s also important to document and understand how the ADM operates: what data it uses and what decisions it makes. Note, in particular, whether the ADM has any significant effects on individuals.
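One lightweight way to capture this documentation is a structured inventory record per processing activity. The schema below is an illustrative assumption, not a GDPR-mandated format; the fields simply mirror the questions above.

```python
from dataclasses import dataclass, field

@dataclass
class ProcessingRecord:
    """Illustrative data-inventory entry for one ADM processing activity."""
    data_category: str                    # what kind of data is collected
    source: str                           # from whom / through what channel
    purpose: str                          # why it is processed
    decisions_made: list = field(default_factory=list)  # what the ADM decides
    significant_effect: bool = False      # does the decision significantly affect people?

# Hypothetical entry for the call-routing example.
inventory = [
    ProcessingRecord(
        data_category="call transcripts",
        source="customer support line",
        purpose="route calls to the best-suited agent",
        decisions_made=["agent assignment"],
        significant_effect=False,
    ),
]
```

Flagging `significant_effect` at this stage pays off later: it is the trigger for the stricter lawful-basis requirements discussed below.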
Risk Assessment.
Your organization should perform a data protection impact assessment (“DPIA”) before utilizing any ADM with personal data. The DPIA is a way for your organization to evaluate the privacy risks of your ADM processing, weighing the risks to data subjects at each step of the entire processing life-cycle. If the DPIA discovers any risks, your organization should develop and implement mitigation strategies.
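A DPIA is a documented process rather than code, but the bookkeeping behind it can be sketched as a simple risk register: score each life-cycle stage and flag high risks that still lack a mitigation. The stages, the likelihood-times-impact scoring, and the threshold below are all illustrative assumptions.

```python
# Toy DPIA risk register: likelihood and impact on a 1-5 scale (illustrative).
risks = [
    {"stage": "collection", "likelihood": 2, "impact": 3, "mitigation": "consent prompt"},
    {"stage": "storage",    "likelihood": 3, "impact": 4, "mitigation": None},
    {"stage": "deletion",   "likelihood": 1, "impact": 2, "mitigation": None},
]

THRESHOLD = 8  # assumed cutoff: scores above this require a documented mitigation

def unmitigated_high_risks(register):
    """Return life-cycle stages whose risk score exceeds the threshold with no mitigation."""
    return [r["stage"] for r in register
            if r["likelihood"] * r["impact"] > THRESHOLD and not r["mitigation"]]

print(unmitigated_high_risks(risks))  # storage scores 12 and has no mitigation yet
```

The output is the to-do list the DPIA is meant to produce: the stages where mitigation strategies still need to be developed and implemented.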
Establish the Basis for Lawful Processing.
GDPR places restrictions on ADM when there may be ‘legal’ or ‘similarly significant’ effects on individuals (for example, the right to vote, the exercise of contractual rights, or effects that influence an individual’s circumstances, behavior, or choices). If the ADM does have these significant effects, the data controller is required to obtain the consent of the individual involved or to conduct the ADM to fulfill a specific contractual obligation with that individual.
If your ADM does not have a legal or similarly significant effect on the individual, your organization can still conduct the ADM if it serves a “legitimate interest” of your organization (balanced against the rights of the individual). For example, if ADM is directing calls to call-center agents, the legitimate interest would be better resolution of customer issues, which balances well against the minimal impact on the end customer.
Managing Third Parties.
If your company uses a third-party vendor for ADM services, it’s important to carry out the appropriate due diligence. Make sure you understand the security and privacy controls that your vendor uses, and that you are comfortable with those controls. Ask your vendor whether they hold any relevant industry certifications; these can be helpful in verifying vendor assessments. You should also have your third-party vendor cooperate with you in completing your DPIA.
AI and GDPR Moving Forward
Lilian Edwards, a law professor at the University of Strathclyde, highlights that big data, as used in AI, stands in direct opposition to the purposeful limiting of data collection and retention. “It challenges transparency and the notion of consent, since you can’t consent lawfully without knowing to what purposes you’re consenting,” Edwards said. “Algorithmic transparency means you can see how the decision is reached, but you can’t with machine-learning systems because it’s not rule-based software.” The emphasis on how data is used and why it’s being collected will not go away. We must decide whether progress for the sake of progress is more important than purposeful use of data. However, this momentary halt might be the ethical roadblock needed for societal, not technical, progress.
This blog is part of an ongoing series that explores issues in AI ethics. Recent posts include Build Trust With the Right Level of AI Transparency and Grappling With Fairness in AI Predictive Models.