The growth of artificial intelligence (AI) throughout organizations makes AI ethics essential. As AI becomes more independent in decision-making and automation, your ethics strategy takes center stage. It should support innovation while maintaining the right level of oversight and control.
Ethical AI refers to the responsible design, development and deployment of AI systems in alignment with business values and societal expectations. Genesys defines ethical AI as a practice that safeguards businesses by applying AI with a purpose, adhering to data standards, mitigating bias and upholding privacy.
In this article, we explore the need for ongoing synergy between product development teams and privacy officers in a company’s AI innovation strategy.
Without a robust ethical AI strategy, companies expose themselves to risks such as public mistrust, monetary loss, litigation and missed opportunities for innovation. In fact, according to a McKinsey survey, 70% of high-performing organizations report difficulties integrating data into AI models, often due to gaps in regulatory oversight and compliance challenges. Addressing these barriers is essential to staying competitive and embracing innovation in the evolving AI landscape.
The challenge is that AI doesn’t have a moral compass. So human vigilance must be part of how the technology is built and implemented in products and services. This means the rush to innovate shouldn’t sideline guidance on how data is acquired to train AI, how and where the generated results will be used, and what permissions for use will be required.
From a lifecycle management perspective, spanning data acquisition through deletion, humans must monitor who has access to the data and what type of access they have. Additionally, the collection and use of data should be based on need and minimized accordingly.
AI ethics should go far beyond the front-end decision-making process to continually monitor these goals while supporting ongoing innovation. One important reason is the potential for drift, which occurs when an AI model’s performance degrades over time because the data it encounters deviates from the data it was trained on. For example, drift could introduce bias into output data that wasn’t apparent at the front end of an implementation.
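To make drift concrete, here’s a minimal sketch of one way a team might detect it: compare the distribution of a monitored input feature in production against the same feature at training time. The feature, statistical test and threshold here are illustrative assumptions, not a description of any particular product.

```python
# Minimal input-drift check: compare a logged numeric feature (e.g.,
# message length) between the training set and recent production traffic
# using a two-sample Kolmogorov-Smirnov test.
from scipy.stats import ks_2samp
import numpy as np

def detect_drift(train_values, live_values, alpha=0.01):
    """Flag drift when live data no longer matches the training distribution."""
    stat, p_value = ks_2samp(train_values, live_values)
    return {"statistic": stat, "p_value": p_value, "drift": p_value < alpha}

# Example: simulated training data vs. a shifted production sample
rng = np.random.default_rng(0)
train = rng.normal(loc=50, scale=10, size=5_000)  # feature at training time
live = rng.normal(loc=58, scale=12, size=1_000)   # same feature in production
print(detect_drift(train, live))  # drift=True: distributions have diverged
```

A check like this only catches input drift; teams typically pair it with outcome monitoring so degraded or biased outputs are caught even when inputs look stable.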
Looking at outcomes reveals how AI is actually being used, which is why establishing guiding principles for AI ethics is critical. Effective management requires navigating AI ethics from multiple perspectives, particularly those of privacy officers, product development teams and business leaders.
High-performing organizations frequently struggle with AI governance because of the complexity of the global regulatory environment. While they intend to move forward, they often take steps back because of gaps in governance. This increases the risk of public mistrust, leading to skepticism about AI-driven decisions. There’s also the risk of missed opportunities when operational roadblocks stall innovation during AI implementation.
AI tools in the market are open-ended, meaning they generate responses not limited to predetermined options and can reflect biases in training data. It’s imperative that privacy officers understand how data is being collected and used, and what personal information is feeding into the tool. There’s also the question of purpose: Why is this set of data needed and what output do you expect from that?
Once the inputs and outputs of data use are understood, the privacy officer moves to a risk assessment. This assessment, required under some EU regulations, weighs the harm against the benefit to the individual and to the company from using the tool.
Privacy officers must implement governance frameworks that are transparent and that give product development teams clarity on AI decision-making processes and data management. This approach mitigates risks up front and enables fast, effective product innovation.
Product developers are typically excited about exploring the potential of new AI technologies. However, they’re often less focused on issues of data privacy. This can lead to ethical blind spots and roadblocks.
One common roadblock is having a blanket privacy statement for certain technologies, such as generative AI. Because generative AI can be the foundation for many use cases, it’s important to address each one differently to understand what the tech is doing.
For example, if generative AI creates a summary of a customer interaction, is a human involved in a review of its output? Or is it being used to help evaluate a customer for a loan? Those are very different in terms of customer impact and change, depending on what the generative AI is doing.
Evaluate each AI-driven process individually, based on the criteria below (see the sketch after this list for one way to record them):
The scope of AI usage: How is AI influencing business processes and customer interactions?
Human oversight: Does a human review AI-generated outputs, or are decisions fully automated?
Risk tolerance: What are the ethical and legal implications of this particular AI application?
Privacy compliance: How will the company balance evolving regulations while fostering innovation?
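One lightweight way to operationalize these criteria is to record them per use case, so each deployment gets its own review instead of a blanket privacy statement. The sketch below captures the review as a simple data structure; the field names and example values are hypothetical, not a Genesys artifact.

```python
# Hypothetical per-use-case review record covering the criteria above:
# scope, human oversight, risk tolerance and privacy compliance.
from dataclasses import dataclass

@dataclass
class AIUseCaseReview:
    name: str
    scope: str            # which business process or customer interaction
    human_in_loop: bool   # does a person review AI output before it's used?
    risk_level: str       # e.g., "low" for summaries, "high" for loan decisions
    privacy_notes: str    # applicable regulations and data-handling decisions

summary_bot = AIUseCaseReview(
    name="interaction-summary",
    scope="summarizes customer conversations for agents",
    human_in_loop=True,
    risk_level="low",
    privacy_notes="transcripts only; no automated decisions about individuals",
)

loan_scoring = AIUseCaseReview(
    name="loan-evaluation-assist",
    scope="helps evaluate customers for loans",
    human_in_loop=True,   # fully automated scoring would raise the risk level
    risk_level="high",
    privacy_notes="subject to credit and automated-decision regulations",
)
```

Note how the two records mirror the earlier example: a summary tool and a loan-evaluation tool share the same underlying technology but receive very different risk levels and oversight requirements.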
Many companies struggle to maintain the necessary transparency throughout product development because they can’t explain how their AI models generate outputs. At Genesys, our privacy office is closely aligned with product teams throughout the full development process.
Genesys developed a structured framework to safeguard customer and company data while adhering to regulatory requirements. This framework pairs rigorous information security controls with several external privacy-focused audits. Let’s take a look at its key tenets.
We encrypt data as it moves between systems. For even more control, customers can bring their own encryption keys if they want to be the only ones with access to transcripts and recordings. For training AI models, we use an opt-in process through which customers can agree to share their transcripts to help improve some of our models, and that data is anonymized before it’s used for training.
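As a simplified illustration of what anonymizing transcripts before training can involve, here’s a sketch that redacts common PII patterns. Real pipelines typically combine pattern matching with trained entity-recognition models; the regexes and placeholder tokens here are illustrative assumptions, not the Genesys implementation.

```python
# Toy transcript anonymizer: replace common PII patterns with placeholder
# tokens before the text is used for model training.
import re

REDACTIONS = [
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),          # US SSNs
    (re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"), "[EMAIL]"),  # email addresses
    (re.compile(r"\b(?:\+?1[ .-]?)?\(?\d{3}\)?[ .-]?\d{3}[ .-]?\d{4}\b"),
     "[PHONE]"),                                              # US phone numbers
]

def anonymize(transcript: str) -> str:
    """Replace known PII patterns with placeholder tokens."""
    for pattern, token in REDACTIONS:
        transcript = pattern.sub(token, transcript)
    return transcript

print(anonymize("Reach me at jane.doe@example.com or 555-867-5309."))
# -> "Reach me at [EMAIL] or [PHONE]."
```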
As a global company, we monitor laws across many industries and countries, and we aim to ensure that we’re meeting all regulatory requirements.
As part of product development, we require annual compliance training that covers types of data, what’s considered personally identifiable information (PII) and acceptable uses of customer data. That’s because PII goes beyond obvious personal information like Social Security numbers or biometric data; it can also include operational details, such as how long agents are working and how many agents are on the system.
Natural language AI systems inherently carry bias because they’re trained on content from many different humans, so we take steps to understand how bias is created and the risk it poses. Annotators from various backgrounds test our models to guard against unintended decisions, especially given the potential for system drift over time.
These models are meant to augment human work, for example by giving tips or feedback, not necessarily to make decisions in real time. That’s why we start early and monitor continuously.
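As one illustration of what continuous monitoring can look like, the sketch below compares positive-outcome rates across annotator-supplied groups and flags large gaps for human review. The logging format and the four-fifths ratio threshold, a common fairness heuristic, are assumptions for illustration, not a Genesys-specific rule.

```python
# Recurring fairness check: compare positive-outcome rates across groups
# and flag the model for human review when the gap is large.
from collections import defaultdict

def outcome_rates(records):
    """records: iterable of (group_label, outcome) with outcome in {0, 1}."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(records, threshold=0.8):
    """Flag when the lowest group rate falls below 80% of the highest."""
    rates = outcome_rates(records)
    lowest, highest = min(rates.values()), max(rates.values())
    ratio = lowest / highest if highest else 1.0
    return {"rates": rates, "ratio": ratio, "flag": ratio < threshold}

records = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 55 + [("B", 0)] * 45
print(disparate_impact(records))  # ratio ≈ 0.69 -> flagged for human review
```

Run on a schedule against fresh production logs, a check like this catches bias that drifts in after launch, not just bias present at the front end.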
AI is a power you can wield if you’re confident of what it’s made of — what makes it work, and what makes it break. Transparency is a defining feature of the Genesys AI ethics approach.
While some businesses might be hesitant to commit to transparency and reveal to customers how they work with AI, consumers today are becoming more comfortable interacting with it. In fact, in “Customer experience in the age of AI,” 37% of CX leaders surveyed said their organization proactively communicates how they’re using AI-related data.
AI ethics isn’t just a regulatory requirement — it’s a strategic advantage. Organizations that prioritize AI transparency, fairness and accountability can be better positioned to build trust, drive innovation and maintain compliance. It’s good business and good for your customers.
For more insights, watch our on-demand webinar and Q&A session “Putting ethical AI into practice: Principles and strategies for success.”
Subscribe to our free newsletter and get blog updates in your inbox.