Artificial Intelligence (AI), as we know it today, has gone through many phases in the past 50 years, from the first bot, created with 200 lines of code in 1966, to AI being democratized in our homes with Amazon Alexa and Siri. As fascinating as those breakthroughs are, we've also witnessed the many risks that come with AI. Fortunately, our economies are progressively adapting to AI through partnerships and regulations that ensure the sound and ethical deployment and use of AI-powered applications.
A Very Brief Timeline of AI
1966: ELIZA, the first chatbot ever created, is written with 200 lines of code.
2010: Apple acquires Siri and later brings it to iOS.
2014: Amazon Alexa is released.
2015: Proofs of concept and early uses of AI begin to raise awareness of ethical concerns around the technology.
2016: Google DeepMind's AlphaGo defeats Go champion Lee Sedol in March; large tech companies consolidate their positions and guidelines; the Partnership on AI is created in September.
2018: As major data-privacy regulations take effect (GDPR enforcement begins in May), companies position themselves in the ecosystem and create various committees; the Genesys AI Ethics Initiative launches in November.
2019: Steve Wozniak, a co-founder of Apple, shares that the Apple Card's algorithm appeared to discriminate against women when setting credit limits. This illustrates that, even though AI is becoming available to the masses, we haven't yet figured out how to make it behave according to the social codes we define as acceptable in society. It's also proof that, as more data providers enter the market, we need to examine the data that's used to train the algorithms.
2020 and beyond: The need continues for accountability, and for metrics and products around data and AI ethics that identify and prevent biases.
More Partnerships Form to Prevent AI-Related Risks
Institutions: As AI-powered tools become ubiquitous across industries — chatbots and voicebots in customer service, image recognition in warehouses, autonomous cars — companies and public institutions are partnering to create safeguards against potential misuse of the technology.
The Partnership on AI, which involves more than 80 companies across industries, and the Council on the Responsible Use of AI, a collaboration between Bank of America and the Harvard Kennedy School, are just two examples of partnerships created in recent years. More partnerships will develop, and more regulations will be created, in 2020.
Data infrastructure: Partnerships aimed at improving the quality of the data that feeds AI technologies, such as the Cloud Information Model (CIM) and the Open Data Initiative (ODI), continue to form. While these partnerships are designed to promote data sharing across industries, and among players within the same industry, through data standardization, we can also expect more control from the stakeholders involved. That increased control should reduce bias as well as the security and privacy risks around data. These partnerships will play a major role in setting standards across stakeholders in 2020 to deliver more ethical AI products.
Tools and Product Development Initiatives Mitigate AI Risks
Companies like Genesys are not only taking positions on AI risks; they're also implementing frameworks and tools to develop transparent and responsible AI products. After taking a firm position on AI ethics a year ago, Genesys recently shared details on how we're executing on AI ethically through product requirement documents (PRDs). We'll also provide content to support agents through a worldwide technological transition.
In 2020, more will be done across industries to deploy tools and guidelines for building interpretable machine-learning products. The Institute for Ethical AI & Machine Learning has created the AI-RFX Procurement Framework, which implements a machine-learning maturity model based on the Responsible Machine Learning Principles for AI product development. These frameworks not only tackle bias, evaluation and data-related risks; they also push teams to formulate a worker-displacement strategy that mitigates the impact of automation on the workforce. Training and awareness-building among AI engineers, both internally and with third-party providers, will be key in 2020.
Large tech companies will also invest more in these processes and bring analytics products to market. For example, Facebook released Fairness Flow and Google released the What-If Tool; both anticipate and automate bias analysis to support ethical product development across the industry.
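To make the idea of automated bias analysis concrete, here is a minimal sketch in plain Python of one common fairness check, demographic parity. It is not the API of Fairness Flow or the What-If Tool; the data and function names are hypothetical illustrations only.

from collections import defaultdict

def demographic_parity_gap(groups, predictions):
    # Positive-prediction rate per group; the gap is the largest
    # difference between any two groups (0.0 means equal rates).
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, pred in zip(groups, predictions):
        totals[group] += 1
        positives[group] += int(pred)
    rates = {g: positives[g] / totals[g] for g in totals}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical model outputs (1 = positive outcome) for two groups.
groups = ["A", "A", "A", "B", "B", "B"]
predictions = [1, 1, 0, 1, 0, 0]
gap, rates = demographic_parity_gap(groups, predictions)
print(rates)  # group A: ~0.67, group B: ~0.33
print(gap)    # ~0.33; a large gap flags the model for human review

Production fairness tooling covers many more metrics and slices of data, but the core idea is the same: measure how model outcomes differ across groups and flag large gaps before a product ships.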
Government Involvement in AI Ethics
Government regulations will go further into enforcing explainable and transparent AI. GDPR already mandates a “right to explanation” for all high-stakes automated decisions. This is one of the first legal steps toward a more ethical AI.
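To show what an "explanation" can look like for a single automated decision, here is a minimal sketch assuming a simple linear scoring model. The features, weights and threshold below are hypothetical, not any regulator's requirement or any lender's actual model.

# Hypothetical linear scoring model for illustration only.
features = {"income": 0.8, "debt_ratio": 0.4, "account_age": 0.6}
weights = {"income": 2.0, "debt_ratio": -3.0, "account_age": 1.5}
bias = -0.5

# Per-feature contributions answer "why was it scored this way?"
contributions = {name: weights[name] * value for name, value in features.items()}
score = bias + sum(contributions.values())

for name, contrib in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name:12s} contributed {contrib:+.2f}")
decision = "approved" if score > 0 else "declined"
print(f"total score: {score:+.2f} -> {decision}")

For linear models this breakdown is exact; for more complex models, attribution techniques approximate the same kind of per-feature account of a decision.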
Governments will play a larger role in data and AI ethics, both to create the regulatory environment required for ethical AI to be developed — and to develop and share tools specific to government AI products.
External audits of AI systems will be required for all AI products in the near future. This will help alleviate growing fears about the technology.
To stay up to date, follow the Genesys AI ethics blog series and join the conversation.