Adventures in Babysitting: Why AI Ethics Matter

Putting down her mug of matcha tea, my colleague scrutinized me with a skeptic’s piercing gaze. She even threw in a dramatically raised eyebrow for effect. “So, AI ethics… Didn’t ethics matter before artificial intelligence?” she asked. “Let’s be serious,” she continued. “We work in the domain of customer experience, and that means contact centers. I am sure you don’t need me to list all the ways contact centers can act unethically. Milk has a ‘best before’ date; ethics shouldn’t. Our enthusiasm for ethics should not change because of the latest technological bauble that’s caught our eye.”

I took a sip from my cappuccino and allowed myself a moment to digest her words. I wondered if she could be right. Then I took a deep breath, put down my cup and agreed with her: Ethics has been—and always should be—important. But important doesn’t mean the same thing in all instances.

I decided to start my answer in the form of a parable:

When my 11-year-old son comes home from school, he lets himself in, fixes a snack, watches a little TV and then gets started on his homework. He doesn’t need me to tell him what to do. And often, I’m in my home office on a conference call, so he needs to be independent. That doesn’t mean I leave him alone all night when my wife and I have an evening out, though; in that case, I ask the neighbors’ 18-year-old daughter to babysit. And when my wife and I had the opportunity of a lifetime to travel to South Africa, we unfortunately couldn’t bring the kids with us. I wasn’t about to ask a teenager to look after them for a whole week, so my parents came down and stayed with them.

When I asked my colleague if this made sense to her, she replied, “Well yes, of course. But I don’t see what this has to do with AI.”

In each of these instances, I had to relinquish a level of control; I had to trust. I had to give up direct responsibility without forgoing my parental accountability. In the end, I would always be accountable for my children’s well-being, even though each case forced me to distance myself from the actual processes of care. And, of course, the further each case moved me from those processes, the more diligent I had to be in my choices of delegation. To not be more diligent would be a lapse in ethics.

My colleague sat up a little straighter and nodded; it was starting to click. I went on to drive my point home.

One of the key characteristics of AI solutions—our latest technological bauble, as she put it—is that they allow us to delegate decisions to very powerful decision engines. These engines can learn and improve their ability to make the right decisions. But that only obliges us to be more diligent in our choices of delegation. And, I repeated, “To not be more diligent would be a lapse in ethics.”

“So, you’re right,” I concluded. “Ethics has been—and always should be—important. But as we adopt technologies that let us delegate responsibilities to non-human actors, like AI-based decision engines, we must intensify our focus on AI ethics so that we don’t abandon our accountability.”

It’s important for us to keep asking, to keep challenging and to stay skeptical. Unless we challenge each other’s thoughts and ideas, we can’t grow—we can’t move toward the truth. And without that dialogue, we’re only engaging in hand-waving and virtue signaling that serves no one. It’s the struggle that keeps us honest and makes it all worthwhile.

This post is part of an ongoing series that explores issues in AI ethics. Recent posts include The Social Benefits of Artificial Intelligence and Grappling with Fairness in AI Predictive Models.

Join the discussion on AI ethics.
