Self-driving vehicles: an ethical dilemma

A car is speeding down the road when a young child runs out in front of it. The driver has a split-second decision to make: brake hard and swerve to avoid the child, risking their own life and safety, or stay on course and risk the life of the child.
If I asked you this question, you would probably have an answer that seems completely logical to you, but not everyone would answer the same way. Personal safety and the safety of others matter to all of us, yet there's not always an easy answer when you're forced to choose between the two.
And what if this car were a self-driving vehicle? How does it make the right decision in a split second and cause the least harm possible? It's an ethical dilemma with no single right answer. And whatever the vehicle decides to do, be it "right" or "wrong", whose fault is it?

An important question

Many complex AI technologies raise difficult ethical questions. Whether it's self-driving cars, facial recognition security systems, or data privacy, rapid technological advances and growing access to information are making our world ever more connected and complex. Ethics is already a key concern for many people when it comes to new technologies, and it's bound to become even more important as our technological capabilities increase.
Let’s go back to our self-driving car. How would you solve this issue? Endanger yourself, or endanger others?
We would all love a straightforward yes-or-no answer, but the truth is, there isn't one.
Some would say the vehicle's primary job is to keep the person inside it safe; others would disagree. This is exactly why the question of responsibility here is so complicated. AI can do amazing things and process vast amounts of data within seconds, but it has no conscience. We are the ones who program the "right" or "wrong" decisions into it. But if right and wrong don't mean the same thing to everyone, how do you program a self-driving car that is supposed to be used by millions of people with different moral and ethical views?
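To make that point concrete, here is a deliberately oversimplified, purely hypothetical sketch. The scenario fields, the function and the protect_occupant_first flag are all invented for illustration and are not taken from any real system. The point is that whichever value an engineer chooses for that one flag, a specific moral stance gets hard-coded on behalf of every future passenger.

from dataclasses import dataclass

@dataclass
class Scenario:
    collision_unavoidable: bool      # braking alone cannot prevent impact
    pedestrian_in_path: bool         # a person is in the vehicle's current path
    swerve_endangers_occupant: bool  # swerving shifts the risk onto the passenger

def choose_action(s: Scenario, protect_occupant_first: bool) -> str:
    """Return 'brake', 'swerve' or 'stay_course' for one simplified hard case."""
    if not s.collision_unavoidable:
        return "brake"   # the easy case: just stop in time
    if s.pedestrian_in_path and not s.swerve_endangers_occupant:
        return "swerve"  # avoiding the pedestrian costs the passenger nothing
    # The hard case: protecting the pedestrian means endangering the occupant.
    # This single line is where an ethical dilemma becomes a design decision.
    return "stay_course" if protect_occupant_first else "swerve"

# Two manufacturers shipping opposite settings would behave differently
# in exactly the situation described above.
print(choose_action(Scenario(True, True, True), protect_occupant_first=True))   # stay_course
print(choose_action(Scenario(True, True, True), protect_occupant_first=False))  # swerve

A single flag obviously can't capture a real ethical framework; the sketch only shows that the choice has to be written down somewhere, by someone.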

Differences in the understanding of ethics

Self-driving vehicles are being developed in many countries by different companies, and they don't all have the same ethical standards. In a survey on machine ethics named "Moral Machine" (https://www.nature.com/articles/s41586-018-0637-6), conducted with 2.3 million people from around the world, one thing became obvious: many of the moral principles that guide a driver's decisions vary by country. Ethics are not a universal value we all share. They vary between individuals, but also between countries and cultures. When presented with hypothetical scenarios, respondents from different regions and cultures made different choices depending on whether the pedestrian was young or old, female or male, a group or a single individual; their choices were also linked to the economic and political stability of their regions.
Even though there are organizations and groups dedicated to the study of ethics in self-driving vehicles and other AI technologies, there is still no uniform worldwide ethical code that everyone agrees on, and there might never be one.

A complex reality

The reality of advanced AI technologies is a complex one. We created these machines and made them intelligent, but we don’t really want to take responsibility for their actions. These questions are extremely important, and the ethical side of AI should come first in every conversation about future AI development.
The more we discuss these issues and the more people we include in the conversation, the lower the risk that new products and technologies will be launched with serious ethical problems.
Technology is amazing: it helps us do so many things and makes our lives more comfortable. AI can help keep us safe, make our daily routines run smoothly, and lighten our workload, but it can also lead to new problems. We need to be mindful of how we continue developing these technologies and of how we best use them.