
Programmed to kill! Autonomous vehicles and decision making.

Imagine letting your car make the decision to kill you. With the increasing popularity and improvement of autonomous vehicle (AV) technology, driverless cars will be publicly available before we know it. But how do they work and how comfortable can we be letting a vehicle make decisions for us?

There is a major misconception that AVs are pre-programmed with a vast set of hand-written “if-then-else” rules covering every situation a vehicle may encounter, including situations akin to the trolley problem. For example, if a child and a senior citizen suddenly appear on the road, the vehicle would hit the one with the lower chance of injury, or perhaps spare the one with more life left to live.
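To make that misconception concrete, here is a deliberately contrived Python sketch of what such hand-written rules might look like. No real AV is programmed this way, and every field name and value below is invented purely for illustration.

```python
# Hypothetical sketch of the misconception only: hand-coded "trolley problem"
# rules. Real AV software is not written this way; all field names are invented.

def choose_unavoidable_target(obstacles):
    """Pick which obstacle to hit using naive, hand-written rules."""
    if not obstacles:
        return None
    # Rule 1: prefer the obstacle with the lowest estimated injury risk.
    lowest = min(o["injury_risk"] for o in obstacles)
    candidates = [o for o in obstacles if o["injury_risk"] == lowest]
    # Rule 2 (tie-breaker): spare whoever has more life left to live,
    # i.e. hit the older person.
    return max(candidates, key=lambda o: o["age"])

print(choose_unavoidable_target([
    {"kind": "child", "age": 8, "injury_risk": 0.9},
    {"kind": "senior citizen", "age": 75, "injury_risk": 0.9},
]))
# -> {'kind': 'senior citizen', 'age': 75, 'injury_risk': 0.9}
```

The point of the sketch is how implausible it is: nobody could enumerate rules like these for every situation a car might meet, which is why AVs are not built this way.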

However, AV technology is not programmed around the ethics of driving. Instead, AV systems rely heavily on artificial intelligence and machine learning to perceive their surroundings and make informed decisions, much like a human driver does.

The most common machine learning algorithms used in AVs are based on “object tracking.” Their purpose is to identify and distinguish between objects as accurately as possible.

A core problem these algorithms face is classifying an object, i.e. deciding whether it is another vehicle, a pedestrian, a bicycle, or an animal. The answer is a machine learning or pattern-recognition algorithm that is trained on many images containing such objects.
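As a rough illustration of that train-then-classify pattern, the Python sketch below uses scikit-learn with synthetic feature vectors standing in for image data. Real AV perception systems use deep neural networks over camera and lidar input; the class names and all the numbers here are assumptions chosen only to keep the example self-contained.

```python
# Toy stand-in for the object-classification step (assumes scikit-learn).
# Synthetic feature vectors play the role of image features.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

CLASSES = ["vehicle", "pedestrian", "bicycle", "animal"]

# Fake "image features": 2,000 labelled examples, 20 numeric features, 4 classes.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=10,
                           n_classes=len(CLASSES), random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)            # "learning" from labelled examples
print("accuracy on unseen examples:", model.score(X_test, y_test))
print("first prediction:", CLASSES[model.predict(X_test[:1])[0]])
```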

How a self-driving car might classify objects to make decisions. (Source: Iyad Rahwan, MIT)

Such an algorithm inspects the images and guesses what kind of object each one contains. Naturally, most of its initial guesses are wrong, so the algorithm modifies its internal parameters, or parts of its structure, based on those mistakes and tries again.

This process repeats over and over, discarding changes that reduce the algorithm’s accuracy and keeping changes that improve it, until the algorithm classifies the training images correctly. At that point the algorithm is said to have “learned”: when it is shown new images, it classifies them with high accuracy. The algorithm can then evaluate the vehicle’s surroundings and make a calculated choice about how to proceed.
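That keep-what-helps loop can be sketched very literally. In the minimal example below, a toy linear classifier on synthetic data repeatedly perturbs its internal parameters, keeps any change that improves its accuracy on the labelled examples, and discards the rest. Production systems train deep networks with gradient descent instead, but the guess-check-adjust idea is the same; everything here is an illustration, not real AV code.

```python
# Minimal trial-and-error learning loop on synthetic data (NumPy only).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic labelled data: 200 examples, 5 numeric features, binary label.
true_w = rng.normal(size=5)
X = rng.normal(size=(200, 5))
y = (X @ true_w > 0).astype(int)

def accuracy(w):
    """Fraction of training examples the weights w classify correctly."""
    return np.mean((X @ w > 0).astype(int) == y)

w = rng.normal(size=5)                            # initial guesses are mostly wrong
best = accuracy(w)
for step in range(2000):
    candidate = w + rng.normal(scale=0.1, size=5)  # tweak internal parameters
    score = accuracy(candidate)
    if score >= best:                              # keep changes that improve accuracy
        w, best = candidate, score
    # (changes that hurt accuracy are simply discarded)
    if best == 1.0:                                # stop once every example is right
        break

print(f"training accuracy after {step + 1} tweaks: {best:.2f}")
```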

Now, back to the question at hand: how comfortable can we be letting a vehicle make decisions regarding death? I’m not sure how comfortable I would be letting a computer make a choice for me where the consequence could be death. On the other hand, I’m not sure how confident I would be in my own ability to make such a decision. The video below discusses the social dilemma of self-driving cars.

(Source: Science Magazine, YouTube)

When you strip away the bias and focus purely on the logic, i.e. the decision that will lead to the greatest good, an algorithm may be the best decision maker.

After all, evidence suggests that 90% of vehicle collisions are the result of human error. If the human element were removed from driving, motor vehicle accidents would decline significantly, making roadways much safer.

We’re still a long way from allowing fully autonomous vehicles to take over our roadways, but it is worth thinking about how these vehicles might make decisions in situations where ethics and morality would normally play a huge part, and how comfortable we might be letting an algorithm decide.

By: Ami Patel