Who an Automated Car Should Kill is the Wrong Question


The Trolley Problem, a thought experiment in ethics presented in 1967 by Philippa Foot, is a way of testing whether our judgements are utilitarian. The problem proposes that a trolley is moving along a track toward five people who are unaware they are in danger. You are at a switch that could shift the trolley onto another track, saving the five people but killing a single individual who happens to be on that track. The utilitarian would pull the switch, saving the five at the expense of the one.

This in and of itself is a fine thought experiment. However, when in 2016 we are asked to consider such a notion in relation to the algorithms used to control automated cars, we are asking the wrong question. In fact, this highlights a complete lack of understanding of the type of technology we are actually dealing with. We should not be asking whom an automated car will kill, but rather how such an event could even arise, or be allowed to arise.

If we are really going to consider automated cars, then we first need to understand them and how they operate; we need at least a basic understanding of the systems controlling the vehicles.

These automated vehicles need three main systems to operate.

The first system is GPS (Global Positioning System), which tells the vehicle where it is and gives it the information it needs to determine its route from point A to point B. Like the run-of-the-mill navigation systems in use in cars today, GPS makes the car aware of traffic conditions and speed limits along the route.

The second system is a Differential Global Positioning System (DGPS). This system is made up of lasers, cameras, and radar, which give the car the ability to recognise real-time, changing situations that a standard GPS is not aware of. The DGPS recognises construction, pedestrians, new speed limits, other vehicles, and obstacles.

Google driverless car by Steve Jurvetson, CC BY 2.0
The LIDAR system of Google’s self-driving car is mounted on the roof, rotates 360 degrees, contains a Velodyne 64-beam laser, is accurate up to 100 meters, and can take up to 1.3 million readings per second. At 72 km/h (about 45 miles per hour) a car travels 20 meters every second.
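
To put those figures in perspective, here is a quick back-of-the-envelope calculation. The numbers are the ones quoted above; the code itself is just unit conversion:

```python
# Back-of-the-envelope figures for the roof-mounted LIDAR described above.
READINGS_PER_SECOND = 1_300_000  # up to 1.3 million range readings per second
SPEED_KMH = 72                   # example speed from the text

speed_ms = SPEED_KMH * 1000 / 3600           # 72 km/h = 20.0 m/s
readings_per_meter = READINGS_PER_SECOND / speed_ms

print(f"{SPEED_KMH} km/h is {speed_ms:.1f} m/s")
print(f"~{readings_per_meter:,.0f} LIDAR readings per metre of travel")
```

In other words, even at highway speed the sensor collects tens of thousands of range readings for every metre the car moves.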

Cameras are mounted around the car in pairs with a small separation, creating a parallax similar to our own eyes and allowing the cameras to perceive depth and track distance in real time. The cameras have a fifty-degree field of view and are accurate to 30 meters.
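
As a rough illustration of how a parallax pair yields depth, here is a minimal sketch using the standard pinhole stereo relation Z = f·B/d. The focal length and baseline are invented illustrative values, not Google’s published specifications:

```python
# Minimal sketch of stereo (parallax) depth estimation for an idealised,
# rectified pinhole camera pair. Focal length and baseline are assumed values.

FOCAL_LENGTH_PX = 1000.0   # focal length in pixels (illustrative assumption)
BASELINE_M = 0.3           # separation between the paired cameras (assumption)

def depth_from_disparity(disparity_px: float) -> float:
    """Depth Z = f * B / d for a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return FOCAL_LENGTH_PX * BASELINE_M / disparity_px

# A nearby object shifts a lot between the two images (large disparity),
# a distant one shifts very little (small disparity).
for d in (100.0, 30.0, 10.0):
    print(f"disparity {d:5.1f} px -> depth {depth_from_disparity(d):5.1f} m")
```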

Radar systems positioned around the car are accurate to 200 meters but have a narrow field of view. Sonar at the front and rear of the vehicle is accurate to 6 meters and likely assists with parking and with maintaining separation between vehicles at intersections and in parking lots.
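
Taken together, these ranges overlap so that most objects are covered by more than one sensor. A toy sketch using only the ranges quoted above (the coverage check itself is illustrative, not how the car actually works):

```python
# Illustrative summary of the sensor ranges quoted in the text.
SENSOR_RANGE_M = {
    "lidar": 100,   # 360-degree coverage
    "camera": 30,   # fifty-degree stereo pairs
    "radar": 200,   # long range, narrow field
    "sonar": 6,     # front and rear only
}

def sensors_covering(distance_m: float) -> list[str]:
    """Which sensors can, in principle, see an object at this distance?"""
    return [name for name, rng in SENSOR_RANGE_M.items() if distance_m <= rng]

for d in (5, 25, 80, 150):
    print(f"{d:>3} m: {', '.join(sensors_covering(d))}")
```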

The third system brings the previous two together and uses them to control the vehicle on the road. The Google car integrates all this data at a rate of 1 GB per second, making it far superior to even the most skilled driver. A human driver simply cannot correlate that volume of data every second, especially while busy playing with a phone.
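
As a very rough sketch of what “bringing the systems together” means, here is a toy decision rule that fuses route information (the speed limit) with perception (the nearest obstacle). Every name and number in it is a hypothetical illustration, not Google’s architecture:

```python
# Toy sketch of the third system: fusing navigation and perception data
# into a driving decision. All interfaces here are invented illustrations.
from dataclasses import dataclass

@dataclass
class WorldModel:
    speed_limit_kmh: float            # from the navigation (GPS) system
    nearest_obstacle_m: float | None  # from the perception sensors, if any

def plan_speed(world: WorldModel) -> float:
    """Pick a target speed: obey the limit, slow down near obstacles."""
    target = world.speed_limit_kmh
    if world.nearest_obstacle_m is not None and world.nearest_obstacle_m < 30:
        # Scale speed down linearly as the obstacle gets closer.
        target = min(target, world.speed_limit_kmh * world.nearest_obstacle_m / 30)
    return target

print(plan_speed(WorldModel(60, None)))   # clear road -> 60 km/h
print(plan_speed(WorldModel(60, 15.0)))   # obstacle at 15 m -> 30 km/h
```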

This brings us back to the automated car killing someone.

Now that we have at least a basic understanding of the systems used to navigate these cars we can look at the Trolley Problem and consider why we would even need to ask such a question of an automated car.

When you are talking about an automated car, there are very few times when it is going to be “surprised” by a change in road conditions. Short of an individual jumping out in front of it from behind a tree, or barreling through a blind intersection, I cannot really see how this could happen. Stopping distances play a big role in this question: a human-controlled car on a wet surface travelling at 60 km/h takes in the area of 54 meters to stop, so anything inside 54 meters is the danger area for a human-controlled car. An automated car under the same conditions can stop in about 29 meters, almost halving the danger area. Now consider all the scanning equipment on an automated vehicle, and it becomes increasingly difficult to put such a car in a situation where it would need to choose between killing more or fewer people. An automated vehicle drives to the conditions; it won’t take a blind corner in a city at 60 km/h, but more likely at 10 or 15 km/h, giving it ample time to stop if people happen to be on the road.
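
Those stopping figures can be roughly reconstructed with the standard reaction-plus-braking model, d = v·t_r + v²/(2μg). The reaction times and friction coefficients below are assumptions chosen to approximate the figures above, not measured values:

```python
# Rough reconstruction of the stopping distances quoted in the text.
# Parameter values are assumptions tuned to match the article's figures.
G = 9.81  # gravitational acceleration, m/s^2

def stopping_distance(speed_kmh: float, reaction_s: float, mu: float) -> float:
    """Reaction distance plus braking distance: v*t_r + v^2 / (2*mu*g)."""
    v = speed_kmh / 3.6  # convert km/h to m/s
    return v * reaction_s + v ** 2 / (2 * mu * G)

# Human driver: ~1.1 s to react, imperfect braking on a wet road (mu ~ 0.4).
print(f"human:     {stopping_distance(60, 1.1, 0.4):.0f} m")  # ~54 m
# Automated car: near-instant reaction, threshold braking (mu ~ 0.5).
print(f"automated: {stopping_distance(60, 0.1, 0.5):.0f} m")  # ~30 m
```

Most of the gap comes from the reaction term: the machine begins braking almost immediately, while the human is still moving at full speed for more than a second.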

If we want to say an individual or a group of people has wandered onto a motorway, then why would we expect an automated car to kill its passengers when a human-driven vehicle would likely fail to react in time to do anything of substance? Consider again the speed with which automated cars react: if we are going to say an automated car has only enough time to choose to spare the people on the road at the expense of the passengers in the car, then a human-driven car will certainly kill the people on the road.

What we actually find we are looking at is the impact of humans and non-automated vehicles on the automated vehicle. These are the unpredictable variables when it comes to automated driving, and this was highlighted on May 7th, 2016, when a Tesla failed to recognise a tractor trailer turning across the road and drove under the trailer, making contact at the windscreen and killing the passenger (driver) of the car.

Now this highlights the failure of the car’s systems to detect the truck, and the failure of the driver to pay attention to the road as stipulated by Tesla when using the Autopilot system. More importantly, and what we should all recognise, is that it shows us the real danger for automated cars comes from vehicles which are not automated. Had the tractor trailer been automated, it might have waited to cross the path of the Tesla until it had gone by, or it might have been able to communicate its position to the Tesla, which would have resulted in the Tesla braking to avoid the collision.
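
To make that last idea concrete, here is a toy sketch of a vehicle-to-vehicle broadcast: a crossing vehicle announces its intent, and the receiving car brakes while it still comfortably can. The message format is invented for illustration; real V2V protocols (such as DSRC) are far richer:

```python
# Toy illustration of vehicle-to-vehicle communication. The message format
# and the braking rule are invented for this sketch, not a real standard.
from dataclasses import dataclass

@dataclass
class V2VMessage:
    vehicle_id: str
    position_m: tuple[float, float]  # (x, y) in a shared local frame
    speed_ms: float
    crossing_lane: bool              # "I am about to cross your lane"

def should_brake(distance_to_crossing_m: float, msg: V2VMessage,
                 stopping_distance_m: float = 30.0,
                 margin_m: float = 10.0) -> bool:
    """Brake once an announced crossing is near our stopping envelope."""
    return msg.crossing_lane and (
        distance_to_crossing_m <= stopping_distance_m + margin_m
    )

truck = V2VMessage("tractor-trailer-1", (120.0, 0.0), 8.0, crossing_lane=True)
print(should_brake(35.0, truck))   # True: crossing announced within 40 m
print(should_brake(100.0, truck))  # False: still time, keep monitoring
```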

This aforementioned incident has resulted in the National Highway Traffic Safety Administration opening an investigation into the Autopilot feature. While slightly off topic here, I think it important to point out that the “Autopilot” feature of the Tesla is little more than a glorified cruise control; the “driver” of the vehicle, or human occupant, is still required to pay attention to the road and keep their hands on the steering wheel while Autopilot is in use. To restrict the use of this technology, or remove it from cars, would be foolish. If we took such an attitude towards all vehicle accidents, cars would in all probability be outlawed, simply because the vast majority of people are atrocious drivers with almost no sense of responsibility while behind the wheel.

It is worth noting the Association for Safe International Road Travel reports that in the United States alone 37,000 people die as a result of road accidents every year, and globally that number jumps to 1,300,000 people annually, or 3,287 deaths per day. To that number we can now add one death as a result of an automated car. One death which, to be sure, is still attributable to human error, because the system used in the automated car requires the attention of the human occupant; it is not a fully autonomous system.

Every time you see a car speeding, a driver arrested for failing a breath test, a red light run, inattention behind the wheel, a car accident, or drivers distracted by phones and other gadgets, you should instantly think automated driving.

Back to the utilitarian question of whom to kill. Do we want to create algorithms designed to choose who dies, or should we look to make roads safer by automating as many vehicles as possible? At the very least we could equip all cars on the road with the same sensor systems as automated vehicles so they can all communicate with each other. We should not be looking to choose who dies, but instead finding ways to create a transport system in which those decisions are redundant, because nobody needs to die.

Feature Image
Driver Free Car by BP63Vincent, CC BY-SA 3.0