Self-driving cars have created a real-life ‘Trolley Problem’

The dilemma with self-driving cars.

This article was originally published in The Washington Examiner.

This is an opinion piece by Professor Sandeep Gopalan, Pro Vice-Chancellor for Academic Innovation and Professor of Law, Deakin University. The author's views and opinions do not necessarily reflect those of Deakin Law School.

Think quickly: You are a bystander witnessing a runaway trolley that is careening toward five workers. You have just two choices: One, pull a switch diverting the trolley onto another track, where one worker would be killed, or two, do nothing and let the trolley hit and kill the five workers.

This thought experiment, the so-called "Trolley Problem," is an infamous moral dilemma popularized by the Massachusetts Institute of Technology moral philosopher Judith Jarvis Thomson. And it just got one step closer to reality with the death of Elaine Herzberg, the 49-year-old woman who is reportedly the first pedestrian killed by a self-driving car on a U.S. public road.

The arrival of self-driving cars transferred the problem from law and ethics classrooms to the real world. Specifically, ethicists have been concerned about software in a self-driving car having to choose between different outcomes that would all result in death or serious injury.

Currently, we do not know whether the Uber car in Tempe, Ariz., had to choose between hitting Herzberg and other bad options, such as swerving dangerously out of the way, endangering the driver, or striking other cars or pedestrians. Initial reports indicate that Herzberg suddenly entered the path of the car, which was traveling at about 40 mph. Police reports state that the car did not appear to slow down or stop. Why? Was there no time to process her sudden appearance in its path, or did the algorithm calculate that swerving away would injure the car’s passenger or other people on the road?

The car — a Volvo — would likely be designed to make an emergency stop in exactly these sorts of situations. Hence it is surprising that it did not slow down at all. The National Highway Traffic Safety Administration’s investigation ought to tell us more about what happened.

The police have stated that Uber was likely not at fault for the accident. The chief of police in Tempe, Sylvia Moir, is quoted as saying: “The driver said it was like a flash, the person walked out in front of them. … His first alert to the collision was the sound of the collision.” The police claim that the video shows “it’s very clear it would have been difficult to avoid this collision in any kind of mode based on how she came from the shadows right into the roadway.”

Setting aside the police version, it is worth asking whether existing laws help address this problem. There has already been criticism directed at Arizona’s light-touch approach to regulating self-driving cars. Gov. Doug Ducey, a Republican, adopted a highly industry-oriented approach to regulation in 2015. This was modified slightly in March 2018 by an executive order recognizing that the state had "become a hub for driverless car research and development with over 600 vehicles with automated driving systems [undergoing] testing on … public roads." The order continues the light-touch approach, expressing the bare minimum of common-sense mandates: Self-driving cars must follow all traffic laws, operators must submit a statement of compliance before testing without a human driver, and citations may be issued.

Clearly, the governor’s order offers no assistance in determining liability for this accident, nor any guidance to those writing algorithms for self-driving cars. Other states have adopted more detailed legal rules; 33 states introduced legislation in 2017 alone, many imposing rigorous permit, reporting, and data-retention requirements. For instance, Michigan Act No. 333, passed in 2016, specifies that "during the time that an automated driving system is in control of a vehicle in the participating fleet, a motor vehicle manufacturer shall assume liability for each incident in which the automated driving system is at fault." Notably, even this law provides no answer to the Trolley Problem, because it does not define fault.

Some state laws require a human to be present in the vehicle and able to assume control in an emergency. They place responsibility on the human to avert disaster. This is folly, because the same laws often allow the human to be texting or watching videos while the automated driving system is engaged. To assume that a person who is so distracted could react in time is far-fetched. And this accident shows that even a person who was not impaired or texting could not prevent the collision.

Who should be responsible for decisions made by autonomous vehicles? Several surveys of people responding to variations of the Trolley Problem have illustrated the difficulty of determining responsibility and fault. Most respondents have no difficulty with utilitarian decisions that kill one person to save five lives, except when that one life is a close relative's. But what if that one life is their own?

Clearly, much work needs to be done to inure people to autonomous vehicles and to the concept of liability for accidents that result from decisions made by algorithms. A poll by Reuters in January 2018 showed that two-thirds of Americans are uncomfortable with riding in self-driving cars and have questions about liability.

The many state experiments with self-driving car legislation make one thing clear: It must be left to the states to wrestle with these complex moral and ethical questions through the political and electoral process. Some states may decide that they cannot resolve the dilemma and leave liability for algorithms that make hard choices in the hands of manufacturers. Others might adopt utilitarian or Kantian approaches, or provide carve-outs. A one-size-fits-all federal law is not the answer.
