Machine Intelligence and the Autonomous Vehicle ‘Trolley Problem’

Alfie Dennen
Jul 5, 2016 · 9 min read


On the heels of the first human fatality in an autonomous* vehicle, I’ve been thinking about how the automotive industry must address the framing of culpability and legal protection for both passengers and manufacturers in the event of accidents, and our individual ethical responsibilities as car owners. I hope this essay adds to the debate by looking at our own ability to take part in autonomous decision-making on an individual basis, simplifying the challenges for designers and lawmakers.

Tesla have done more to set precedent and inform lawmaking than any other manufacturer, not least because Tesla vehicles are constantly communicating with Tesla servers, providing a second-by-second record of all events. This first fatality, however, means that finding a way to legislate and enshrine culpability into law for crashes involving autonomous vehicles should be top of the agenda, and that means, amongst other things, finding a practical solution to the trolley problem.

The trolley problem is a thought experiment in ethics often used to frame this debate. Here’s a quick summary:

“There is a runaway trolley barreling down the railway tracks. Ahead, on the tracks, there are five people tied up and unable to move. The trolley is headed straight for them. You are standing some distance off in the train yard, next to a lever. If you pull this lever, the trolley will switch to a different set of tracks. However, you notice that there is one person on the side track. You have two options: (1) Do nothing, and the trolley kills the five people on the main track. (2) Pull the lever, diverting the trolley onto the side track where it will kill one person. Which is the correct choice?”

There are a great many articles on the topic as it relates to autonomous vehicles, but there is one aspect of the ethical thinking that bounds this debate, something so simple that I’m inclined to think I’m being either sentimental or overly simplistic: the human ability and tendency to compromise.

Simplifying to the Tunnel Problem

You are in an autonomous vehicle and it suddenly has an end-of-life decision to make: a child runs into the path of the vehicle. Either the vehicle swerves to avoid the child but kills the vehicle’s occupant(s), or it goes straight ahead and kills the child but saves the occupant(s). There is an inference in the way the problem is stated: that we should imbue Machine Intelligence (MI)** with a decision-making ability that takes into account an abstract ‘value’ of the driver’s life versus that of the child’s. That said, when a poll by the Open Roboethics initiative asked drivers what they believed should happen in the situation above, the following results emerged:

  • 64% of participants said the car should continue straight and kill the child
  • 36% said it should swerve and kill the passenger.

When asked who should make the decision:

  • only 12% felt the designer/manufacturer should make it
  • 44% felt the passenger should make it, and
  • 33% thought it should be left to lawmakers
[Chart courtesy of the Open Roboethics initiative]

So whilst there is a strong drive towards decision-making based on self-preservation, the majority of those polled (56%) devolve responsibility for that decision to the designer/manufacturer/other or to lawmakers rather than to the occupant, indicating that a level of paternalism is desired when existential decisions need to be made.

Considering the notion of ‘value’, the inference is that it can be calculated. It implies that if we want MI to make end-of-life decisions for us, those decisions should be based on legislation regarding what is best for the individual and society at large (as indicated by the desire for paternalism noted above). Value in this context becomes a weighing not only of ethical considerations, but also of economic factors.

Do we really want a class of intelligences (and the designers/lawmakers determining their rulesets) governing end-of-life decisions to use encoded ethics (reliant on a given culture’s mores) and economic factors to make decisions as profound as the death of a child? Or your own? Arguably, we do this already: we give our elected representatives the ability to make existential decisions on our behalf for the lives of those in our healthcare systems. In the UK’s NHS, those decisions are very often determined by the cost of the treatments available. If we imbue our MIs with our culture’s ethical mores and give them agency to make decisions that lead to a loss of life, is it really so different?

A different approach to the problem

The reason someone might jump into a river to save another human being at great personal risk has nothing to do with a rational decision-making process; it is an evolved trait in primates, rendered from kinship and dispersal, a drive so deeply ingrained that often people cannot say why they performed an act of altruism, just that they had to. And this is the heart of the tunnel problem above: if we are able to devolve to MIs decision-making which may lead to the death of ourselves or another, then imbuing those intelligences with our own innate and evolved responses, even if that means an edge case of our own death or injury, is a logical answer. It’s also a very human answer. To imagine otherwise is to accept a world where an MI is authorised to determine the value of a child’s life, in part defined by an estimate of their future contributions to their country’s GDP, or their level of insurance; a precedent I imagine few of us would be entirely comfortable with.

So on the one hand we see evidence in our behaviour of altruism as well as self-preservation, and on the other the desire to devolve existential decisions to our elected officials. The difference here is immediacy. We may try to be rational about hypothetical situations, but when confronted with the real situation, we react instinctively.

Compromise as Fuzzy Logic

In our tunnel problem a driver would arguably respond instinctively, swerving out of the way and killing themselves. The plethora of sensors, and the control over a driverless car’s actuators and components, at an MI’s disposal is, however, akin to a situation where a driver had comparable knowledge of the implications of a given choice and time to act thoughtfully. Assuming a human driver with the same level of control over the vehicle as an MI, and accepting our innate drives towards both altruism and self-preservation, what might you or I do? Arguably we would react instinctively and in the moment, and a human’s instinct is first to attempt to compromise as best we can with whatever control over the situation we have. This may appear to be a kind of Kobayashi Maru sidestepping of the intractability of the tunnel problem, but the choice between one’s own death and a bystander’s is in itself a very personal and deep ethical compromise.

Compromise is a fuzzy sort of thing; it’s subjective, since everyone involved has different ideal outcomes, from marginal gains through to the existential. Our ability to compromise is what makes society possible; it’s what makes great acts of altruism possible; arguably it is the driving force of humanity. In an imperfect system where the above tunnel problem is a possibility, it seems logical, and ‘human’, to have the choice to imbue our MIs with the same drive, since if we decide on a utilitarian approach alone we set the irreversible precedent of a life’s value calculated in economic terms. What would Kant do?
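To make this concrete, here is a minimal sketch of what ‘compromise as fuzzy logic’ might look like, in Python. Everything in it is an assumption for illustration: the candidate manoeuvres, the risk estimates, the altruism weight, and the penalty for concentrating risk on one party are invented stand-ins, not any manufacturer’s real control logic.

```python
# A minimal sketch of 'compromise as fuzzy logic'. All names, numbers,
# and manoeuvres are hypothetical stand-ins for illustration only.
from dataclasses import dataclass


@dataclass
class Manoeuvre:
    name: str
    occupant_risk: float   # estimated probability of serious harm, 0.0-1.0
    bystander_risk: float  # estimated probability of serious harm, 0.0-1.0


def compromise_score(m: Manoeuvre, altruism: float = 0.5) -> float:
    """Blend the two risks instead of choosing whom to sacrifice.

    altruism=0.0 is pure occupant preservation, 1.0 is pure bystander
    preservation, 0.5 an even compromise. Lower scores are better.
    """
    blended = (1 - altruism) * m.occupant_risk + altruism * m.bystander_risk
    # Penalise all-or-nothing outcomes: a manoeuvre that concentrates
    # all the risk on one party scores worse than one that spreads it.
    concentration = abs(m.occupant_risk - m.bystander_risk)
    return blended + 0.25 * concentration


options = [
    Manoeuvre("continue straight", occupant_risk=0.05, bystander_risk=0.95),
    Manoeuvre("swerve into wall", occupant_risk=0.90, bystander_risk=0.02),
    Manoeuvre("brake hard, steer partially", occupant_risk=0.35, bystander_risk=0.40),
]

best = min(options, key=compromise_score)
print(best.name)  # -> "brake hard, steer partially"
```

The shape of the decision is the point: rather than a binary swerve/don’t-swerve rule, the vehicle scores every manoeuvre it can still perform and prefers the one that spreads risk, which is roughly what a human trying to compromise in the moment does.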

All of that said, I’ve been trying to find a quote from the PR person at Wendy’s Burgers, who opened and abruptly closed in the UK in the late ’90s, but I can’t, so I’ll paraphrase their explanation of why Wendy’s closed after less than a year of operations:

“We asked our potential customers what they wanted and they overwhelmingly said they wanted salad bars and healthy options. Turned out they lied.”

This presents a chilling problem for the notion that, given autonomy and agency, people will make moral choices: what people say they want rationally is often in conflict with their choices and desires in the moment.

The Design Challenge and a Possible Solution

The biggest part of the design challenge seems to be in giving the driver agency over the outcome. However, looking at the poll above, a majority of drivers would prefer the designers, lawmakers, or others to make the decision for them. Jason Millar astutely addresses many of the points raised here through the lens of design methodologies which take the driver’s desires into account at key moments in the design process. In following through Millar’s notion that there should be an interplay between driver and designer/lawmaker, a clear solution surfaces: lawmakers and designers provide options.

“If we consider the owner the morally appropriate decisionmaker in this driving context, and it seems we can since her life is directly at risk and she is not at fault (has not broken laws leading to the situation, has not erred, etc.), then hers is the autonomous decision that ought to be represented by the proxy, not [the designer/manufacturer]. From a design and use perspective this analysis suggests that [autonomous car] owners ought to be offered a choice of settings that would delegate the appropriate decisions [on] how to respond in situations like the one described.” Jason Millar

What if, when you buy an automated vehicle, part of the process were to choose what setting your car is on: occupant preservation, fuzzy compromise, or bystander preservation, with the ability to scale between them (alongside an option to go with the governing body’s recommended setting)?
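As a rough illustration of how that purchase-time choice could be expressed, here is a hypothetical configuration sketch in Python. The preset names, the single ‘altruism’ weight (which the compromise scoring sketched earlier could consume directly), and the regulator’s recommended value are all invented for this example; no real manufacturer or governing-body interface is implied.

```python
# A hypothetical purchase-time ethics setting. All names and the
# regulator's default value are invented for illustration.
from typing import Optional

# The three presets, expressed as a single altruism weight:
# 0.0 = occupant preservation, 1.0 = bystander preservation.
PRESETS = {
    "occupant_preservation": 0.0,
    "fuzzy_compromise": 0.5,
    "bystander_preservation": 1.0,
}

REGULATOR_RECOMMENDED = 0.5  # placeholder: set by the governing body


def configure_vehicle(setting: Optional[float] = None) -> float:
    """Return the altruism weight (0.0-1.0) the vehicle will use.

    Owners may pick a preset, scale anywhere between the presets,
    or pass None to accept the governing body's recommended setting.
    """
    if setting is None:
        return REGULATOR_RECOMMENDED
    return min(1.0, max(0.0, setting))


# An owner scaling between compromise and bystander preservation:
altruism = configure_vehicle(0.7)
# An owner deferring to the governing body's recommendation:
default_altruism = configure_vehicle()
```

Because the whole choice collapses to one continuous weight, ‘scaling between’ the presets falls out for free, and a regulator need only legislate the default and perhaps the permissible range.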

I would really like to see the answers in a further survey of drivers asked how they would set their vehicle’s behaviour (hint: do the survey). As Millar points out, the critical thing this solution delivers is that it removes the thorny problem of paternalism as a design pattern for autonomous vehicles, whilst allowing a basis for legislation which informs design based on a level of individual autonomy. This is an elegant approach to a solution, as it reflects the deeply personal moral decisions each one of us might make differently in end-of-life situations.

Some final thoughts

Whilst this may be a possible solution for an individual purchasing an automated vehicle, what about delivery drivers with a company car, or long-haul truckers using their employer’s vehicles? In a world with vastly fewer crashes, would a long-haul trucker accept an altruism clause if given a higher salary? Would a highly paid executive’s car default to self-preservation as a matter of course?

I apologise that this essay throws up more questions than it answers, but I hope you agree that’s a good thing, since it’s only by asking the right questions that we ever get to solutions at all. In a world where automated vehicles have reduced road fatalities by up to 90%, we are talking about edge cases here, but they are absolutely critical edge cases to solve now, or we’re all going to be holding onto our steering wheels and causing human-error accidents for the foreseeable future***.

Oh and please do take the short survey :)

* “It is believed that this is the first fatality in any self-driving car in autonomous mode. That would be Level 2 self-driving, defined as a vehicle using multiple sensors and driver assists — typically adaptive cruise control (ACC), lane centering assist (keeps the car centered), and blind spot detection (with return to lane). That’s on a scale of 0 (no features) to 4 (self-driving, no driver required, and there may not even be a steering wheel).” — Extremetech.com

** I’m going to use the term Machine Intelligence (MI) in this article rather than A.I., as A.I. is such a loaded term. I’m using MI to define something separate from how we define and use our own intelligence; ours is an evolved set of responses and methodologies, whilst MI acts according to how we instruct it.

***PEBSWAC (problem exists between steering wheel and chair).
