The idea in a nutshell: There is a now familiar idea: self-driving cars are going to crash. In some circumstances, they will have to ‘decide’ whether to hit person/group ‘A’ or person/group ‘B’. Despite the apparent ethical complexity of this situation, there does seem to be a readily available answer on whom the car should hit. Quality-Adjusted Life Years (QALYs) is the metric used by the NHS (the UK’s National Health Service) to determine whether the cost of a drug justifies providing it to sustain patients. Crashing driver-less cars could use existing technology and the same formula to determine into whom the vehicle should drive.

Getting clear on the question

Automated (driver-less) cars will have to crash at some point. At that point, they may have to decide whether to protect the occupant of the car or someone outside it. They may even have to decide between saving the driver and saving a group of people outside the car – for example, on a pavement.

This idea has been circulating for a couple of years now and appears more and more frequently. It’s even reached as far as the Huffington Post. Unfortunately, such coverage offers very little insight into the decision which should be made.

The Trolley Problem / Should you kill the fat man?

In structure, the ‘Self Driving Car Kills People’ problem is a philosophical argument which has been around for some time. I thought it was interesting in itself when I first read it. A quick Google of ‘The Trolley Problem’ or, from a more confrontational perspective, “Should you kill the fat man?” will give you the details you need to confront the ethics involved.

In essence, in either version, you are given a series of imaginary scenarios where a trade-off between the lives of different groups of people is made.

Most of the economists I know would answer these questions in a largely utilitarian way, prioritizing more life over less. But not everyone does – which is why it’s an interesting question.

The different answers people give to The Trolley Problem

The amazing thing to me, when considering this, is the range of responses people give to these scenarios. I thought everyone would say the same thing. Maybe it’s this diversity of ethical views which has prompted such public consideration of the self-crashing car. I actually don’t even see the difference between some of the questions posed in the Trolley Problem but, obviously, others do. For example:

Question 1 was: A train driver can hit a button to exchange the deaths of 5 for the death of 1. Should he do it?

Question 2 was: You can push a (fat) man onto a train line to save the lives of 5 people. Should you?

These are the same question to me. I guess the difference is how close the individual making the decision and taking the action is to the effect. The result is the same in either circumstance.

From what I read, I think it’s fair to say that most doctors would answer this question differently from me.

The Hippocratic Oath does not actually contain the phrase ‘first do no harm’ as guidance to doctors. But it appears, from the Wikipedia entry I linked to there, that most doctors would probably agree with the sentiment (of first doing no harm) and would try to avoid being involved in a person’s deliberate killing. There are of course exceptions. It is far from a morally clear matter.

The brutal maths of deaths and the driver-less car

The US Department of Transportation says 94 percent of car crashes come from human error. It is also said that driver-less technology will ‘drastically lower’ if not eliminate these deaths. The UK government believes driver-less cars will save 25,000 lives a year.

Given that maths, my view is that we are compelled to implement driver-less cars as quickly as possible. If, as a result, we face moral or ethical challenges, then we should confront them and adapt the ‘thinking’ of the cars so they crash in a way we approve of. Not acting costs lives in significant numbers.

UK life calculations by the NHS

In this context, it might be useful to think of existing questions which explore similar territory – the economics of saving lives:

Drugs are used or not used depending on ‘cost-effectiveness thresholds’. This might sound different from the Trolley game but it isn’t. The most fundamental idea in economics is that, in a world of unlimited demands and limited resources, rational decisions must be made to maximize what we consider important. In the NHS, there is a budget and it must be used to maximize the number of ‘quality life years’ the UK’s citizens benefit from. In the Trolley game we are trying to do the same thing.

This news story explains how the NHS prioritizes the drugs it will provide and to whom they are provided. It points out that 22% of cancer drugs were not adopted by the NHS – despite their ability to increase the quality of patients’ lives. The metric optimized by NICE (the body deciding these things) is ‘Quality-Adjusted Life Years’ or QALYs. Their working benchmark is a threshold of 30,000 GBP per QALY.
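As a rough sketch of how that threshold logic works: only the 30,000 GBP per QALY figure comes from the discussion above; the drug costs and QALY gains below are invented purely for illustration.

```python
# Illustrative sketch of a cost-per-QALY threshold decision.
# Only the 30,000 GBP threshold comes from the article; the example
# drug figures are made up.

THRESHOLD_GBP_PER_QALY = 30_000

def cost_per_qaly(cost_gbp: float, qalys_gained: float) -> float:
    """Cost-effectiveness ratio: pounds spent per quality-adjusted life year."""
    return cost_gbp / qalys_gained

def approve(cost_gbp: float, qalys_gained: float) -> bool:
    """Fund a drug only if it buys a QALY at or below the threshold."""
    return cost_per_qaly(cost_gbp, qalys_gained) <= THRESHOLD_GBP_PER_QALY

# A hypothetical drug costing 24,000 GBP that adds 1.2 QALYs
# works out at 20,000 GBP per QALY, so it would be funded:
print(approve(24_000, 1.2))  # True
# One costing 50,000 GBP for a single QALY would not:
print(approve(50_000, 1.0))  # False
```

The point of the sketch is simply that the decision reduces to one division and one comparison – the hard part in reality is estimating the QALYs gained, not applying the threshold.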

Why couldn’t we use QALYs to decide whom the car crashes into? Where the car might kill different people, or different numbers of people, depending on which action it takes, face recognition and matching could be used to estimate the age and existing health of those involved, and a trade-off conducted. The decision preserving the highest number of QALYs should be taken.
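That trade-off can be sketched in code. Everything here is an assumption for illustration: the flat 80-year life expectancy, the quality-of-life scores, and the premise that recognition systems could supply such data at all – a real system would need far richer actuarial inputs.

```python
# Hedged sketch of the proposed QALY trade-off: among the crash
# options available, pick the one that destroys the fewest expected
# quality-adjusted life years. All inputs are invented.

from dataclasses import dataclass

LIFE_EXPECTANCY = 80  # crude illustrative assumption, in years

@dataclass
class Person:
    age: int
    quality_of_life: float  # 0.0 (worst health) to 1.0 (full health)

    def remaining_qalys(self) -> float:
        return max(LIFE_EXPECTANCY - self.age, 0) * self.quality_of_life

def qalys_lost(group: list) -> float:
    """Total expected QALYs destroyed if this group is hit."""
    return sum(p.remaining_qalys() for p in group)

def choose_crash(scenarios: dict) -> str:
    """Pick the scenario name with the smallest total QALY loss."""
    return min(scenarios, key=lambda name: qalys_lost(scenarios[name]))

scenarios = {
    "swerve left": [Person(age=75, quality_of_life=0.6)],
    "swerve right": [Person(age=30, quality_of_life=0.9),
                     Person(age=8, quality_of_life=1.0)],
}
# Left costs (80-75)*0.6 = 3 QALYs; right costs 45 + 72 = 117.
print(choose_crash(scenarios))  # swerve left
```

Whether society would accept a car making this comparison is, of course, exactly the ethical question the Trolley Problem raises – the code only shows that the arithmetic itself is trivial.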

Cost benefit of crashing driver-less cars

My suggestion that we begin before ironing out the finer details of all this is utilitarian in itself.

It is also worth considering the risks that driver-less cars introduce. The crash statistics which suggest lives will be saved by driver-less cars avoiding accidents do not consider the very real threat of driver-less cars being hacked (although in the most famous example, the hijacked car was driven by a human).

Like all technology, driver-less cars introduce benefits and risks. So long as the benefits outweigh the risks, we should do it. In my view, anyway. But then, I am not a doctor.