Despite my university education in International Relations and in Cyber Defense and Cybersecurity, the example of war is not the one that strikes me most. What keeps me awake at night is the case of autonomous vehicles, and to explore it we are going to play with some philosophical questions like those often posed by Michael Sandel, a Harvard professor[67].
Suppose you are the driver of a trolley hurtling down the tracks at 60 miles per hour, and at the end of the line you see five workers on the rails. You try to stop the trolley, but you can’t: the brakes don’t work, and the workers can’t hear you. You feel desperate, because you know that if you hit the five workers, they will die. Then you realize that to the right there is an alternate track, and at the end of that track there is a single worker. Your steering wheel still works, so if you want, you can divert the trolley onto the alternate track, killing one worker but sparing the five.
Now our game begins. What would you do? Would you continue on the predefined path, killing the five workers, or would you change tracks and choose to kill only one? Whatever decision you make, people will die. There are no other options or possible scenarios in this exercise.
After putting this question to several people from different backgrounds, the vast majority, almost all of them, declare that they would turn the wheel so that only one worker dies. In my opinion, the logic of this answer is sound: why kill five when you can kill only one, if a fatality is inevitable?
Here the principle of utilitarianism has just prevailed, a theory founded in the late 18th century by Jeremy Bentham. It holds that the best action is always the one that produces the greatest utility for the largest number of individuals involved. It makes sense, doesn’t it?
Now, let’s analyze the next assignment from Professor Sandel:
Let’s imagine a new case. Now you are not the trolley driver. You are a spectator standing on a bridge overlooking the tracks; below, the trolley is coming, and further ahead there are five workers. You know the brakes don’t work, and the trolley is about to hit the five workers and kill them. I repeat, you are not the driver. You feel desperate: five people are about to die. Suddenly, you notice that next to you, leaning over the bridge’s railing, there is one more person. You could push them. This person would fall from the bridge onto the tracks and block the trolley’s path. Needless to say, they would die on the spot, but the five workers would be saved.
What would you do now? If you want, take a minute to think about it and answer yourself before continuing to read.
Just a moment ago, we accepted the principle of the greatest utility for the majority of people. If we followed this theory to the letter, we would have to push the person onto the tracks to save the five workers. And yet, in my experience, very few people would make that decision.
This time, there are different factors compared to the first case. For starters, we are not directly involved in the event; we can choose whether or not to take part in it. But that’s not all. In the first example, there were two clearly identified groups of people, and one of them was going to die with absolute certainty. In our new scenario, a new actor enters, someone unrelated to what is happening, a person who might simply be enjoying the scenery, waiting for someone, or taking in the fresh air. This person was not destined to participate in the series of events about to unfold, but we, with our action, can alter their life, or rather end it, and save the other five people. By becoming killers ourselves, of course. Small detail.
One could also argue that in the first case the lone worker was, after all, standing on the tracks, as if that justified the choice, whereas in this case the person was merely standing on a bridge. But their physical positions do not alter the fact: neither of them chose to sacrifice themselves; someone else made the decision for them.
If you already feel conflicted by your own positions, wait, because the exercise continues:
Let’s forget this case for a moment and imagine something different. This time, you are a doctor in an emergency room, and five patients arrive from a terrible trolley accident: four of them have moderate injuries and one has severe injuries. You must choose: spend all day caring for the severely injured victim, during which time the other four will die for lack of attention, or care for the four and heal them, during which time the severely injured person will die. Now that you are the doctor, how many of you would choose to save the four patients? And how many would save the severely injured one instead? Very few people, I assume for the same reason: one life versus four.
Now consider another similar case. This time you are a transplant surgeon, and you have five patients in desperate need of an organ transplant to survive. It turns out a trolley hit five workers! Can you believe it? Anyway, one needs a heart, one a lung, one a kidney, one a liver, and the fifth a pancreas. You have no organ donors available. You are about to watch them die when you remember that in the adjacent room there is a healthy man who came to the hospital for a routine check-up. He is taking a nap, and you, hypothetically speaking, could quietly go in and remove his five organs. That person would die, but you could save the other five. Would anyone do it? Would you do it? Of course not.
We have just faced moral situations that quickly exposed our internal contradictions. In the first cases, we considered it morally correct to sacrifice one life to save several, a decision that arises from consequentialist moral reasoning, which, as its name indicates, locates morality in the consequences of a given act. Then, by slightly changing the variables, our way of reasoning also changed, and we leaned towards categorical moral reasoning, which starts from certain absolute moral values, such as not killing, and from the rights and duties they imply, regardless of the consequences, even if that means letting five people die.
As an exercise, it might have been fun to face those moral dilemmas in the privacy of our minds, without being judged by others, and that is partly the role of philosophy: to disturb us, to make us uncomfortable with things we already know or take for granted, all in order to expand our understanding of ourselves and of the world around us. It is true that these questions have been debated for a long time, but the very fact that they remain on the agenda seems to suggest, as Professor Sandel says, that although they may be impossible to settle once and for all, they are also unavoidable. The reason we cannot set them aside is that we frequently face similar questions, and each of us gives antagonistic answers depending on who we are and the circumstances of the moment. That is why we are compelled to carry out this moral reflection, even if we encounter positions we had not imagined. Skepticism, on the other hand, is not an option, since a world dominated by AI will not be exempt from political and legal controversies involving philosophical questions similar to the ones we have just posed.
A question that comes up again and again regarding autonomous vehicles is what to do in a critical situation, whether caused by a misreading of the available information, a device failure, a pedestrian stepping out where they shouldn’t, or anything else. The question is: what should the vehicle do? Protect the pedestrian, the weakest link in this equation, or save the lives of its passengers?
This question can quickly take us back to the previous cases and lead us to weigh how many people are in the car against how many pedestrians could be injured by our vehicle, but I would like to dig a bit deeper. Perhaps such a question should be resolved by the population as a whole and not by a group of easily influenced bureaucrats. But that would immediately raise another question: in the hypothetical case that life presents us with one of the situations we have just described, would we really act as we think we would? Although most studies indicate that, in the first example, most people would choose to save the five lives, the only real-life experiment conducted on this showed surprising results.
Yes, you read that right; someone conducted this experiment. Michael Stevens, a well-known science communicator on platforms like YouTube, analyzes different aspects of human behavior and the brain. In one of his episodes, he replicated the trolley experiment with six workers split across two tracks. Before you get scared, let me clarify that no one died. It was an immersive simulation with people who didn’t know they were part of a social experiment. Participants were invited to a railway control center, where they were taught how to switch the tracks; then, on some pretext, they were briefly left alone in the control room, watching camera feeds of the different tracks, which were in fact pre-recorded videos, showing a train approaching and six workers spread across the tracks, five on one side and one on the other. If you want to see the experiment, it is available, with Spanish subtitles, in the reference below:
The Greater Good – Mind Field[68]
You can test your decisions on The Moral Machine, a platform that collects human opinions on the decisions that Artificial Intelligence may have to make in the future, such as in the case of autonomous vehicles.
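To make the tension concrete, consider how bluntly simple it would be to write down the purely utilitarian rule we applied in the first trolley case. The following sketch is purely illustrative and hypothetical: the scenario, names, and numbers are my own invention, and nothing here reflects how any real autonomous vehicle is actually programmed.

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    expected_fatalities: int  # hypothetical estimate supplied by the scenario

def utilitarian_choice(actions):
    # Bentham reduced to arithmetic: pick whatever minimizes expected deaths.
    return min(actions, key=lambda a: a.expected_fatalities)

# An invented dilemma, encoded bluntly:
options = [
    Action("swerve: sacrifice the two passengers", 2),
    Action("stay the course: hit the five pedestrians", 5),
]

print(utilitarian_choice(options).description)
# -> swerve: sacrifice the two passengers

The code is trivial; deciding whether counting lives is the right rule at all, and who gets to write that rule into a vehicle, is precisely the political and moral question that platforms like The Moral Machine put to the public.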
[67] Harvard University. (2009). Justice: What’s The Right Thing To Do? Episode 01 “THE MORAL SIDE OF MURDER” [Video]. Retrieved May 10, 2021, from https://www.youtube.com/watch?v=kBdfcR-8hEY.
[68] YouTube Originals. (2017). Mind Field S2 – The Greater Good (Episode 1) [Video]. Retrieved July 7, 2021, from https://www.youtube.com/watch?v=1sl5KJ69qiA.