Milton Friedman’s car antivirus

 

When we talk about autonomous cars, we know these vehicles will not suddenly appear on our streets until the law permits it, so the question of whose life should prevail in the case of an imminent accident will have to be answered in the political sphere sooner rather than later. We also know that once the highest degree of autonomy is reached, street safety will increase, and traffic lights will become practically obsolete once manually driven cars are gone, for two simple reasons: first, pedestrians always have priority; second, cars will be able to communicate automatically with the vehicles around them to decide who goes first and to coordinate their distances and speeds.

 

Once again, we must recognize that we are human and, as such, imperfect. We get distracted easily; we can fall asleep or suffer a heart attack at the wheel; we can be thrown off by a sneeze, a mobile phone, or a misread signal from another driver. We are prone to mistakes; these things happen. Machines, on the other hand, do not make mistakes, or at least not as easily. They follow the instructions they were programmed with and constantly take in information, evaluating from one millisecond to the next whether to continue with their current plan or whether some external factor forces them to change it immediately, which makes accidents between autonomous cars very unlikely.

 

In the 1980s, the renowned economist Milton Friedman gave a controversial response to a question from a young Michael Moore, who was in the audience of a TV studio where Friedman was being interviewed[69]. The question revolved around a conflict involving Ford and its Pinto model. The Pinto was a very popular car, but it had a big problem: the gas tank was located at the rear, and in rear-end collisions it was prone to rupture and catch fire. The company had omitted a plastic guard that would have protected the tank, so a severe rear-end collision could make the car burst into flames. As a result, many people died and others were severely injured. Many survivors sued Ford and took the company to court.

 

While the case was being argued in court, it came to light that Ford had already been aware of this defect but had conducted a cost-benefit analysis to determine whether installing the plastic guard was worth it.

 

The plastic guard cost only $11 per car. Since Ford estimated it would sell 12.5 million vehicles, this meant roughly a $137 million expense to improve the safety of its customers. The company then estimated the cost of not making that investment: 180 deaths, to which it assigned a monetary value of $200,000 per person; 180 severe injuries at $67,000 per person; and 2,100 burned vehicles to repair at $700 each. All of this amounted to $49.5 million.
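For readers who want to check the arithmetic, a few lines of Python reproduce the comparison using only the figures quoted above (no new data is introduced; the $137 million in the text is the rounded value of 12.5 million × $11):

```python
# Reproducing the cost-benefit figures quoted above (all values in USD).
guard_cost_per_car = 11
cars_sold_estimate = 12_500_000

cost_of_fix = guard_cost_per_car * cars_sold_estimate  # $137.5 million, quoted as ~$137 million

deaths          = 180 * 200_000   # 180 deaths at $200,000 each     -> $36.00 million
severe_injuries = 180 * 67_000    # 180 severe injuries at $67,000  -> $12.06 million
vehicle_repairs = 2_100 * 700     # 2,100 burned vehicles at $700   -> $1.47 million

cost_of_payouts = deaths + severe_injuries + vehicle_repairs  # $49.53 million, quoted as $49.5 million

print(f"Installing the guard: ${cost_of_fix:,}")
print(f"Paying the claims:    ${cost_of_payouts:,}")
```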

 

In purely numerical terms, it was cheaper for the company to pay those sums than to install the famous plastic guard in every vehicle produced or to redesign the product altogether.

 

Friedman, controversial as always, replied that this was not a matter of moral principle: putting an "X" price on a life may feel wrong, but it still has to be done, because we assign a value to everything. Instead, he argued that consumers had the right, with all the information on the table, to buy this car $11 cheaper or to pay for greater safety by choosing another model or brand. In his view, this was not a problem inherent to the company; it was part of the free market, and people could decide accordingly. The fundamental principle under discussion was whether we should be free to choose how much to pay for our own safety without the government imposing safety regulations on manufacturers, since, after all, if all cars were lethal, no one would buy them. The famous invisible hand of the market.

 

Friedman even went a step further, stating that many people are not willing to pay much for their own safety, and that some even pay to reduce their chances of living a longer, healthier life, referring to smokers. He argued that smokers are within their rights to do so, however illogical it sounds, as long as they do not harm third parties and know in advance that smoking reduces their life expectancy. Some pay little to smoke, while others pay a lot for organic food. Irrationality at its finest.

 

I believe that the young Michael Moore's question to Friedman, and the latter's response, raise a fundamental issue for autonomous mobility.

 

Will governments establish regulations in which the pedestrian's life always prevails? Will that be the "standard" mode, with a special fee allowing the wealthiest to change this configuration in their vehicles, at the cost of a higher insurance premium?

 

And what if no such option officially exists, but someone decides to "tweak" their vehicle's configuration to tilt the odds in their favor? No software is unhackable forever; eventually, people will figure out how to modify it for this purpose. For example, the electric scooter I use every day comes with a factory setting that limits its speed to 25 km/h, but the internet offers plenty of free tutorials for removing that limit and using the motor's full power, reaching speeds close to 32 km/h. That may not sound like much, but it exceeds the 25 km/h cap that many national laws set for this type of personal vehicle. Going back to autonomous cars, what would happen if it were discovered that a person involved in a fatal accident had modified the regulatory settings? What would the penalty be? A fine? Prison? For how long? Would it count as premeditation? And what if someone modifies the software to make the car go faster than allowed? Should the car fine itself automatically? Should the fine go to the owner, or to the manufacturer that allowed a basic rule of its algorithm to be bypassed? After all, a vehicle's speed is something its own central computer can measure, as can the sensors of the vehicles around it.

 

And what will happen when our autonomous mobility systems are hacked through no fault of our own? Is the owner responsible? The insurance company? The 5G connectivity provider? The car manufacturer? The company that supplies the car's antivirus software? What?! Yes, you read that right: I said antivirus for cars. Wherever there is a computer system connected to the internet, sooner or later antivirus software will emerge for it; even if it is marketed as a "security system" or something similar, it will be an antivirus by another name.

 

What happens in the case of an imminent collision between two autonomous vehicles? Ideally this should not happen, and it is very unlikely to occur. Elon Musk and other experts have answered questions about this, making it clear that while human attention and field of vision are limited, that is not the case for autonomous cars, which are basically computers on wheels. These computers, equipped with 360° vision and powerful processors, analyze everything happening around the vehicle in milliseconds; while it takes us several seconds and several meters to register what is going on outside our vehicle, smart cars see and analyze everything as if it were moving in slow motion. In fact, if all goes well, in an unforeseen event the car should be able to brake in time, notifying the cars around it of its actions. But for the love of philosophizing, let's suppose there is an unforeseen event that will end in a fatality. Who should have priority? Should the car carrying more people go first? Does it matter whether one car carries a President and the other a family of four, or an ex-convict? Could we use smart contracts on a blockchain, with both cars communicating to execute certain clauses, so that one vehicle offers a predetermined sum, in tokens convertible to money, in exchange for preserving its occupant's life, knowing that the payment would go to the relatives and descendants of the person in the other vehicle? Too drastic? Well, what if instead, moving away from accidents, digital tokens were used for vehicles to negotiate who has priority at intersections? You could pay other drivers to always have the right of way and reach your destination faster!
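To make that last speculation a little more concrete, here is a deliberately simplified, hypothetical sketch of how two vehicles might settle right of way with token bids. Everything in it, the names (`Vehicle`, `settle_priority`), the bidding rule, and the fallback, is an illustrative assumption, not a description of any existing system; in the scenario imagined above, such an exchange would presumably be recorded as a smart-contract transaction agreed upon by both cars' computers.

```python
from dataclasses import dataclass

@dataclass
class Vehicle:
    plate: str
    token_balance: int   # tokens convertible to money, as speculated above
    bid: int             # what this vehicle offers to pay for priority

def settle_priority(a: Vehicle, b: Vehicle) -> Vehicle:
    """Toy negotiation: the higher bidder pays the other vehicle and goes first."""
    winner, loser = (a, b) if a.bid >= b.bid else (b, a)
    if winner.token_balance < winner.bid:
        # Not enough tokens to honor the bid: fall back to ordinary traffic
        # rules (e.g., first to arrive goes first) -- details left open here.
        return loser
    winner.token_balance -= winner.bid
    loser.token_balance += winner.bid
    return winner

# Example: one car pays 5 tokens to cross the intersection first.
car_a = Vehicle(plate="AAA111", token_balance=100, bid=5)
car_b = Vehicle(plate="BBB222", token_balance=100, bid=2)
first = settle_priority(car_a, car_b)
print(f"{first.plate} goes first; balances: {car_a.token_balance}, {car_b.token_balance}")
```

Even this toy version leaves open exactly the questions the chapter raises: who sets the prices, who audits the code, and whether such a market should exist at all.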

 

Many questions, few answers; that is what makes this debate interesting. Whether we like it or not, we must face these issues before it is too late. The only thing I am sure of is that even if a city's entire vehicle fleet were autonomous, we would still need traffic lights to avoid falling into the dictatorship of the pedestrian, where anyone crosses anywhere at any moment, forcing autonomous vehicles to stop constantly and making travel in them cumbersome. Elon Musk has even mentioned that his Tesla stopped when shown a T-shirt with a "STOP" sign printed on it[70], the equivalent of our red-and-white "STOP" or "DO NOT PROCEED" signs. Can you imagine a group of people deciding to "confuse" autonomous vehicles by going out all at once wearing T-shirts with traffic symbols?

 

In recent years, we have seen how technology emerges first and the companies that apply it then push for legislation in their favor. Politics today lags behind technological progress, and to some extent that is fine, since we cannot constantly limit technological progress through bureaucracy; the mere attempt would be absurd. Platform economies such as Uber, Rappi, Deliveroo, and Airbnb are a clear example of this, and they are here to stay and to evolve. Of course, in those businesses people's lives are generally not at stake so directly, beyond the occasional accident.

 



[69] Radio Libertaria. (2012). Milton Friedman vs Joven Michael Moore (Caso Ford) | RADIO LIBERTARIA [Video]. Retrieved August 13, 2021, from https://www.youtube.com/watch?v=mC_X-Vco_3Q.

[70] Towey, H. (2021). Elon Musk joked about Tesla autopilot mistakes as the technology faces scrutiny: ‘I actually have a T-shirt with a stop sign on it. If you flash the car, it will stop’. Business Insider. Retrieved March 9, 2022, from https://www.businessinsider.com/elon-musk-joked-fsd-autopilot-mistakes-tesla-ai-day-2021-8.