Just a few years ago, Artificial Intelligence was a subject of discussion and study confined to academic research and science fiction movies.
For most of the last century, major technological advances were driven by state investment in research and development, often linked, of course, to the expansion of military capabilities.
While the idea here is not to defend or promote any kind of militarization, this model did have one benefit for Nation-States: in addition to improving their defensive and offensive capabilities, it was governments who set the legal terms for how the emerging technology could be used.
Current military developments commissioned by states aim to protect civilian lives, beyond the obvious goal of better detecting enemies on the battlefield, which, without defending war, I believe is appropriate if we want to avoid so-called “collateral damage” to civilians. I would love to live in a world without wars and violence, and I’m sure you would too, but that is not the case. We all know the world we live in.
Today, military law prevents weapons of mass destruction from operating autonomously: a human must remain in control and decide before any attack on the enemy. The clearest example is the combat drones that have become so well known through movies and recent news. It’s true they are unmanned, but in the end there is a person, sitting in some secret military base, controlling the drone remotely and deciding when to pull the trigger.
As stated in Directive 3000.09 of the Department of Defense of the United States, the world’s leading military power, the use of autonomous capabilities in combat zones with humans present is expressly prohibited, one reason being that the technology still cannot discriminate between enemy soldiers and civilians with absolute certainty[59].
That said, on November 27, 2020, Mohsen Fakhrizadeh, who led the development of Iran’s nuclear program, was assassinated[60] by a burst of shots fired from a vehicle 150 meters away. There were no humans in the vehicle from which the shots were fired; inside was only the lethal weapon that ended Fakhrizadeh’s life. His wife was less than ten centimeters from him and was unharmed; the only fatalities were Fakhrizadeh himself and his bodyguard. Whether AI was used to fire those shots, or whether someone controlled the weapon remotely, remains an enigma and is likely to stay that way, since the Nissan car from which the shots were fired self-destructed as programmed after the attack, eliminating any evidence.
This opens a new debate, because the regulations accepted by states are one thing, and the capabilities that technology can place in the hands of a terrorist group or a criminal are quite another. Terrorists have never governed their behavior according to the law, and they will not start now. The problem is that acquiring a missile or enriched uranium to cause a catastrophe has always been difficult and, at the end of the day, involved physical materials and chemical elements that are easily detected at customs and airports. A computer virus or an AI program designed to cause harm, on the other hand, can be sent by email, or even via WhatsApp or Telegram, crossing borders without raising the slightest alarm among national and international security bodies.

Consider the example of Stuxnet, a computer virus that began spreading through a USB stick and not only infected thousands of computers worldwide but was designed to attack major industrial infrastructure, especially in Iran[61]. The virus quickly spread from one computer to another, gaining access through the network, reprogramming entire systems, and receiving updates remotely. It took control of the Natanz nuclear facility in Iran. The attackers did not get there by chance: they took control of the centrifuges used to separate uranium isotopes and spun them up to such speeds that the machines destroyed themselves. Who are we to say that the next major war will not begin with a virus attacking a country’s critical infrastructure? If this could be done to a nuclear facility, what is stopping cyber attackers from going after a country’s power grid, sabotaging air traffic control systems, or collapsing hospital systems, perhaps even in the middle of a pandemic?
A few years ago, a fictional video went viral. It was created by the Future of Life Institute and Stuart Russell[62], who are dedicated to raising awareness about the dangers of autonomous weapons. In the video, a presenter appears on stage giving a talk in the style of a Silicon Valley CEO unveiling a new product or service. The main difference is that the product he presents is a mini drone carrying a small explosive charge, programmable to identify its target through facial recognition, fly straight toward it, and detonate on impact. The video is shocking, and that is precisely why I think you should watch it: if we analyze it carefully, all the technology needed for someone to carry out an attack of this kind already exists. A drone can be bought or built at home, and adding a camera and facial recognition software is just as easy. You would only need to attach some explosive to its body, and voilà: it could be programmed to detonate on impact, on a timer, or on a remote command. Remember that even miniature drone models are sold today as children’s toys, making access to this technology extremely easy.
Killer Drones – Future of Life Institute
The use of AI in armed conflicts has worried the scientific community for a long time, which is why several scientists and renowned figures have signed an open letter calling for a ban on the use of this technology for military purposes[63]. Among them are António Guterres (UN Secretary-General), Francesca Rossi (Professor of Computer Science at Harvard University, former President of the International Joint Conference on Artificial Intelligence, and co-chair of the AI Impact and Ethical Issues Committee of the American Association for Artificial Intelligence), Stephen Hawking (theoretical physicist and cosmologist), Elon Musk (CEO of Tesla, SpaceX, and Twitter, and co-founder of PayPal), Barbara J. Grosz (former President of the American Association for Artificial Intelligence), Steve Wozniak (co-founder of Apple), Jody Williams (1997 Nobel Peace Prize laureate for her work to ban landmines and cluster bombs), Noam Chomsky (Professor at the Massachusetts Institute of Technology), Lisa Randall (Professor of Physics at Harvard University), and Demis Hassabis (CEO of DeepMind). The world’s political leaders must call on the brightest and most capable minds in this field to regulate this technology and maintain oversight of its use in wars and other scenarios, no matter how difficult that may be.
Let’s stop taking at face value what Hollywood presents to us. No machine will suddenly become “evil”. As always, the data we feed into our Artificial Intelligence and the instructions we give it will determine its actions. If the input data is biased, our AI will undoubtedly acquire that bias and probably amplify its negative effect on society, since the program will keep optimizing its results. That is why we must insist on the importance of forming multidisciplinary teams to vet the data we feed our algorithms. The same care is needed when formulating the task to be solved: if it is not defined with sufficiently clear parameters, our AI may take unexpected paths to achieve its goal. Contemplating autonomous weapons capable of deciding who lives and who dies without human intervention may be one of the greatest moral thresholds we will ever face, and the time to confront it is now, as countries such as the United States, China, Israel, and Russia, among many others, are already developing autonomous weapon systems[64]. This topic can no longer remain in the shadows.
If this technology cannot yet understand the intent behind a few simple lines of text, how can we accept that it should discriminate between a civilian and a soldier in a complex war scenario? That is the situation today; in the future it will probably succeed, or at least reach a very high success rate, and policy will need to advance in step with that technology.
The dehumanization of war could become one of the greatest injustices in history. Replacing human troops with robots will make it easier for a powerful country to decide to go to war, and the use of autonomous weapons that know nothing of compassion will be a grave threat to civilians, whose deaths are so often disguised under the label of collateral damage. Artificial Intelligence should never, I repeat, never, violate human rights.

Let us remember, then, what happened on September 26, 1983, during the Cold War. That day, Stanislav Petrov became a hero of humanity. While monitoring the Soviet Union’s air defense system, he saw that the early warning radar indicated that the United States had launched five intercontinental missiles toward the USSR[65]. Petrov deduced that it was a mistake and decided to wait for more evidence instead of notifying his superiors, who would likely have ordered a retaliatory strike against the United States and its NATO allies had he done so. Now imagine what would have happened if an AI, rather than Petrov, had detected those launches: it would have had to respond immediately, and yet it was a false alarm. Years earlier, in 1980, the Pentagon’s National Military Command Center mistakenly detected 220 Soviet missiles heading toward the United States[66]. The cause? A faulty 46-cent chip.
[59] Department of Defense. (2012). Directive N° 3000.09. Autonomy in Weapon Systems [Ebook]. Retrieved May 18, 2021, from https://www.esd.whs.mil/portals/54/documents/dd/issuances/dodd/300009p.pdf.
[60] Wintour, P. (2020). Iran says AI and “satellite-controlled” gun used to kill nuclear scientist. The Guardian. Retrieved May 23, 2021, from https://www.theguardian.com/world/2020/dec/07/mohsen-fakhrizadeh-iran-says-ai-and-satellite-controlled-gun-used-to-kill-nuclear-scientist.
[61] Zetter, K. (2014). An Unprecedented Look at Stuxnet, the World’s First Digital Weapon. WIRED. Retrieved March 15, 2021, from https://www.wired.com/2014/11/countdown-to-zero-day-stuxnet.
[62] Hambling, D. (2021). “If Human, Kill”: Video Warns Of Need For Legal Controls On Killer Robots. Forbes. Retrieved December 27, 2022, from https://www.forbes.com/sites/davidhambling/2021/12/03/new-slaughterbots-video-warns-of-need-for-legal-controls-on-killer-robots/?sh=6f40e0b37238.
[63] Future of Life Institute. (2015). Autonomous Weapons: An Open Letter by AI & Robotics Researchers. Retrieved May 22, 2021, from https://futureoflife.org/open-letter-autonomous-weapons/?cn-reloaded=1.
[64] Wareham, M. (2020). Stopping Killer Robots. Human Rights Watch. Retrieved August 15, 2022, from https://www.hrw.org/report/2020/08/10/stopping-killer-robots/country-positions-banning-fully-autonomous-weapons-and.
[65] The New York Times. (2017). Stanislav Petrov, Soviet Officer Who Helped Avert Nuclear War, Is Dead at 77. Retrieved May 18, 2023, from https://www.nytimes.com/2017/09/18/world/europe/stanislav-petrov-nuclear-war-dead.html.
[66] Schlosser, E. (2016). World War Three, by Mistake. The New Yorker. Retrieved May 18, 2023, from https://www.newyorker.com/news/news-desk/world-war-three-by-mistake.