Elon Musk has a lot to say about AI. He is a vocal advocate for a universal basic income to counteract the damage AI is likely to do to working people if it takes their jobs, he has called for much tighter regulation of the technology, and he has recently pushed for a UN resolution banning the use of AI in weapons.
This move has been supported by over 100 other CEOs from across the globe, most of whom, given their expertise in the area, would stand to benefit from any related military contracts. They published an open letter commending the UN for certain moves, such as establishing groups to investigate the issue and appointing respected people to positions of power within those groups. However, they also highlighted the danger these weapons represent, saying:
‘Lethal autonomous weapons threaten to become the third revolution in warfare. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend. These can be weapons of terror, weapons that despots and terrorists use against innocent populations, and weapons hacked to behave in undesirable ways. We do not have long to act. Once this Pandora’s box is opened, it will be hard to close. We therefore implore the High Contracting Parties to find a way to protect us all from these dangers.’
However, this is an issue with several distinct elements that need to be considered before we can come to any firm conclusion about whether AI in weapons is a good or bad thing.
No, It Shouldn’t Be Banned
One of the key elements is how AI is actually used and programmed within these weapons, because in addition to its potential to do huge damage to civilian populations, it also has the potential to significantly reduce that damage.
In total, two nuclear bombs have been dropped as an act of war, on August 6 and August 9, 1945. The destruction they wrought saw over 200,000 people die, and rates of leukemia among survivors rose 46% afterward. The huge damage inflicted by these weapons has meant that no leader in the past 72 years has used them, despite trillions being spent on their development and upkeep. The theory of mutually assured destruction (MAD), despite seeming like terrifying brinkmanship, has kept an uneasy peace. During the Cold War, the US and the Soviet Union even built systems designed to destroy the enemy even if their own country had already been annihilated, which used sensor readings, an early precursor of the IoT, to determine whether the country had been destroyed or not. It could be argued that because these weapons could kill millions in the blink of an eye, they have actually made the world a safer place. The same could be said of AI weapons: with potential casualties so high if they were unleashed on a civilian population, countries would be less likely to attack a nation armed with the technology.
Much like nuclear weapons, one of the only ways to defend against AI weapons is to have AI weapons yourself. We have seen through the development of nuclear-tipped ICBMs in North Korea that countries with the will to create weapons of mass destruction will create them whether the UN gives them permission or not. If a rogue state or even a terrorist organization were to create AI weapons while other countries lacked the same capabilities, the damage to civilian populations could be huge. AI weapons also require considerably fewer trackable components than other weapons of mass destruction, so it would be far easier for them to be made in secret, leaving the international community unable to react should they be deployed.
Yes, It Should Be Banned
One of the arguments in favor of AI weapons is that they allow for more precise targeting than manual weapons systems, but countless examples have shown that 'highly targeted' attacks are rarely conducted with the kind of surgical precision initially promised. When we consider the 'war on terror' in places like Iraq, Afghanistan, and Syria, the death toll is almost impossible to calculate because identifying enemy combatants is incredibly difficult. These wars were conducted by humans with high-tech weapons who could make snap judgments, and even then it is likely that hundreds of thousands of innocent people were killed because they looked similar to combatants. The reality is that AI-driven decision-making would fare no better at these kinds of judgments in fast-moving modern warfare, which often takes place in confusing urban environments where distinguishing between combatants and civilians is increasingly difficult.
AI also requires millions of hours of practice before it can become accurate, as we have seen with automated cars, which have so far driven over 2 million miles but are still not considered to have been tested thoroughly enough to transport people, let alone make life-or-death decisions. Lab testing and video input are only useful up to a point; these systems require field experience to become genuinely accurate. But how can these weapons be deployed without that experience? It creates a genuine catch-22: they need experience to be trusted in the field, but they can only gain that experience by being in the field.
AI is also unpredictable in how it will interpret things or evolve, which is dangerous in machines with the capability to take human lives. For instance, Facebook recently shut down one of its AI projects when it created its own language, making it impossible for engineers to understand the reasoning behind the decisions being made. If the same happened with a system making split-second decisions about whether or not to kill a human being, it would be a recipe for disaster. Humans have the capacity for empathy, guilt, and remorse over their actions, meaning that pulling a trigger or dropping a bomb is a last resort. These kinds of decisions need to be made with that weight attached, which they cannot be if they are made by an unfeeling algorithm.