AI And The Future Of War

Autonomous weaponry raises more ethical concerns than practical answers

20 May

AI is undeniably the most significant technology being developed today, and its applications in society are many and profound. The majority of its current uses are beneficial, as are its likely future applications. However, as with any new technology, there are also many ways that it is, and will be, used for what would be roundly considered ‘bad’ – and most worryingly of all, in warfare.

War has always driven technological innovation. Staring down the barrel of an AK-47 provides a motivation to innovate that you rarely find elsewhere. It has always been necessary to exploit every available tool to win wars, and to invent those that don’t yet exist. That the military should look to AI as a potential weapon is wholly predictable, if a bit depressing to those who hoped that we were growing out of war as a species rather than simply getting better at it.

The implications of handing over warfare to machines are not only depressing, though; they are deeply concerning from both a moral and a practical standpoint, and they raise the hoary old trope of killer robots once again. Some of the world’s greatest minds have come out against the idea, with Stephen Hawking and Elon Musk among more than 1,000 academics who signed a petition last year warning that a global robotic arms race ‘is virtually inevitable’ unless a ban is imposed on autonomous weapons. The consequences of such a race are difficult to predict, but unlikely to end in all sides sitting round drinking craft ale and watching reruns of the Gilmore Girls.

And anyone who thinks it’s just intellectual peaceniks who have a problem with using the technology in warfare would be way off the mark. A report released earlier this year warned that such weapons could be uncontrollable in real-world environments, where they are subject to design failure as well as hacking, spoofing, and manipulation by adversaries. It also argued that they will inevitably lack the flexibility humans have to adapt to novel circumstances, and that, as a result, killing machines will make mistakes that humans would presumably avoid. The report was written by Paul Scharre, who directs a program on the future of warfare at the Center for a New American Security.

From a military standpoint, there are some obvious advantages. Robots are far less prone to error, fatigue, or emotion than human combatants, and AI should, theoretically, be able to make quicker and better decisions on the battlefield. There is also a clear benefit in that it should, theoretically, limit loss of life. However, this is unlikely to remain the case for long. A war of attrition between robots would simply continue until one side could no longer pay for them, and it is inevitable that a way would be found to use them to kill humans, likely on a far greater scale than conventional weapons can manage. There is also the danger that war will seem far more appealing as a consequence of a perceived reduction in casualties, and that countries will be far more willing to engage in conflict over any slight.

There are arguments on both sides, but the debate seems to have been further confused by a lack of understanding of the difference between wholly autonomous weaponry and merely robotic weaponry, such as weaponized drones. Gartner defines AI as a ‘technology that appears to emulate human performance typically by learning, coming to its own conclusions, appearing to understand complex content, engaging in natural dialogs with people, enhancing human cognitive performance or replacing people on execution of non-routine tasks.’ Lethal autonomous weapons are defined as those that can ‘select and engage’ targets without the intervention of a human operator. Unlike drones, which are piloted by human beings who identify and authorize target engagement, autonomous weapons would be capable of finding, selecting, and engaging targets without human oversight. Robotic weapons, on the other hand, still keep humans ‘in the loop’ when selecting and engaging targets.

Some have argued that fully autonomous weapons are acceptable so long as there is a requirement to maintain human control over their use, which rather negates the label ‘fully autonomous weapons’. Others suggest that AI could be programmed to incorporate ethics and rules of engagement, which is a nice idea in theory but extremely difficult in practice: machine learning algorithms cannot easily be made to optimize both for winning and for behaving ethically.

This debate is, sadly, likely to be irrelevant. Regardless of whether or not they’re a good idea, people are going to start using them, and those they’re fighting will be obliged to do the same or perish. We can limit their use as much as possible to try to keep their impact to a minimum, but they are the future of warfare whether we like it or not. The arms race will never stop, because the finish line is total annihilation. There is only the hope that all sides get puffed out before that happens, but with money and hate as its fuel, this is unlikely. Bans may have worked for things like biological weapons and space-based nuclear weapons, but the arguments around their use are far less nuanced. An army of cheap, expendable, fearless, precise, diminutive, and astoundingly numerous ground combat robots would be similarly devastating to any enemy, but it is a far less toxic proposition for the general public. The only hope is that we can evolve out of warfare before AI evolves into it.
