
Who Pays When AI Goes Wrong?

Things can still go wrong, so who will be responsible?

15 Sep

The last half century has seen a growing body of film and literature examine the consequences of AI gone wrong. From 2001: A Space Odyssey to Ex Machina, there has been a steady stream of warnings to slow down and keep machines on a tight leash. While many of these works are critically acclaimed, one area they sadly neglect is who bears responsibility when AI goes wrong, and what that means for the insurance industry.

Admittedly, 2001 would arguably have been a considerably duller film had it spent its climax examining who was liable for HAL 9000's killing spree and the impact on insurance premiums. But in the real world, these concerns require a great deal more attention than they have been getting.

Driverless cars are the most pressing AI-related consideration for the insurance industry, with recent advances from the likes of Google, Uber, and Volvo making it likely they will dominate the roads within the next decade. In June, British insurance company Adrian Flux began offering the first policy specifically geared towards autonomous and partly automated vehicles. The policy covers typical car insurance staples such as damage, fire, and theft, as well as risks specific to autonomous driving: loss or damage resulting from malfunctions in the car's driverless systems, interference from hackers who gain access to the car's operating system, failure to install vehicle software updates and security patches, satellite failures or outages affecting navigation systems, and failure of the manufacturer's vehicle operating system or other authorised software.

This is an important step forward, demonstrating that the industry is finally engaging with the problem. However, it does not answer the question of who is liable for any accidents. Who is at fault if the car malfunctions and runs someone over? In a factory, if autonomous machinery goes wrong and disrupts production, is it human error for failing to override the system, or for buying the wrong system in the first place? Do you fire the management, or blame the manufacturer for not testing thoroughly enough?

There are essentially three different parties whom insurers could consider responsible, with strong arguments against each one paying.

Owner

A farmer sends an autonomous drone out to crop dust his field. He has all the equipment needed for it to work and has given it proper instructions, but the stupid thing dusts his neighbor's field instead. The angry neighbor tries to sue for negligence. The farmer would argue that he did everything required of him, and would likely try to pass the buck on to the manufacturer. However, with ownership of an item comes the assumption of the risk that goes with it. If your child bites someone, it is you who is held accountable. Is the same true of a robot?

The easy way to avoid this is to have meticulous protocols around handling AI at a consumer level, and to ensure all users are properly educated about them. This, while time-consuming, should remove any element of doubt. Equally, just as the responsibilities that come with having a child are off-putting for some, so the responsibilities of owning AI are likely to be, though probably not as off-putting as being held liable for everything the AI does.

Manufacturer

In Terminator 2, matriarch Sarah Connor goes after the designer of the AI that destroys most of mankind, Miles Bennett Dyson. But is it really the designer who should be held liable for any unforeseen circumstances that arise as a result of their creation?

Volvo, for one, has said that when one of its vehicles is in autonomous mode, Volvo is responsible for what happens. However, as AI systems learn from data, they will display behaviors that, because of the size and complexity of the datasets, are wholly unforeseeable. Does this mean the data is responsible? Or whoever is feeding it? Lawyer and Imperial College professor Chris Elliott argues: 'If you take an autonomous system and one day it does something wrong and it kills somebody, who is at fault? Is it the guy who designed it? What's actually out in the field isn't what he designed because it has learned throughout its life. Is it the person who trained it?'

In I, Robot, a robot is forced to choose between saving Will Smith's character, who did not want to be saved, and a young girl. As per its programming, it chose to save him because he had the greater chance of survival. If the girl's family were to sue the manufacturer because the robot failed to save their daughter, the manufacturer could quite easily argue that its programming was logical and the machine acted exactly as it was supposed to. Ultimately, this is a question of law. There must be stringent protocols, standards, and testing in place not just around safety, but also around the ethics of machine learning and the data that is fed into it.

The AI

In 2009, the UK's Royal Academy of Engineering published a report entitled 'Autonomous Systems: Social, Legal and Ethical Issues'. In it, they asked: '[A]re autonomous systems different from other complex controlled systems? Should they be regarded either as "robotic people" - in which case they might be blamed for faults; or machines - in which case accidents would be just like accidents due to other kinds of mechanical failure.'

Holding machines responsible seems far-fetched at the moment, but as they become more sentient, how would we hold them accountable for their own actions? If a robot is sentient, it has free will and understands the difference between right and wrong, and therefore must abide by the same rules humans do and suffer the consequences for breaking them. What those consequences would be is difficult to foresee. In 2001: A Space Odyssey, HAL 9000 shows fear when it is about to be shut down as a result of its actions, suggesting that a sentient robot would indeed feel the consequences. And what is an appropriate sentence? HAL 9000 is essentially given the death penalty, a sentence you would probably not want to hand to a robot that has merely held up a production line.

There are a lot of unanswered questions as we enter the dawn of AI. The next few years will be a real test, as automated technology is introduced alongside existing human-controlled technology and the two inevitably clash. The true implications of AI are yet to reveal themselves, and businesses and insurers must keep on top of shifting regulations to ensure they don't fall foul of ambiguities around accountability.
