
Robots Don’t Need Rights, They Need Limits

Robots need to be seen as tools, not as people

18 Jan

As AI in the real world begins to catch up with its representation in books and movies, you could be forgiven for thinking there is an army of hyper-intelligent robots around the corner, ready and waiting to take your job and steal your partner. While robots have yet to develop human emotions, they already appear, to my mind at least, pretty smug.

The EU could, however, be about to wipe this imagined smirk off their metal faces. Recent advances in machine learning and deep learning algorithms - the technology behind AI - have raised a number of moral and ethical questions around their use. For example, who bears the burden of responsibility when they go wrong? Is it the engineer, the owner, or the retailer? The European Parliament's legal affairs committee has this week taken the first step towards answering some of these questions, voting 17-2 in favour of proposals for a framework governing the use and creation of robots and AI. As a result of the vote, the European Commission has been invited to present a legislative proposal, while the European Parliament will vote on draft proposals in February.

The report’s author, Luxembourgian MEP Mady Delvaux, noted that: ‘A growing number of areas of our daily lives are increasingly affected by robotics. In order to address this reality and to ensure that robots are and will remain in the service of humans, we urgently need to create a robust European legal framework.’ The report covers a range of areas, including rules requiring AI developers to imbue their creations with restrictions that prevent them from harming a human, or from allowing a human to come to harm through inaction; a means for robots to always be identifiable as such to humans; and a proposal that robots be taxed should they become participants in the economy. Perhaps the most important provision, however, is a proposal for a form of ‘electronic personhood’ that provides the most advanced AI with limited rights and responsibilities. According to the report, ‘the most sophisticated autonomous robots could be established as having the status of electronic persons with specific rights and obligations, including that of making good any damage they may cause.’

‘It is similar to what we now have for companies,’ Delvaux noted. The idea of corporate personhood has existed for well over a century, giving firms the right to take part in legal cases as both plaintiff and respondent, to own property, and to exercise a limited form of free speech. They are limited in that they cannot vote, run for office, or bear arms. The comparison with corporate personhood is not, on its face, an unreasonable one. When it comes to promoting commercial interests, however, various parties have proven themselves highly adept at exploiting the notion of corporate personhood for their own gain, and there is no reason to think that they wouldn’t take advantage of electronic personhood in the same way. The idea of electronic personhood has disturbing implications and may have consequences that we cannot yet fully predict. Ashley Morgan, of international legal practice Osborne Clarke, for one, argues that the concept of ‘electronic personhood’ is legally complex, noting, ‘If I create a robot, and that robot creates something that could be patented, should I own that patent or should the robot? If I sell the robot, should the intellectual property it has developed go with it? These are not easy questions to answer, and that goes right to the heart of this debate.’

More than this, it presents many problems further down the road. Ultimately, it does not answer a fundamental question that really needs to be settled before AI grows intelligent beyond our control: what function do we see robots fulfilling in society? This will, admittedly, be difficult to answer until we know more about where AI is going, but if we allow robots to take a more human form, there are likely to be a number of issues with what electronic personhood means and how it is enforced.

It all comes down to whether we decide to give them emotions. There are many who argue that when the inevitable happens and machine intelligence does outstrip ours, if robots are programmed to understand and replicate - in essence ‘feel’ - empathy, then we are far more likely to avoid human extinction. Researchers have already developed a test to gauge whether a computer can demonstrate human-like creativity, known as the Lovelace Test, which asks a computer to create something original, such as a story or poem. Futurist Ray Kurzweil, a leading AI scientist, said in an interview with Wired that once a machine understands that kind of complex natural language, it becomes, in effect, conscious. He believes this moment will come around 2029, when machines will have full ‘emotional intelligence, being funny, getting the joke, being sexy, being loving, understanding human emotion. That's actually the most complex thing we do. That is what separates computers and humans today.’

Giving AI emotions is highly complicated for a number of reasons. The obvious emotions you would like robots to have are empathy and compassion, but can these exist in something that cannot understand, and therefore cannot ‘feel’, their counterparts? Can you cherry-pick emotions, or do you need the full spectrum? And who defines exactly what empathy is? Furthermore, assuming you create a sentient being, do you need to expand its rights further? If you are forcing a capable robot to pay taxes, why should it not be granted the rights of human citizens and a say in the running of society?

When the UK Department of Trade and Industry theorized a decade ago that by the middle of this century machines could be demanding the same rights as humans, and that nations would have to provide benefits including energy, housing, and even ‘robo-healthcare’ to robots, it was scorned by experts, who argued that there are more pressing concerns, such as the safety and legal liabilities of robots, and the increasing robotization of the military. But all of these concerns come back to the same central question of what exactly robots are. Attempts to enact Asimov’s laws to prevent them from harming humans may stop them from starting a revolution that kills us, but are we also denying them the right to strike if they come to perceive themselves as having rights that are being denied? And just because they can’t kill us, does this mean that we should treat them however we wish? If we have given them emotions, to deny them humane treatment would be to deny our own humanity. Anyone who saw the first series of Westworld will be familiar with the question of how humans should treat AI, and the bloody consequences of treating them poorly.

These are no longer questions for tomorrow, they are questions for today. The human instinct to anthropomorphise anything and everything means that pro-robot activism will probably exist before robots are anywhere near replicating human emotions. The answer must ultimately be that we do everything in our power to avoid giving robots consciousness and ensure that they remain machines. The question must always be not what we can do for robots, but what robots can do for us. Many of the provisions the European Parliament set out are sensible, but the idea of electronic personhood is a step down a dangerous path, at the end of which robots are no longer seen as tools.
