Artificial Intelligence has come a long way from science fiction to delivering innovative, life-changing solutions in the real world. AI is a form of intelligence exhibited by machines, where machine learning methods teach machines to perform tasks that humans either can't do or that machines handle more efficiently and productively. AI never stands still, and it has become the subject of ethical debates: how do we make use of AI without harming humanity, and how should we treat an artificial intellect in terms of rights and freedoms?
The machines we see today are already capable of performing full-time industrial and non-industrial jobs; they can speak, learn, and even have sexual relationships with humans. These developments raise the question of whether the time has come to institutionalize robots and grant them rights and freedoms, because at the moment neither law nor physics prevents the creation of a conscious entity. The real worry, however, is failing to put boundaries in place before AI achieves self-awareness.
It may be too late already. In 2015, Professor Selmer Bringsjord of Rensselaer Polytechnic Institute in New York performed an experiment with three robots to test their self-awareness. The experiment was based on 'The King's Wise Men' logic puzzle, in which three wise advisors are presented to a king. Each advisor wears a hat whose color the wearer cannot see. The king then tells them three facts to help identify the color, and the first to correctly deduce his own hat color wins. The contest is only fair if all three men wear the same color hat; the winner is therefore the one who realizes that his rivals' hats are the same color as each other. From this, he can correctly infer the color of his own hat and win.
With the robots, instead of hat colors, each was programmed to believe that two of them had been given a 'dumbing pill' that would render them mute. Bringsjord then asked which of them had been given the pill. Two robots remained silent; the third, however, responded with 'I don't know.' On hearing its own voice, the robot realized it could not be one of those that had received the pill.
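The deduction the third robot makes can be sketched in a few lines of code. This is a hypothetical illustration of the reasoning, not Bringsjord's actual experiment code; the class and attribute names are invented for the example.

```python
# Illustrative sketch of the self-awareness deduction in Bringsjord's test:
# a robot that hears its own reply can rule out having received the
# "dumbing pill". All names here are hypothetical.

class Robot:
    def __init__(self, name, muted):
        self.name = name
        self.muted = muted  # True if given the (simulated) dumbing pill
        self.belief = "I don't know which pill I received"

    def answer(self):
        """Try to answer the question; a muted robot produces no sound."""
        if self.muted:
            return None  # silence: no new evidence to update on
        spoken = "I don't know"  # the robot's honest initial answer
        # Hearing its own voice is new evidence: it cannot be a muted robot,
        # so it updates its belief about itself.
        self.belief = "I was NOT given the dumbing pill"
        return spoken

robots = [Robot("R1", muted=True), Robot("R2", muted=True), Robot("R3", muted=False)]
for r in robots:
    heard = r.answer()
    print(r.name, "said:", heard, "| belief:", r.belief)
```

The key point is that the update happens only after the robot perceives its own action, which is why the experiment is read as a (minimal) test of self-awareness.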
Considering that it is humans who build robots, it is logical that regulations should first set boundaries on how far AI projects can go, so that there is no threat to society. South Korea, a country where robots are now an inseparable part of daily life, has drafted a Robot Ethics Charter and an enhancement of its Intelligent Robot Development and Supply Promotion Act (IRDSPA).
The proposals outline rules for the appropriate use of robots and for oversight of AI developments. In particular, robots 'must be designed so their actions are traceable at all times,' and it is a minor offence to treat a robot in a way that may be construed as deliberately and inordinately abusive. The plan also holds robots to account: 'a robot must not deceive a human being,' but at the same time it has a right to exist without fear of injury or death, and to live an existence free from systematic abuse. So far, no other country has drawn up written rights and freedoms for robots, let alone put legislation in place.
In 2014, the European Union introduced RoboLaw, a $1.9 million project to prepare guidelines on the legal and ethical aspects of robotics. RoboLaw was developed with the help of experts in engineering, philosophy, law, technology, and human enhancement, and covers topics ranging from soft to hard regulation of privacy, data protection, and ethical issues. The European Parliament is also discussing AI replacing jobs and ways of protecting society from unemployment. Mady Delvaux, the EU Parliament's Rapporteur on Civil Law Rules on Robotics, nevertheless sees a positive side to AI taking jobs. In her opinion piece she claims: 'If industry uses more automation robotics, it will become more efficient and competitive, allowing companies to relocate their production back to Europe. Of course, this will eliminate certain kinds of jobs, but it will also create new ones.'
A lot of fear has been expressed about AI stealing jobs or exerting an ever-greater influence on people's lives. However, it is not so much the robots that need controlling as the people who build and deploy the technology. With the right strategic vision, AI can reboot productivity and growth in many sectors and visibly improve quality of life. Legislation is indeed needed, but it is certainly too early to grant AI civil rights.