Should robots have human rights?

The European Commission has been deliberating over a controversial move to give artificial intelligence and robots their own ‘human rights.’ The debate comes after the Commission called for a 20-billion-euro investment in artificial intelligence, hoping that it will improve the lives of European citizens and help solve issues in healthcare, sustainability, climate change and cyber security.

However, experts from around the world, including Mark Zuckerberg and Elon Musk, have been debating the role of AI and human rights, acknowledging that the relationship between man and machine is getting closer. Following Saudi Arabia’s decision to grant citizenship to a robot called Sophia, the debate has become very topical.

Why robots should have human rights

Artificial intelligence and machine learning refer to how systems and machines continue to learn after they have been programmed, taking in data, spotting trends and producing desired outcomes. This is already evident in consumer products such as Amazon’s Alexa-powered Echo, and in business products such as Google’s search algorithm and the marketing software from Phrasee.

Robots have become a human rights issue because of the impact they make on human lives. Firstly, there is a huge debate over the potential of robots to take over human jobs through automation, and whether humans should be compensated as a result.

There is also the case that robots are becoming more human-like, with faces, and that they respond to commands. Take the example of Alexa, which greets you in the morning, welcomes you when you come home from work and plays your favourite music on command: there is a two-way relationship between the human and the robot. As Northeastern professor Woodrow Hartzog argues, when it comes to forming relationships with robots, “we are heading that way.”

In fact, Sophia, a robot modelled on Audrey Hepburn, has been granted citizenship in Saudi Arabia. It is programmed to respond to voice commands as though it were a regular person.

Elsewhere, there are moral issues with using robots to care for the elderly, and with whether facial recognition software discriminates against different ethnic groups and minorities. (Oxford Human Rights Hub)

Granting human rights would also mean that robots should be held accountable for any harm they cause to humans, a concern amplified by movies and the coining of the term ‘killer robots.’

And vice versa. While it is bad for a robot to stab a human, if a human stabbed a robot that was using its intelligence to aid humanity, the human would be held accountable. In this light, it can be argued that robots that aid humanity could be given human rights.

Why robots should not have human rights

In simple terms, it is crazy to think that a machine or a vacuum cleaner should be treated in the same way as a human being, especially since machines do not have free will and cannot experience love or pain, even if something like Alexa does start to learn your routine and favourite songs. “Robots will never become humans,” explains Elżbieta Bieńkowska, the European commissioner for industry.

Equally, the concept of taking legal action against an individual machine is preposterous. What are we going to do, throw a robot in jail until it has been rehabilitated? This is impractical. Still, a legal framework for punishment when a robot does cause harm to a human is worthy of conversation, something that has been overlooked until now and has only been the stuff of movies.

The ethical dilemmas that robots raise, whether by assisting, potentially harming or replacing humans, are worthy of discussion. In the future, proposals for a human rights framework to support ‘ethical AI’ may become increasingly attractive.

For this to happen, stronger relationships would have to be formed between the fields of technology and human rights, which until this debate have often been disconnected. Corporations that create and design AI need to be aware of their potential power and harm, an idea that is growing with new terms such as ‘corporate personhood.’

It is likely that we will see increased regulation of robots and AI, and it may take a serious example of a robot harming a human being to trigger it. Still, this debate shows a real sign of things to come.