Campaigners are calling for laws which are similar to Isaac Asimov’s first law of robotics to prohibit the use of robots which can kill without a human control switch.
Researchers believe that autonomous killing machines could be combat-ready within the next 20 years if a doctor with a sonic screwdriver does not stop them. Nobel Laureate Jody Williams, of the Campaign to Stop Killer Robots, said that building real Doctor Who villains would breach a moral and ethical boundary that should never be crossed.
Williams, who won the 1997 peace prize for her work on banning landmines, said that if war is reduced to weapons attacking without human beings in control, it is civilians who will bear the brunt of warfare.
Noel Sharkey, professor of Artificial Intelligence and Robotics at the University of Sheffield, said that robots already have a certain amount of autonomy, and that if someone asked him to build an autonomous killer robot today, he could knock one up in a few days.
However, he added that the technology is a long way off being able to distinguish between a soldier and a civilian, and the idea of a robot being asked to exercise human judgement is silly.
According to Yahoo, the British government has always insisted it has no intention of developing such technology. But then again, Britain has not had the money for such a project since the Blue Streak was cancelled.
The Royal Navy has a defensive system called Phalanx, which can be used in an automatic mode to protect personnel and ships from threats such as incoming missiles, but a human operator oversees the entire engagement.
But the organisers of the Campaign to Stop Killer Robots are worried that Britain's rejection of fully autonomous weapons is not yet watertight, particularly as the United States, China, Russia, Israel and Germany are expected to move towards systems that will soon give full combat autonomy to machines.