Asimov's Fourth Law: Call a lawyer


London, England - The Royal Academy of Engineering has published a report on the social, legal and ethical issues surrounding the development and use of autonomous systems. While these technologies can offer great benefits, the Academy raises the question: If something goes wrong, who is to blame? The machine itself, its designer or its maker?

Autonomous systems are likely to emerge in a number of areas over the coming decades. Applications are growing for technologies that can operate without human control, learn as they function and make decisions - from unmanned vehicles and robots on the battlefield to autonomous robotic surgery devices.

According to the report, while autonomous machines can replace humans in tasks that are mundane, dangerous and dirty, or that demand detail and precision - from defusing bombs to monitoring the ill or housebound - their use raises a number of social, legal and ethical issues.

Who is to blame - machine or man?

The report focuses on two emerging areas of technology - transport, in terms of autonomous road vehicles, and personal care and support in the form of artificial companions and smart homes.

Should autonomous systems be regarded as robotic people, in which case they might be blamed for faults, or as machines, in which case accidents would be treated just like accidents due to other kinds of mechanical failure?

Road accidents, even fatal ones, currently attract only cursory investigation compared with air or rail accidents, owing to the lack of detailed 'black box' data recording technology. But as data recording technology improves and costs come down, will all accidents be carefully analysed? According to the report, this raises legal and privacy issues: what would happen if most road accidents, currently classified by insurers as 'accidental', could reliably point the finger at the guilty party?

The report continues: "All technologies are liable to failure, and autonomous systems will be no exception (which is pertinent to the issue of whether autonomous systems should ever be created without manual override).

"Dealing with the outcomes of such failures will raise legal issues. If a person is killed by an autonomous system - say, an unmanned vehicle - who is responsible for that death? Does the law require that someone be held responsible?

The Academy points out that the law currently distinguishes between human operators and technical systems and requires a human agent to be responsible for an automated or autonomous system.

But technologies which are used to extend human capabilities or compensate for cognitive or motor impairment may give rise to hybrid agents - humans with autonomous prostheses which support their physical or cognitive functioning.

Without a legal framework for autonomous technologies, there is a risk that such essentially human agents could not be held legally responsible for their actions - so who should be responsible?

The report lists a number of questions that need to be asked before autonomous systems become mainstream, but does not supply the answers:

  • What are the consequences of giving up human choice and independence?
  • When is an autonomous system good enough?
  • Must standards for autonomous systems be higher than those for systems with human operators? How much higher?
  • Can the law keep up with technology? How can the law be changed to accommodate the range of autonomous systems?
  • What are the criteria for assigning responsibility, or degree of responsibility, for the failure of an autonomous system and the harm it may cause?
  • Who will be responsible for certification of autonomous systems?
  • How will the insurance industry deal with responsibility for failures and accidents involving autonomous vehicles?

Check out the report here.