Drones, Robots, and Driverless Vehicles
Liability Law for the Next Century and Beyond
What happens when a piece of hardware that is being controlled by a piece of software causes injury to flesh and blood? The legal premise of personal injury liability claims is that every person has a legal duty to avoid injuring anyone else. When a person fails to use “reasonable care,” for example by driving a car carelessly, he or she is deemed to have acted “negligently.” If the person’s negligent act causes injury to someone else, the person who acted negligently is legally liable for the damage caused by his or her negligence. Our civil justice system is designed to give people an incentive to avoid injuring others and to divert the cost of negligent behavior from those who are injured to those whose actions caused the injury. Despite some imbalances and abuses, that system has served us well. Our sense of natural justice is satisfied by awarding innocent victims enough money to compensate them for their losses, and it is only fair to take that money from the people whose human failings caused the losses.
During the first two hundred forty years of our nation’s existence, there was little reason to contemplate legal liability in a scenario devoid of human acts. If an injury was caused by an event in which no person was involved, the injury was considered “unavoidable” or an “act of God,” and no recovery for negligence could be had. However (with apologies to Bob Dylan), the times they are a-changin’. Artificial intelligence is displacing direct human control in many fields, including aircraft, personal services, and motor vehicles. Our skies are increasingly dotted with pilotless aircraft – some of which are under the direct control of nothing but a software program. Robots with artificial intelligence were once found only in science fiction, but are now being mass-produced. Cars and trucks that navigate our streets and highways without human intervention are being tested and will inevitably become commonplace. What happens when the “negligent” actor is not a person, but is instead a device operating under the control of artificial intelligence? Will we categorically deny any recovery to the injured person – even when he or she is the proverbial innocent victim? Will we hold owners of intelligent devices strictly liable? Or, will we develop a body of law that satisfies our desire for natural justice and promotes the fair allocation of losses? The answer has yet to be determined, but it is critically important to claim managers and defenders of liability claims.
1. The product liability model
One approach is to apply the law of product liability to artificially intelligent devices. Under that body of law, the designer, manufacturer, maintainer, or programmer is held liable for acting negligently when that negligence indirectly causes a product to injure another person. The field of product liability law did not exist before the industrial revolution of the 1800s. Since then, it has developed into a sophisticated body of common law, statutes, and regulations that endeavors to identify a defect in the object or device that caused the injury, and then to trace the source of that defect to a person who can be held negligent and, therefore, liable. In order to recover, the injured person must establish a direct causative nexus between the negligent act of a person and the injury. If there is no other explanation for the injury, the doctrine of res ipsa loquitur creates a presumption of negligence on the part of the person who owns or controls the product. Product liability law has expanded and defined the concept of “causation” so broadly that it cannot be stretched further without losing all meaning. To find a causative link between an injury caused by an autonomous device and the owner of that device, it would be necessary to prove that the owner acted negligently and that no superseding cause intervened. If an artificially intelligent device is working properly, but simply makes a poor decision that its owner could not have foreseen, the causative link has been broken and the owner cannot be liable under a product liability theory. That would leave the injured person without a legal remedy. That problem was first noted in a 1997 case in which the manufacturer of an industrial robot was sued after a worker was killed by the robot. The court dismissed the products liability action “because plaintiff failed to introduce evidence which created a genuine issue of material fact concerning its claims of negligence and defective design.” Payne v. ABB Flexible Automation, No. 96-2248, 1997 U.S. App. LEXIS 13571, at *5 (8th Cir. 1997). A product liability case against the manufacturer of a surgical robot was dismissed in 2009 because there was no evidence “that the da Vinci robot had a defect under a strict products liability theory.” Mracek v. Bryn Mawr Hospital, 610 F. Supp. 2d 401, 406 (E.D. Pa. 2009). A significant overhaul of product liability law would be required to adapt it to artificially intelligent devices that move around in the real world.
2. The animal model
Another approach to liability for artificially intelligent moving devices is to apply the legal principles governing damage caused by an animal. Historically, owners of non-domesticated animals have been held strictly liable for any harm caused by those animals, regardless of their level of control over the animal. However, a different standard applied to owners of domesticated animals, such as dogs and livestock. As noted by a New York court two centuries ago: “If damage be done by any domestic animal, kept for use or convenience, the owner is not liable to action on the ground of negligence, without proof that he knew that the animal was accustomed to do mischief.” Vrooman v. Lawyer, 13 Johns. 339, 339 (N.Y. Sup. Ct. 1816). Statutes in some jurisdictions impose strict liability upon the owner of any animal that causes damage, regardless of knowledge, intent, control, or foreseeability. Under the theory of strict liability, no negligence is required – if an animal causes damage, its owner is liable. The public policy rationale for the rule was simple – the duty to prevent damage to others should be placed on the person who had the most control over the animal. The same rationale can be applied to drones, robots, and driverless vehicles. The person who owns an artificially intelligent device is arguably in the best position to decide whether it is reasonably safe to hit the “On” button, and should therefore bear the risk of anything that happens after that button has been pushed. (Which raises the question of liability for a trip-and-fall caused by a housecat riding a Roomba vacuum.)
3. The vicarious liability model
Finally, another approach is to adapt a theory of vicarious liability – such as master/servant, principal/agent, or employer/employee – to create a comparable legal relationship between a human and an artificially intelligent device. Under that theory, if a device causes injury while acting within the scope of its “employment,” the person who “employs” the device (i.e., sets it in motion) would be legally liable for the device’s negligence, regardless of the person’s own negligence. If an injured plaintiff can establish that a reasonably prudent, artificially intelligent device would not have caused the injury in the same circumstances, the “master” of the device would bear the burden of the claimant’s loss. The problem with such an approach is that the law has never recognized that anything other than a human being can owe a legal duty. Doing so would require an unprecedented leap in the development of our jurisprudence. In addition, identifying the “master” may prove difficult, if the past century of litigating the issue among humans is any indication. Before we can impose vicarious liability upon the “master” of a device, we must decide whether we are ready to attribute negligence to an artificial object for the poor judgment of its artificial intelligence.
The law has always evolved to fit the changing circumstances of our society. Just as eighteenth-century law did not account for motor vehicles, our current law does not account for autonomous moving objects. The application of artificial intelligence to moving devices will require a sea change in liability law. The courts, aided and guided by those who litigate liability claims, will determine the course of claims involving artificially intelligent moving devices in the coming years. Whether the law takes one of the tacks described above or another, as-yet-uncharted course, we must remain vigilant as the winds of change sweep through, and we must seize any opportunity to guide the development of the law in our field. Of course, whatever legal solutions we develop will last only until the machines assume command and we are no longer in control…. But that’s for a future discussion.