Who is responsible when artificial intelligence harms someone? A California jury may soon have to decide. In December 2019 a driver operating a Tesla with an AI-assisted driving system killed two people in a crash. The driver faces up to 12 years in prison. Several federal agencies are investigating Tesla crashes, and the U.S. Department of Justice has opened a criminal probe into how Tesla markets its self-driving system. California's Department of Motor Vehicles is also examining Tesla's use of AI-guided driving features.
Our current liability system—used to determine responsibility and payment for injuries—is unprepared for AI. Liability rules were designed for a time when humans caused most injuries. But with AI, errors may occur without any direct human input. The liability system needs to adjust accordingly. Bad liability policy won't just stifle AI innovation. It will also harm patients and consumers.
The time to think about liability is now, as AI becomes ubiquitous but remains underregulated. AI-based systems have already contributed to injuries. In 2019 an AI algorithm misidentified a man as a suspect in an aggravated assault, leading to his mistaken arrest. In 2020, during the height of the COVID pandemic, an AI-based mental health chatbot encouraged a simulated suicidal patient to take her own life.
Getting the liability landscape right is essential to unlocking AI's potential. Uncertain rules and the prospect of costly litigation will discourage investment in AI and slow its development and adoption in industries ranging from health care to autonomous vehicles.
Currently, liability inquiries usually start and stop with the person who uses the algorithm. Granted, if someone misuses an AI system or ignores its warnings, that person should be liable. But AI errors are often not the fault of the user. Who can fault an emergency room physician for an AI algorithm that misses papilledema, a swelling of the optic disc at the back of the eye? An AI's failure to detect the condition could delay care and possibly cause a patient to lose their sight. Yet papilledema is challenging to diagnose without an ophthalmologist's examination.
AI is also constantly self-learning, meaning it takes in new information and looks for patterns in it. And many systems are “black boxes”: even their developers cannot always say which variables contribute to a given output. This further complicates the liability question. How much can you blame a physician for an error caused by an AI whose decision cannot be explained? Shifting the blame solely to AI engineers does not solve the problem either. The engineers created the algorithm in question, of course, but could every Tesla Autopilot accident really be prevented by more testing before product launch?
The key is to ensure that all stakeholders, from users to developers to everyone else along the chain, bear enough liability to keep AI safe and effective, though not so much that they give up on AI altogether. To protect people from faulty AI while still promoting innovation, we propose three ways to revamp traditional liability frameworks.
First, insurers must protect policyholders from the costs of being sued over an AI injury by testing and validating new AI algorithms before they are deployed, much as car insurers have tested and compared automobiles for years. An independent safety-testing system could give AI stakeholders a predictable liability regime that adjusts to new technologies and methods.
Second, some AI errors should be litigated in specialized courts with expertise in adjudicating these cases. Such tribunals could specialize in particular technologies or issues, such as the interaction of two AI systems (say, two autonomous vehicles that crash into each other). Specialized courts are not new: in the U.S., a dedicated court has adjudicated vaccine injury claims for decades.
Third, regulatory standards from federal authorities such as the U.S. Food and Drug Administration or the National Highway Traffic Safety Administration could offset excess liability for developers and users. For example, federal regulations and legislation have preempted certain forms of liability for medical devices. Regulators ought to proactively establish standards for AI development processes. In doing so, they could deem some AIs too risky to introduce to the market without testing, retesting or validation. This would allow agencies to remain nimble and prevent AI-related injuries without saddling developers with excess liability.
Industries ranging from finance to cybersecurity are on the cusp of AI revolutions that could benefit billions worldwide. But these benefits shouldn't be undercut by poorly developed algorithms: 21st-century AI demands a 21st-century liability system.