At least 90%: that is the share of road accidents generally attributed to human error. In other words, the vast majority of accidents stem from driving mistakes that a self-driving vehicle could avoid. No fatigue, no inattention, no distraction, no hesitation… The artificial intelligence cross-checks information from the various onboard sensors (cameras, radar, lidar) with external data received from infrastructure, other vehicles, and even pedestrians and their smartphones. This immense stream of data is analyzed continuously by an AI-powered processor that anticipates driving situations and the behavior of other road users, and avoids risks by acting on the brakes, throttle, and steering. Ideal on paper, but so complex to put into practice!
Autonomous car accidents: should we be afraid?
In the meantime, we are in a pivotal period of technological development. Communication infrastructure is far from fully in place, and in both Europe and the United States, only supervised road tests are permitted. It is extremely difficult for a human to let a car drive itself for hours while remaining ready to intervene in a fraction of a second when the moment comes. This is precisely what failed in two separate cases: two fatal accidents that occurred just days apart in March 2018 in the United States.
At the federal level, Congress has so far been unsuccessful in its effort to enact uniform safety legislation for the testing and deployment of self-driving cars. As a result, several states have proceeded to pass their own safety regulations, and these impose varying degrees of responsibility (and liability) on manufacturers and owners of self-driving cars.
In practical terms, this lack of a national safety standard means that if you’re injured in an accident with a self-driving car, your legal recourse against the manufacturer and owner of the vehicle may vary depending on the state in which the accident occurred.
The Human Element
In a typical car accident with a human driver at the wheel, the driver commits some act of negligence, such as running a red light and causing a collision with another car. In this situation, the negligent driver is primarily liable for the injuries caused by the crash.
In some states, the car’s owner may be liable as well, as long as the vehicle was being driven with the owner’s knowledge and consent. And in situations where the collision is caused by some manufacturing defect in the car itself, anyone injured may be able to sue the manufacturer on a “product liability” theory of fault. So, there could be three potential avenues of recourse when you are injured in a conventional car accident: the offending driver, the car’s owner, and the car’s manufacturer (putting aside the potential financial responsibility of the respective car insurance carriers).
With self-driving cars that have no “driver” to sue, it would appear at first blush that your recourse is now limited to a suit against the car’s owner, operator, or manufacturer. But this is a rapidly evolving area. In many situations, until the technology advances to the point where self-driving cars are fully autonomous, a human is still required to sit in the driver’s seat so that he or she can take over the controls as conditions require; in the alternative, a human remote operator is required to monitor the vehicle’s movement and take over as necessary. Which brings us to our next topic…
Testing Company Liability for Self-Driving Vehicle Accidents
As we saw in March 2018, when a self-driving Uber car struck and killed a pedestrian in Arizona, self-driving technology has not yet been perfected to the point where the car can sense, react to, and avoid a sudden and unexpected danger.
The upshot of this reality is that as long as self-driving cars require human assistance, those humans (whether sitting in the driver’s seat or monitoring the vehicle remotely) will remain potentially liable if their negligence contributes to a car accident. And if these human drivers/remote operators are employees of companies like Uber, Google (Waymo), or another company engaged in testing self-driving vehicles, the companies will be on the legal hook under established principles of employer liability for a car accident.
With regard to a manufacturer’s liability, some states have passed laws that deem the automated driving system to be the “driver” or “operator” of an autonomous vehicle for purposes of determining conformance to applicable rules of the road. These states require manufacturers to assume liability for each incident in which the automated driving system is at fault. Under this theory, if the automated driving system’s “negligence” causes an accident, the manufacturer assumes that negligence, and the legal liability that comes with it.
While this may give some comfort to persons injured by self-driving cars given the “deep pockets” of vehicle manufacturers, it’s still necessary to prove fault. And precisely what this means — whether showing some kind of flaw in the design or development of the automated driving system, or retrieving the vehicle’s event/data recorder to prove that the vehicle ran a red light — will have to be sorted out as cases make their way through the nation’s courts, and the legal possibilities evolve into legal principles.