Liability laws around autonomous vehicles still evolving: Singer Kwinter’s Laya Witty

Jurisdiction, developing caselaw, technology advancements all play a role

In August 2023, a series of accidents in San Francisco involved automated and autonomous vehicles. These accidents prompted the California Department of Motor Vehicles to require Cruise, the driverless taxi service, to immediately reduce the size of its fleet on the roads by 50%. While most of the accidents were minor, involving no injuries to people and relatively little property damage, they still had an impact on the city. One accident involved a collision between an automated vehicle and a fire truck on an active call, potentially affecting the truck’s ability to respond to an emergency. In another incident, a pileup of vehicles outside a music festival halted traffic for hours.[1]

For years, the automotive industry has been touting automated vehicles as the transportation of the future. As this technology makes its way onto the streets of our cities, some interesting changes are happening in the way we drive, and the way we think about driving.

What is an Autonomous Vehicle?

The Society of Automotive Engineers, known as the SAE, has established a set of definitions for the levels of automation available in motor vehicles.[2] They range from Level 0, a vehicle with the fewest automated features, through Levels 1 and 2, which include features that assist the driver but still require a person to be in constant control of the vehicle. These automated features include cruise control, automatic emergency braking, blind spot warning, lane departure warning, lane centring and adaptive cruise control. Because these functions only assist the driver, they are considered “driver support features” and not truly “automated driving” features.

Levels 3, 4 and 5 are progressively more fully automated. Level 3 automation requires the driver to take control of the vehicle under certain conditions, so a Level 3 vehicle must have a qualified driver in the driver’s seat, attentive to the surroundings and prepared to take over when required. Level 4 automation is available only under certain conditions but does not require the human driver to take control; these conditions allow the vehicle to be driven only on routes with the appropriate signals and markers that the vehicle is equipped to read. A Level 5 automated vehicle can operate under all conditions and never requires a human driver to take control.

The major distinction between “driver support” vehicles and “automated driving” vehicles is that at the driver support levels, the driver is responsible for monitoring conditions, and the support features engage when the driver calls on them or when preset conditions are met. In automated driving, the vehicle’s automated systems are responsible for monitoring and responding to conditions.
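As an illustration only, the levels described above can be captured in a simple data structure. The short Python sketch below paraphrases this article’s descriptions rather than the official SAE definitions, and every name in it is invented for this example.

    from dataclasses import dataclass

    # Illustrative sketch of the SAE automation levels as described above.
    # The wording paraphrases this article, not the official SAE text.

    @dataclass(frozen=True)
    class AutomationLevel:
        level: int
        description: str
        needs_attentive_driver: bool  # must a person be ready to take over?

    SAE_LEVELS = [
        AutomationLevel(0, "Fewest automated features; the driver does everything", True),
        AutomationLevel(1, "Driver support: a single assist feature", True),
        AutomationLevel(2, "Driver support: combined assist features", True),
        AutomationLevel(3, "Automated driving; driver must take over on request", True),
        AutomationLevel(4, "Automated driving under certain conditions only", False),
        AutomationLevel(5, "Automated driving under all conditions", False),
    ]

    def needs_driver(level: int) -> bool:
        """Return True if a qualified, attentive human driver is required."""
        return SAE_LEVELS[level].needs_attentive_driver

On this sketch, needs_driver(2) and needs_driver(3) both return True, mirroring the point above: even the first “automated driving” level still depends on a human who is ready to intervene.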

How do Automated Vehicles Work?

The automated systems that control self-driving cars comprise several subsystems, all of which must communicate and work together constantly and seamlessly.

On board the vehicle, several systems are used to create a constantly updating map of the vehicle’s surroundings. Each of these systems has multiple sensors located around the vehicle so it can gather data from as many directions as possible. These are the “eyes” and “ears” of the vehicle:

  • Radar sensors monitor the position and speed of nearby vehicles.
  • Video cameras detect traffic lights, read road signs, track other vehicles, and look for pedestrians.
  • Lidar (light detection and ranging) sensors bounce pulses of light off the car’s surroundings to measure distances, detect road edges, and identify lane markings.
  • Ultrasonic sensors in the wheels detect curbs and other vehicles when parking.

The input from all of these sensing systems, and from the navigation system, is processed and integrated by the vehicle’s on-board software, which plots a path and makes the decisions. Hard-coded rules, obstacle avoidance algorithms, predictive modeling, and object recognition help the software follow traffic rules and navigate obstacles.[3] This is one part of the vehicle’s “brain”.
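To make this concrete, here is a minimal, hypothetical Python sketch of how readings from the sensors listed above might be merged into a single, constantly updated picture of the surroundings. Every class and field name is invented for illustration; no real driving stack is being quoted.

    from dataclasses import dataclass, field

    # Hypothetical sketch of sensor fusion: merge readings from the radar,
    # camera, lidar and ultrasonic systems described above into one
    # constantly updated model of the vehicle's surroundings.

    @dataclass
    class SensorReading:
        source: str        # "radar", "camera", "lidar" or "ultrasonic"
        object_id: str     # e.g. "vehicle-12", "pedestrian-3", "curb-1"
        distance_m: float  # distance to the detected object, in metres
        speed_mps: float   # relative speed, metres per second (0 if static)

    @dataclass
    class WorldModel:
        objects: dict = field(default_factory=dict)

        def update(self, readings: list[SensorReading]) -> None:
            """Merge the latest sensor readings into the map of surroundings."""
            for reading in readings:
                self.objects[reading.object_id] = reading

        def nearest_obstacle(self) -> SensorReading | None:
            """Return the closest detected object, if any."""
            return min(self.objects.values(), key=lambda r: r.distance_m, default=None)

A real system would also weigh the sensors against one another (a camera can read a sign that radar cannot see), but the basic pattern of many sensors feeding one shared model is the same.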

The other part of the vehicle’s brain is the navigation system. It may be located partly on board the vehicle and partly in a computer network, and it may be controlled by a proprietary network or interface with publicly available networks. The navigation system creates a map of the vehicle’s intended route. The systems that control the speed, braking and steering on the vehicle constantly feed data into, and receive instructions from, the navigation system.[4]

The computer then sends instructions to the car’s actuators: the software signals the vehicle’s hardware to carry out its instructions. The actuators are the components that control the acceleration, braking, and steering. In Level 3 vehicles, these operate with a person at the controls in case the person needs to take over. However, some Level 4 and all Level 5 vehicles are capable of operating fully automated, with no need for a human driver.
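Again as an illustration only, one pass of that perceive-decide-actuate loop might look like the Python sketch below, where sensed conditions become throttle, brake and steering commands. The names, thresholds and numbers are all invented placeholders.

    # Hypothetical sketch of one pass of the loop described above.
    # Thresholds and values are invented for this example.

    def control_cycle(nearest_obstacle_m: float | None,
                      current_speed_mps: float,
                      target_speed_mps: float) -> dict:
        """Turn sensed conditions into commands for the actuators."""
        commands = {"throttle": 0.0, "brake": 0.0, "steering_deg": 0.0}
        if nearest_obstacle_m is not None and nearest_obstacle_m < 10.0:
            commands["brake"] = 1.0     # brake hard for a close obstacle
        elif current_speed_mps < target_speed_mps:
            commands["throttle"] = 0.3  # accelerate gently toward the target
        return commands

    # Example: an obstacle 6 m ahead at 12 m/s -> full braking.
    print(control_cycle(nearest_obstacle_m=6.0,
                        current_speed_mps=12.0,
                        target_speed_mps=14.0))

In a real vehicle this loop runs many times per second, and the takeover requirement for Level 3 vehicles exists precisely because the software may meet a situation its rules do not cover.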

Who drives better?

Human-driven cars account for thousands of crashes every year. In Canada alone, almost 1,800 people died in road accidents in 2021.[5] Human drivers are prone to fatigue, distraction and the influence of substances. Traveling in a vehicle with a human driver is clearly not 100% safe.

However, the experience of driving is very different for computers. The earliest versions of driving computers had trouble identifying and tracking motorcycles. Another unexpectedly difficult challenge for computers is distinguishing a parked vehicle from one that is stopped for traffic. Similarly, computers have difficulty predicting when another vehicle is likely to attempt a lane change, until the vehicle’s turn signal alerts them to that intention.

Recently, a driverless vehicle entered a construction zone by mistake and became mired in wet cement. Human drivers have certainly caused accidents of this type. In the accident mentioned at the beginning of this article, a driverless vehicle was hit by a fire truck which, as per its emergency protocols, was driving the wrong way for the lane it was in. The automated vehicle identified the fire truck but was unable to avoid it. It is not clear if a human driver would have been able to avoid that collision either.

If an Automated Vehicle is involved in a Motor Vehicle Accident, who is liable?

Liability in self-driving car accidents can be complex and may involve various parties, including the car manufacturer, the software developer, the owner/operator of the vehicle, and potentially even the passengers.

  • Traditional Liability: In accidents involving self-driving cars, traditional liability principles would still apply. If the accident occurs due to the negligence of a human driver, they may be held liable for the damages caused, just as in accidents involving conventional vehicles. It is entirely possible that the automated vehicle would not be at fault for the accident, assuming that it was obeying all of the appropriate rules and procedures for the situation.
     
  • Manufacturer Liability: As self-driving technology becomes more advanced, the liability landscape is shifting. If a self-driving car's accident is caused by a defect in the vehicle's design or manufacturing, the manufacturer may be held responsible for the damages. As described above, these vehicles are controlled by a network of sensors, software and actuators. If the sensors are not providing enough information, or the software is not receiving it, the manufacturer of the vehicle may be found liable.
     
  • Software Developer Liability: The developers of the autonomous driving software may also bear some liability. If the accident occurred due to a flaw or error in the software programming, the developer could be held responsible for the resulting damages. The safety software that these vehicles use has a list of rules and priorities that must be taken into account, but there is always the possibility that the decisions made by the software will cause an injury, and that liability could flow back to the developer.
     
  • Owner/Operator Liability: Liability may also extend to the owner or operator of the self-driving car. If they failed to properly maintain the vehicle, failed to override the autonomous system when necessary, or engaged in other negligent behavior that contributed to the accident, they may be held liable. Level 3 automated vehicles in particular will require human intervention in some circumstances, and a failure to intervene, or to intervene in time, could give rise to liability.

Some Early Lawsuits

In 2018, there was a fatal accident in Tempe, Arizona involving an Uber self-driving vehicle. The Uber self-driving program required that there always be a driver monitoring the car’s performance and ready to take over if something went wrong. The driver in this case was watching a video on her phone and not paying sufficient attention to the road and vehicle. The vehicle struck and killed a pedestrian. The driver pleaded guilty to endangerment and was sentenced to probation. Uber settled the civil case against it with the victim’s family.[6]

In 2019, a Tesla being operated in self-driving mode was involved in a fatal accident. The Tesla encountered a truck that was slowing down because of the traffic ahead of it, and struck the rear of the truck. The front seat passenger, a 15-year-old boy, was ejected and killed. The Tesla’s onboard records showed that it had not slowed down appropriately before the accident occurred.

Another 2019 case involved a self-driving taxi, which was carrying passengers at the time, and also had a driver who was supposed to be supervising the vehicle. It is alleged that the taxi went through an intersection on a red light. In doing so, it cut off the path of a person driving a motorized scooter and caused an accident. This case is most noteworthy because of the interplay of liability between the owner of the vehicle, which was a taxi fleet, and the employer of the driver, which was an agency that hired drivers for the taxis.

A 2020 case involved injuries to the driver of a Tesla vehicle. While the vehicle was being operated in automated driving mode, it allegedly hit a curb and caused injuries to its driver. At trial, the jury declined to award damages to the driver, because the driver was found to have acted against Tesla’s warnings not to use the automated mode on city streets. At the relevant time, Tesla recommended this mode for highway driving only.[7]

A report from NBC News on March 29, 2023, indicates that the self-driving vehicles being tested on the roads of the San Francisco area are more frequently the victims of accidents than the cause of them. It appears that when an automated vehicle has no driver on board, human drivers involved in a crash with it often do not comply with their obligations to stop and exchange information with the other vehicle’s owner. They may not know how to contact the owner of the driverless vehicle, or may not think the accident is reportable when no other person is injured.[8]

Automated Vehicles in Ontario

In 2016, Ontario began a 10-year program allowing the testing of automated vehicles on its roads. Strict safety conditions apply to this testing, including a requirement that the vehicle have a driver on board.[9]

In 2019, this program was updated to allow for driverless automated vehicles, meaning that the vehicles do not need to have a driver aboard. These vehicles must comply with many other safety conditions, among them a requirement that the vehicle carry a prominently visible sign warning that it is a “self-driving vehicle”.[10] All such vehicles must carry full liability insurance, with a minimum of $5 million of coverage, or $8 million if the vehicle has a seating capacity of 8 or more passengers.
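That coverage rule is simple enough to state as a short calculation. The Python sketch below encodes only the two figures given above; the function name is invented for illustration.

    # Minimum liability coverage for driverless automated vehicles in
    # Ontario's pilot program, as described above: $5 million, rising to
    # $8 million for a seating capacity of 8 or more passengers.

    def minimum_coverage_cad(seating_capacity: int) -> int:
        """Return the minimum liability insurance required, in dollars."""
        return 8_000_000 if seating_capacity >= 8 else 5_000_000

    print(minimum_coverage_cad(4))   # 5000000
    print(minimum_coverage_cad(10))  # 8000000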

How is the insurance industry addressing automated vehicles?

In 2018, the Insurance Bureau of Canada (IBC) published a report on the future of insurance and automated vehicles. In the report, the IBC makes three recommendations to prepare Canadian drivers for the introduction of self-driving vehicles:

  1. Establish a single insurance policy that covers both driver negligence and the automated technology to facilitate liability claims.
     
  2. Establish a legislated data-sharing agreement with vehicle manufacturers and vehicle owners and/or insurers to determine the cause of a collision.
     
  3. Update the federal vehicle safety standards with technology and cyber security standards.[11]

The first point is probably the most stereotypically Canadian. Rather than have a lawsuit that pits the manufacturer’s, the owner’s, and possibly also the driver’s insurance policies against each other, the industry would develop a framework for managing the claims, with a hybrid policy bound by statute to respond to claims involving collisions with, or damage to, an automated vehicle.

As we get closer to the time when fully automated, self-driving vehicles will be introduced to the Canadian market, the conversations surrounding safety, insurance and the regulation of these vehicles will surely intensify.

Conclusion

The liability laws regarding self-driving vehicles are very much still developing. Insurance regulations vary by jurisdiction and may change to follow developing caselaw and advances in the technology. It is therefore essential for those involved in self-driving car accidents to consult with legal professionals who specialize in this area to understand their rights and responsibilities.


[11] https://www.otip.com/why-otip/news/Self-driving-vehicles-and-the-future-of-car-insura

***

Laya Witty has devoted her entire career to representing injured people and their claims. While a student at Osgoode Hall Law School, she provided research and litigation support to a personal injury boutique firm. She then completed her articles at a prominent Toronto firm and progressed to leading the litigation department in a larger personal injury firm. Laya’s practice is focused on litigation against insurance companies, both in Tort and Accident Benefits claims.

Over the course of more than a decade, Laya has represented clients at many proceedings in the Ontario Superior Court of Justice, the Financial Services Commission of Ontario (now the Financial Services Regulatory Authority of Ontario), and the Licence Appeal Tribunal. She has successfully settled several large cases.

Laya brings her passion for advocating for the individual to her work at Singer Kwinter. Whether she is representing a client against their own insurance company or against another party, she is always focused on the experience of the person, their story, their losses, and their needs. She is committed to helping her clients and achieving the best possible results for them.

Laya was called to the Bar in 2012. She is an active volunteer, respected speaker, and teacher in her community.
