One of the largest accident studies yet suggests self-driving cars may be safer than human drivers in routine circumstances – but it also shows the technology struggles more than humans during low-light conditions and when performing turns.
The findings come at a time when autonomous vehicles are already driving in several US cities. The GM-owned company Cruise is trying to restart driverless car testing after a pedestrian-dragging incident in October 2023 led California to suspend its operating permit. Meanwhile, Google spin-off Waymo has been gradually expanding robotaxi operations in Austin, Los Angeles, Phoenix and San Francisco.
“It is important to improve the safety of autonomous vehicles under dawn and dusk or turning conditions,” says Shengxuan Ding at the University of Central Florida. “Key strategies include enhancing weather and lighting sensors and effectively integrating sensor data.”
Ding and his colleague Mohamed Abdel-Aty, also at the University of Central Florida, pulled together data on 2100 accidents from California and the National Highway Traffic Safety Administration (NHTSA) involving vehicles equipped with some level of automated self-driving or driver assistance technologies. They also gathered data on more than 35,000 accidents involving unassisted human drivers.
Next, they used a statistical matching method to find pairs of accidents that occurred under similar circumstances, with shared factors such as road conditions, weather, time of day and whether the incident took place at an intersection or on a straight road. They focused this matching analysis on 548 self-driving car crashes reported in California – excluding less automated vehicles that only have driver assistance systems.
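The study's matching pipeline isn't published in the article, but the basic idea – exact matching on shared categorical factors – can be sketched in a few lines of Python. Everything below (field names, records) is invented for illustration:

```python
from collections import defaultdict

# Invented crash records carrying the covariates the study matched on:
# road condition, weather, lighting and whether an intersection was involved.
av_crashes = [
    {"road": "dry", "weather": "clear", "light": "dusk", "intersection": True},
    {"road": "wet", "weather": "rain", "light": "daylight", "intersection": False},
]
human_crashes = [
    {"road": "dry", "weather": "clear", "light": "dusk", "intersection": True},
    {"road": "dry", "weather": "clear", "light": "daylight", "intersection": True},
]

def match_key(crash):
    """Collapse a crash record into a hashable profile of shared factors."""
    return (crash["road"], crash["weather"], crash["light"], crash["intersection"])

# Index the human-driver crashes by covariate profile...
pool = defaultdict(list)
for crash in human_crashes:
    pool[match_key(crash)].append(crash)

# ...then pair each AV crash with an unused human crash from the same stratum.
pairs = []
for crash in av_crashes:
    candidates = pool[match_key(crash)]
    if candidates:
        pairs.append((crash, candidates.pop()))

print(f"matched {len(pairs)} of {len(av_crashes)} AV crashes")
```

Real matched case-control analyses typically allow approximate matches or weight by propensity scores; exact matching is just the simplest version of the idea.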
The overall results suggest autonomous vehicles “generally demonstrate better safety in most scenarios”, says Abdel-Aty. But the analysis also found that self-driving cars had a crash risk five times that of human drivers when operating at dawn and dusk, along with almost double the human accident rate when making turns.
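Those headline figures are ratios of matched crash rates. As a toy illustration – the numbers below are invented to mirror the reported ratios, not taken from the study – the calculation looks like this:

```python
# Invented, exposure-normalised crash rates (crashes per million miles),
# chosen only to mirror the ratios reported in the study.
av_rates = {"daylight": 0.8, "dawn_dusk": 5.0, "turning": 1.9}
human_rates = {"daylight": 1.0, "dawn_dusk": 1.0, "turning": 1.0}

for condition, av_rate in av_rates.items():
    ratio = av_rate / human_rates[condition]
    verdict = "worse than" if ratio > 1 else "better than"
    print(f"{condition}: AV crash risk is {ratio:.1f}x ({verdict} human drivers)")
```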
One research roadblock is that the “autonomous vehicle accident database is still small and limited”, says Abdel-Aty. He and Ding described the need for “enhanced autonomous vehicle accident reporting” – a major caveat echoed by independent experts.
“I think it is an interesting but extremely preliminary step towards measuring autonomous vehicle safety,” says Missy Cummings at George Mason University in Virginia. She described the numbers of self-driving car crashes as being “so low that no sweeping conclusions can be made” about the safety performance of such technologies – and warned of biased reporting from self-driving car companies. During her time at NHTSA, says Cummings, video footage of incidents did not always match companies’ narratives, which tended to paint human drivers as the ones at fault. “When I saw actual videos, the story was very different,” she says.
Some crashes never get reported to the police if they only involve minor fender benders, so any comparison of autonomous vehicle crashes with human driver crashes needs to account for that factor, says Eric Teoh at the Insurance Institute for Highway Safety in Virginia. His 2017 study of Google’s early tests of self-driving cars found that just three out of 10 crashes made it into police reports.
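Teoh's point can be made concrete with a back-of-the-envelope correction. Assuming, purely for illustration, that a three-in-ten reporting rate applied across the comparison group's police-reported crashes:

```python
# Back-of-the-envelope sketch of why under-reporting matters.
# Assumption (illustrative only): the ~3-in-10 reporting rate Teoh found
# for minor crashes applies across the whole human-driver crash count.
police_reported_human_crashes = 35_000
assumed_reporting_rate = 0.3

# If only 30 per cent of crashes reach police reports, the true count is higher.
estimated_true_crashes = police_reported_human_crashes / assumed_reporting_rate
print(f"estimated true human crash count: {estimated_true_crashes:,.0f}")
```

Because self-driving car databases capture many minor incidents that would never generate a police report, comparisons built on raw report counts tend to flatter the human baseline.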
“Both California and NHTSA do not require comprehensive data reporting for autonomous vehicle testing and deployment,” says Junfeng Zhao at Arizona State University. “Autonomous vehicles – particularly robotaxis – often operate in particular areas and environments, making it difficult to generalise findings.”