
Driver safety systems can avoid — and cause — crashes, study shows

When tech helps drivers handle routine tasks, motorists tend to become less vigilant — and may fail to respond in an emergency.

Self-driving technology can lead to catastrophic results, especially if motorists aren’t prepared to instantly retake manual control, according to a new study by the Insurance Institute for Highway Safety.

During a series of on-road and track tests of vehicles from four manufacturers — BMW, Mercedes-Benz, Tesla, and Volvo — IIHS researchers found that the vehicles at times failed to stop because the technology didn’t recognize a potential obstacle. In other situations, the vehicles would actually have steered into a crash had the driver not intervened.

The insurance trade group previously reported that safety systems like blind spot detection and lane keeping assist appear to be helping prevent some crashes and injuries, especially at low speeds and in heavy traffic. But the new study, titled “Reality Check,” raises concerns about the wave of even more advanced, semi-autonomous technology now being added to a growing number of new vehicles.

“If the systems seem too capable, drivers may not give them the attention required to use them safely,” said David Zuby, IIHS chief research officer.

When technology helps drivers handle routine tasks, like automatically adapting to the flow of traffic and keeping vehicles in their lanes, motorists tend to become less vigilant. And drivers may fail to respond when there’s an emergency.

During the on-road tests, IIHS engineers found that on a number of occasions the BMW, Mercedes and Volvo models failed to respond to stopped vehicles, despite having been rated “Superior” in previous tests run by the trade group. Both Tesla’s Model S and the newer Model 3 hit a stationary balloon used as a target under some circumstances.

The study is borne out by reality: Last January, a Model S sedan slammed into the back of a firetruck stopped on a California highway, despite being operated in Autopilot mode. The system “proved no better at avoiding the same mistakes human drivers might make,” said David Aylor, IIHS director of active safety testing.

Image: Investigators at the scene of a fatal accident involving a self-driving Uber car on the street in Tempe, Arizona. (ABC-15.com via AP)

The authors of the new study also raise a caution flag when it comes to expanded testing of fully autonomous vehicles on public roads. A summary of the report points to a fatal crash last March in which a prototype operated by ride-sharing service Uber hit and killed a pedestrian in the Phoenix suburb of Tempe.

“The Uber crash in Arizona that took the life of a pedestrian in March shows the hazards of beta testing self-driving vehicles on public roads,” the report emphasized.

Automakers come in for some of the blame for the problems with the new driver assistance systems. The IIHS pointed to names like Tesla’s Autopilot and Volvo’s Pilot Assist, which, the authors contend, encourage owners to believe they can fully relinquish driving duties. Owners’ manuals usually explain the limits of these technologies, but few motorists take the time to read them. That, said the IIHS, can be a recipe for disaster when the driver assistance systems fail.