
Robot Cars: Is It Time to Think Twice About Driverless Vehicles?

Virtually every major automaker has launched a self-driving vehicle program, as have start-ups including Tesla, along with high-tech giant Google.

It looks like it could have rolled off the set of the latest sci-fi flick, with its flexible body panels and glowing green orb, but the newly unveiled Vision Next 100 is BMW’s take on what the autonomous vehicle of the not-too-distant future may look like.

BMW isn’t alone. Virtually every major automaker has launched a self-driving vehicle program, as have start-ups including Tesla, along with high-tech giant Google. In fact, the Silicon Valley firm has been hiring hundreds of new workers, many with manufacturing experience, leading industry observers to speculate that it may soon enter the automotive market.

Google is already considered by most experts to be the leader in autonomous vehicle technology. But an incident that occurred last month underscores the challenges the industry faces, and it could set back efforts to bring self-driving technology to market by the end of this decade.

Google autonomous vehicle prototypes have been involved in more than a dozen crashes since they began testing on public roads in 2014. Until last month, those accidents had all been blamed on the humans driving the other vehicles. But, for the first time, a collision was attributed directly to one of the self-driving prototypes.

Read More: Google is Hiring More People for Its Robot Cars (Hint: Not Drivers)

A number of the earlier crashes were blamed on drivers trying to do things like race through a yellow light, while the Google prototypes strictly obeyed the law, hitting the brakes the moment a light turned yellow.

The latest crash occurred when a stopped autonomous prototype attempted to nudge into traffic, striking a public transit bus moving at about 15 mph on a road near Google headquarters in Mountain View, California. “We clearly bear some responsibility,” the tech firm said in a blog post.

In response to questions about whether the crash will have an impact on its research, a Google spokesperson on Tuesday pointed to previous statements from the company on its program.

The incident comes at an awkward time for Google and other proponents of autonomous vehicle technology. California regulators recently released draft rules requiring autonomous vehicles to have a human back-up driver ready and able to take control in the event of a problem. Google had been hoping to soon begin testing prototypes with no traditional driver controls, such as pedals or a steering wheel.

Autonomous and traditional vehicles will have to share roads for decades to come, and that will require programming the technology to better understand how humans react in specific situations. For one thing, it might mean dashing through some yellow lights, rather than always coming to a screeching stop.

But that isn’t easy to accomplish. The Google car involved in the latest crash — a modified Lexus — was, ironically, programmed to think more like human drivers. That’s more difficult than it might at first seem. And part of the problem is that robot cars will have trouble communicating not only with human drivers but also with the pedestrians and bicyclists with whom they share the road.

“It’s vital for us to develop advanced skills that respect not just the letter of the traffic code but the spirit of the road,” Google noted in its monthly report on the autonomous vehicle program.

The BMW Vision Next 100 has one possible approach built in: a glowing orb it’s dubbed the “Companion.” It will change colors to simulate the nods and waves human drivers rely on. When it glows green, for example, a pedestrian will know it’s safe to enter a crosswalk. The concept vehicle’s headlights and taillights also are designed to signal whether the Vision Next 100 is operating in autonomous mode or being driven manually.

Read More: Ford's Fleet of Robot Test Cars Swells to Thirty Vehicles

Proponents of autonomous driving, including Mark Rosekind, the head of the National Highway Traffic Safety Administration, believe that the technology could eventually eliminate the crashes responsible for killing more than 30,000 Americans a year — and an estimated 1.2 million people worldwide. Rosekind is quick to point out that over 90 percent of all fatal crashes are the result of human error.

Not everyone is quite so confident. A study released by AAA this month found that three-quarters of American drivers are reluctant to ride in an autonomous vehicle.

“American drivers may be hesitant to give up full control,” cautioned John Nielsen, AAA’s managing director of Automotive Engineering and Repair.

The first fully autonomous vehicles aren’t expected to be on the road until the end of the decade, at the earliest. But if crashes like the recent Google incident are repeated, they could lead both regulators and the public to think about slowing down.