Google’s Driverless Cars Run Into Problem: Cars With Drivers
The New York Times
September 1, 2015
MOUNTAIN VIEW, Calif. — Google, a leader in efforts to create driverless cars, has run into an odd safety conundrum: humans.
Last month, as one of Google’s self-driving cars approached a crosswalk, it did what it was supposed to do when it slowed to allow a pedestrian to cross, prompting its “safety driver” to apply the brakes. The pedestrian was fine, but not so much Google’s car, which was hit from behind by a human-driven sedan.
Google’s fleet of autonomous test cars is programmed to follow the letter of the law. But it can be tough to get around if you are a stickler for the rules. One Google car, in a test in 2009, couldn’t get through a four-way stop because its sensors kept waiting for other (human) drivers to stop completely and let it go. The human drivers kept inching forward, looking for the advantage — paralyzing Google’s robot.
It is not just a Google issue. Researchers in the fledgling field of autonomous vehicles say that one of the biggest challenges facing automated cars is blending them into a world in which humans don’t behave by the book. “The real problem is that the car is too safe,” said Donald Norman, director of the Design Lab at the University of California, San Diego, who studies autonomous vehicles.
“They have to learn to be aggressive in the right amount, and the right amount depends on the culture.”
Traffic wrecks and deaths could well plummet in a world without any drivers, as some researchers predict. But broad use of self-driving cars is still many years away, and testers are still sorting out hypothetical risks — like hackers — and real-world challenges, like what happens when an autonomous car breaks down on the highway.
For now, there is the nearer-term problem of blending robots and humans. Already, cars from several automakers have technology that can warn or even take over for a driver, whether through advanced cruise control or brakes that apply themselves. Uber is working on self-driving car technology, and Google expanded its tests in July to Austin, Tex.
Google cars regularly take quick, evasive maneuvers or exercise caution in ways that are at once the most cautious approach, but also out of step with the other vehicles on the road.
“It’s always going to follow the rules, I mean, almost to a point where human drivers who get in the car are like ‘Why is the car doing that?’” said Tom Supple, a Google safety driver, during a recent test drive on the streets near Google’s Silicon Valley headquarters.
Since 2009, Google cars have been in 16 crashes, mostly fender-benders, and in every single case, the company says, a human was at fault. This includes the rear-end crash on Aug. 20, which Google reported on Tuesday. The Google car slowed for a pedestrian, then the Google employee manually applied the brakes. The car was hit from behind, sending the employee to the emergency room for mild whiplash.
Google’s report on the incident adds another twist: While the safety driver did the right thing by applying the brakes, if the autonomous car had been left alone, it might have braked less hard and traveled closer to the crosswalk, giving the car behind a little more room to stop. Would that have prevented the collision? Google says it’s impossible to say.
There was a single case in which Google says the company was responsible for a crash. It happened in August 2011, when one of its Google cars collided with another moving vehicle. But, remarkably, the Google car was being piloted at the time by an employee. Another human at fault.
Humans and machines, it seems, are an imperfect mix. Take lane departure technology, which uses a beep or steering-wheel vibration to warn a driver if the car drifts into another lane. A 2012 insurance industry study that surprised researchers found that cars with these systems experienced a slightly higher crash rate than cars without them.
Bill Windsor, a safety expert with Nationwide Insurance, said that drivers who grew irritated by the beep might turn the system off. That highlights a clash between the way humans actually behave and how the cars interpret that behavior: the car beeps when a driver moves into another lane but, in reality, the human driver intends to change lanes without having signaled, so the driver, irked by the beep, turns the technology off.
Mr. Windsor recently experienced firsthand one of the challenges as sophisticated car technology clashes with actual human behavior. He was on a road trip in his new Volvo, which comes equipped with “adaptive cruise control.” The technology causes the car to automatically adapt its speed when traffic conditions warrant.
But the technology, like Google’s car, drives by the book. It leaves what is considered the safe distance between itself and the car ahead. This also happens to be enough space for a car in an adjoining lane to squeeze into, and, Mr. Windsor said, they often tried.
Dmitri Dolgov, head of software for Google’s Self-Driving Car Project, said that one thing he had learned from the project was that human drivers needed to be “less idiotic.”
On a recent outing with New York Times journalists, the Google driverless car took two evasive maneuvers that simultaneously displayed how the car errs on the cautious side, but also how jarring that experience can be. In one maneuver, it swerved sharply in a residential neighborhood to avoid a car that was poorly parked, so much so that the Google sensors couldn’t tell if it might pull into traffic.
More jarring for human passengers was a maneuver that the Google car took as it approached a red light in moderate traffic. The laser system mounted on top of the driverless car sensed that a vehicle coming the other direction was approaching the red light at higher-than-safe speeds. The Google car immediately jerked to the right in case it had to avoid a collision. In the end, the oncoming car was just doing what human drivers so often do: not approaching a red light cautiously enough, though the driver did stop well in time.
Courtney Hohne, a spokeswoman for the Google project, said current testing was devoted to “smoothing out” the relationship between the car’s software and humans. For example, at four-way stops, the program lets the car inch forward, as the rest of us might, asserting its turn while looking for signs that it is being allowed to go.
The way humans often deal with these situations is that “they make eye contact. On the fly, they make agreements about who has the right of way,” said John Lee, a professor of industrial and systems engineering and an expert in driver safety and automation at the University of Wisconsin.
“Where are the eyes in an autonomous vehicle?” he added.
But Mr. Norman, from the design lab in San Diego, after years of urging caution on driverless cars, now welcomes their fast adoption because he says other motorists are increasingly distracted by cellphones and other in-car technology.
Witness the experience of Sena Zorlu, a co-founder of a Sunnyvale, Calif., analytics company, who recently spotted one of Google’s self-driving cars at a red light in Mountain View. She could not resist the temptation to grab her phone and take a picture.
“I don’t usually play with my phone while I’m driving. But it was right next to me so I had to seize that chance,” said Ms. Zorlu, who posted the picture to her Instagram feed.