Report on self-driving car accidents is in violation of good sense
Writing a good news story is, to a certain extent, like driving in busy traffic. To do a good job of it you have to pay attention, follow the rules of the road and display some skill. It's easy to mess up (we humans are fallible, after all), and when we do, the results range from a fender-bender-level embarrassment to a fatal-collision-class career-ender.
The latest and widely distributed AP story by Justin Pritchard regarding four accidents since September involving self-driving cars sits fairly close to the fender-bender portion of that spectrum. There are no injuries, but the driver … er … author should probably get a ticket for reckless writing.
The accidents were all in California, which requires self-driving car operators to report every single incident, according to Pritchard's story. Three of the cars were Google's and one was a Delphi Automotive vehicle. With a reported 48 self-driving cars tooling around the state, this sounds like an alarming statistic.
But before you start marching on Google's Mountain View, California, offices, pitchforks in hand, demanding they dismantle all their autonomous vehicles: You should know about some of the other, relevant information buried in this report.
First, there's this:
The person familiar with the accident reports said the cars were in self-driving mode in two of the four accidents, all of which involved speeds of less than 10 mph.
In other words, only two of the four reported accidents actually involved autonomous driving, and in both cases, the cars were moving more slowly than the average person can sprint.
We know that those two cars were Google self-driving vehicles because Google has three of the cars involved in these traffic incidents and because of this other little tidbit sitting in the middle of the story:
“In the October accident involving Delphi, the front of its 2014 Audi SQ5 was moderately damaged when, as it waited to make a left turn, another car broadsided it, according to an accident report the company shared with AP. The car was not in self-driving mode, Delphi spokeswoman Kristen Kinley said.”
When I spoke to Kinley via email on Monday, she elaborated a bit. “Our car was stopped at a light when another car crossed the median broadsiding our car. Our car wasn’t in automated mode. Just sitting at a light. The other car was found at fault,” she wrote.
In other words, this scandalous story is based on two autonomous cars bumping into two other cars with humans behind the wheel. The collisions were at 10 mph, maximum, and Google contends that its robot cars were not at fault.
For Pritchard, however, this is enough. He brings in John M. Simpson, whose “Consumer Watchdog” web site promises to “investigate,” “advocate,” “mobilize” and “litigate.”
Simpson may not be a self-driving technology expert, but he's certainly no fan of Google. He gets to point out how Google would eventually like to see us in self-driving cars that lack pedals and steering wheels. This is true. Last year at Re/code's Code Conference, Sergey Brin unveiled a self-driving car with no pedals and no steering wheel that could drive at 25 mph. For safety, it features a big stop button.
Brin wants to have what could best be described as autonomous golf carts (25 mph, people) on the road in a couple of years, but Simpson clearly thinks they're driving around California now.
That would mean a person has no power to intervene if a car lost control, making it "even more important that the details of any accidents be made public, so people know what the heck's going on."
As noted above, even the future steering wheel- and pedal-free Google self-driving car would have a fail-safe, but why cloud your argument with facts?
Perhaps the numbers will offer some support for Pritchard's report. Not so much.
Pritchard's story says that the National Highway Traffic Safety Administration (NHTSA) reports 0.3 "property-damage-only crashes" per 100,000 miles traveled. In the 2012 report I found, the number was 281 per 100 million vehicle miles traveled. There is another statistic the report doesn't mention: 3,049 property-damage-only crashes per 100,000 registered vehicles, which would put the accident rate for normal (non-self-driving) cars at 3%, or 10x what was cited.
Pritchard uses his somewhat questionable stat to point out that "Google's three in about 140,000 miles may seem high." Then he quickly, and probably smartly, undermines his own argument by pointing out that millions of minor fender benders likely go unreported each year.
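For the curious, here's a quick back-of-the-envelope check of that comparison, a rough sketch in Python using only the numbers quoted above (and which, as Pritchard himself concedes, all those unreported human fender benders would upend):

```python
# Back-of-the-envelope check of the crash-rate comparison above.

# NHTSA (2012): 281 property-damage-only crashes per 100 million vehicle miles
nhtsa_per_100m_miles = 281

# Google: 3 reported incidents in about 140,000 autonomous miles
google_per_100m_miles = 3 / 140_000 * 100_000_000  # roughly 2,143

print(f"Google: {google_per_100m_miles:,.0f} vs. NHTSA: {nhtsa_per_100m_miles} "
      f"per 100M miles ({google_per_100m_miles / nhtsa_per_100m_miles:.1f}x)")

# The per-vehicle figure the AP story leaves out:
# 3,049 property-damage-only crashes per 100,000 registered vehicles
per_vehicle_rate = 3_049 / 100_000
print(f"Conventional cars: {per_vehicle_rate:.1%} in a reported "
      "property-damage-only crash per year")
```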
While Google wouldn't talk specifically about the three incidents with its Lexus self-driving cars, a spokesperson did tell Mashable: "Safety is our highest priority. Since the start of our program six years ago, we've driven almost a million miles autonomously, on both freeways and city streets, and the self-driving car hasn't caused a single accident."
Causation is the key word here, because no one, not even Google, is claiming these self-driving cars have never been in an accident. Google maintains that they were all minor and usually the fault of the other driver.
Pritchard’s report makes this clear, but he still lets Simpson have the last word, with another head-scratcher about how “more might be expected of these test vehicles … than we might expect of a 17-year-old driver in a 10-year-old car.”
Yes, I would expect, and I'm sure we'd find, that the self-driving car is a much better driver than the 17-year-old. The robot car won't speed or disregard the rules of the road. It won't text or drink and drive. Its programming and sensors will force it to be the best possible driver and likely do an even better job of handling the unpredictable human drivers who are also on the road.
This is not to say that safety is not a concern when it comes to self-driving cars. I've driven in one, and even though there was a guy sitting in the driver's seat with his hands poised over the steering wheel and feet an inch away from the pedals, I was nervous, because it's unfamiliar territory, not because it was dangerous. The car, an Audi SUV retrofitted for autonomous driving by, yes, Delphi Automotive, drove flawlessly. It switched lanes, followed traffic signals and made left turns, all better than my 17-year-old daughter who is learning to drive in a 10-year-old car.
As difficult as it is to believe, more self-driving cars could make the roads safer. A 2013 Eno Center for Transportation study, Preparing a Nation for Autonomous Vehicles, put the total number of crashes per year at 5.5 million, with 93% of them caused by humans. 31% of fatal crashes involved alcohol, 30% involved speeding and 21% were caused by a distracted driver. "Self-driven vehicles would not fall prey to human failings, suggesting the potential for at least a 40 percent fatal crash-rate reduction," noted the report.
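To put those percentages in perspective, here's another rough sketch using only the report's figures as quoted; the assumption that the categories overlap (a single crash can involve alcohol and speeding) is mine, so the sum is an upper bound, not a tally:

```python
# The Eno report's figures, as quoted above, at full scale.
total_crashes = 5_500_000                 # crashes per year
human_caused = 0.93 * total_crashes
print(f"Human-caused crashes per year: about {human_caused:,.0f}")

# Shares of fatal crashes tied to human failings (overlapping categories):
alcohol, speeding, distraction = 0.31, 0.30, 0.21
# Removing alcohol-involved fatal crashes alone (31%), plus even a modest
# slice of the speeding and distraction cases, clears the report's
# "at least 40 percent" reduction figure.
print(f"Alcohol alone: {alcohol:.0%}; naive sum of all three: "
      f"{alcohol + speeding + distraction:.0%}")
```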
Of course, that same report notes, "While many driving situations are relatively easy for an autonomous vehicle to handle, designing a system that can perform safely in almost every situation is challenging." Which is why Google, Delphi, Daimler and others continue to test and refine these cars, and why there are still no "driverless" cars on the road.
Considering all the ways humans can fail at driving, it's hard to argue that taking them out of the equation will not result in safer roads. But if we keep producing stories like this, ones that weave across the road of truth and information as if they were drunk, that future may get pushed further and further down the road.