“When a dog bites a man, that is not news, because it happens so often. But if a man bites a dog, that is news.” – attributed to at least three journalists and publishers including UK publisher Alfred Harmsworth, John B. Bogart (editor, New York Sun), and Charles Anderson Dana.
National Highway Traffic Safety Administration (NHTSA) data shows that 37,461 people were killed in 34,436 motor vehicle crashes in the US in 2016. That’s an average of roughly 102 deaths attributable to car crashes every 24 hours, or about four per hour, every hour of every day. Despite the carnage, most of us get into our cars every day, turn the key (or push the button) to start the car, and head out into the mean streets without a second thought about our possible impending doom.
Perhaps that’s because the number of deaths is minuscule compared to the amount of time people spend in vehicles every year. NHTSA measures that exposure in vehicle miles traveled (VMT), and the number of deaths per VMT has dropped steadily since… well, since General Motors and Ford were founded. Even the total number of crash-related deaths in the US had dropped since 1975, despite the rapid growth in the number of vehicles on the road, until that trend reversed in 2015. (More on that later.)
Here’s NHTSA’s graph, telling all:
So people dying in car crashes is not news, because it happens every day. Unless those crashes are caused by self-driving cars.
A Series of Unfortunate Accidents
In 2018, at least two of those deaths will be connected to self-driving vehicles, thanks to two very unfortunate accidents. The first took place at night on March 18, on an uncrowded but dark stretch of divided road in Tempe, Arizona, where a specially equipped Uber Volvo XC90, driving autonomously, struck a woman walking her bicycle across the road. She died of her injuries.
The Tempe police department released video from the car’s dash cam, recorded just prior to and during the accident, in a tweet. The dash cam simultaneously recorded the view of the road ahead and an interior view of the safety driver. (You’ll find copies of this video from various news agencies posted to YouTube.) The footage shows that the pedestrian was in shadow as she crossed the street; she becomes visible only a second or so before the car hits her. The car does not appear to slow until after the collision, and it appears that the impact is what finally caused the safety driver to look up at the road from what appears to be a cell phone or tablet held at lap level.
(You’ll have a hard time convincing me that the spike in collision-caused deaths since 2015 that appears on NHTSA’s graph is not caused by cell phone-induced driver distraction. I see way too much of it every day on the streets of Silicon Valley to believe otherwise.)
According to a March 29 article in the San Jose Mercury News, the attorney for the husband and daughter of the Tempe crash victim has said that the “matter has been resolved.”
On March 23, less than a week after the Arizona incident, a Tesla Model X SUV belonging to Walter Huang was negotiating the connector ramp between Highways 85 and 101 in Mountain View, California. These are both high-speed divided roads. Huang was behind the wheel when the car slammed into a concrete divider on the ramp. The force of the impact tore the entire front end off of the Tesla, all the way back to and including the dashboard. Huang was killed in the crash.
At first, it was not clear whether this accident was the result of human error or machine error. However, after analyzing data saved in the Model X SUV’s black-box recorder, Tesla released the following information in a March 30 blog post:
“In the moments before the collision, which occurred at 9:27 a.m. on Friday, March 23rd, Autopilot was engaged with the adaptive cruise control follow-distance set to minimum. The driver had received several visual and one audible hands-on warning earlier in the drive and the driver’s hands were not detected on the wheel for six seconds prior to the collision. The driver had about five seconds and 150 meters of unobstructed view of the concrete divider with the crushed crash attenuator, but the vehicle logs show that no action was taken.”
The same Tesla blog post also says:
“In the US, there is one automotive fatality every 86 million miles across all vehicles from all manufacturers. For Tesla, there is one fatality, including known pedestrian fatalities, every 320 million miles in vehicles equipped with Autopilot hardware. If you are driving a Tesla equipped with Autopilot hardware, you are 3.7 times less likely to be involved in a fatal accident.”
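The 3.7× figure in that quote follows directly from the two per-fatality mileages Tesla cites. A quick sanity check of the arithmetic:

```python
# Tesla's cited rates (miles driven per fatality), from the March 30 blog post.
all_vehicles_miles_per_fatality = 86e6    # all US vehicles, all manufacturers
autopilot_hw_miles_per_fatality = 320e6   # Teslas with Autopilot hardware

# The relative risk Tesla quotes is just the ratio of the two rates.
ratio = autopilot_hw_miles_per_fatality / all_vehicles_miles_per_fatality
print(f"{ratio:.1f}x fewer fatalities per mile")  # 3.7x fewer fatalities per mile
```

Note that Tesla is comparing itself to a fleet average that includes much older cars, motorcycles, and drunk drivers; the ratio is arithmetic, but the comparison group is Tesla’s choice.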
Fixes: Some Will Be Easier than Others
These unfortunate and fatal accidents indicate a number of engineering failures. Some will be easier to rectify. Some will be darn hard to fix.
Let’s start with the easy stuff. The Uber-modified Volvo XC90 automobile reportedly bristles with sensors including seven video cameras, ten radar sensors, and a lidar. The lidar appears to be a Velodyne HDL-64E sensor with 360-degree coverage and a 120-meter range. Based on photos of Uber’s roof-mounted sensor package, at least three of the video cameras face forward to monitor the situation in front of the vehicle. According to an Uber infographic, the radar units provide 360-degree surrounding coverage for obstacle detection.
With that sort of sensor firepower, the Uber-modified Volvo should have stopped well short of the pedestrian; information from any one of these sensors should have been enough to prevent this accident. Lidar and radar are active sensors: they emit energy (light or microwaves) and detect any energy reflected back by potential obstacles, so they are unimpaired by darkness. The dashcam video from the Volvo suggests that the vehicle’s visible-light video cameras might have been impaired by the darkness, but several good Samaritans who drove the same stretch of road in Tempe at night after the accident have posted cellphone videos suggesting that the road is not as dark as the Volvo’s dashcam portrays.
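To see why darkness shouldn’t matter to an active sensor, consider the basic time-of-flight calculation a lidar performs: it times the round trip of its own light pulse, so ambient light never enters the equation. A minimal sketch of the principle (the 120 m figure matches the HDL-64E’s quoted range; the code itself is illustrative, not Velodyne’s):

```python
SPEED_OF_LIGHT = 3.0e8  # m/s, rounded for illustration

def range_from_time_of_flight(round_trip_seconds):
    """Distance to an obstacle from a lidar pulse's round-trip time.

    The pulse travels out and back, so the one-way range is half
    the total distance the light covered.
    """
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A return arriving 800 ns after the pulse fires means an obstacle
# about 120 m away, the HDL-64E's quoted maximum range.
print(range_from_time_of_flight(800e-9))  # 120.0 (meters)
```

The car’s own emitted pulse is the illumination, which is why an unlit pedestrian at night is exactly the case lidar exists to handle.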
Something appears to have gone very wrong between the collection of the sensor data and its use to stop the car. That’s a comparatively easy thing for engineers to find and fix.
To Be or Not to Be (Paying Attention)
At this point, it’s important to remember that Uber’s autonomous vehicles are still being tested and improved. They’re in beta. They need to be tested in real-world situations, and it’s unfortunate that these tests put people at some level of risk. However, people are also at some level of risk just by getting into their cars and driving themselves to work, as the NHTSA data above shows.
And that brings us to the much harder problem. In both the Uber and Tesla accidents, there was clearly some level of human failure. In the case of the Uber accident in Tempe, the safety driver appears not to have been paying attention: in the moments leading up to the collision, the dashcam video shows the safety driver looking down, away from the road, providing no added safety at all.
In the case of the Tesla crash, the black box recorder data from the doomed Model X SUV shows that the vehicle knew there was some sort of problem: it reportedly tried to get the human driver to take control for several seconds before the accident occurred, and the driver’s hands were not on the wheel for at least six seconds before the fatal crash.
A March 27 Tesla blog post states:
“Our data shows that Tesla owners have driven this same stretch of highway with Autopilot engaged roughly 85,000 times since Autopilot was first rolled out in 2015 and roughly 20,000 times since just the beginning of the year, and there has never been an accident that we know of. There are over 200 successful Autopilot trips per day on this exact stretch of road.”
In both cases, the Uber crash and the Tesla crash, the human driver apparently did nothing to prevent the collision. This should not be a surprise. Drivers are alert precisely because they are driving: the act of piloting a vehicle through traffic helps to keep a driver alert. When the autonomous driving controls take over, it’s inevitable that people behind the wheel will eventually be distracted. They might even doze off from boredom. It’s human nature.
For military jet fighter pilots, the answer is rigorous training. That’s not going to work for the billions of automobile drivers on the planet. A few days of driving around Silicon Valley firmly establishes that there are plenty of human drivers on the road today who have forgotten a lot of the meager driver training they may once have gotten. They don’t signal. They dive-bomb across several lanes of traffic to catch an exit in time. They weave through traffic and drive 20 MPH below the speed limit while checking social media or email and texting. I don’t see driver training getting rigorous enough to fix the human condition in the short- or medium-term future.
One technological fix for this problem that will most certainly fail is having the autonomous car measure driver alertness. Autonomous driving systems often rely on an interior video camera aimed at the person behind the steering wheel: if that person’s eyes point straight ahead at least most of the time, the system infers that the person is probably paying attention to road conditions. However, this metric merely infers driver alertness. It by no means guarantees that the driver is alert enough to prevent an accident.
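A gaze-based driver monitor of the kind described above boils down to thresholding where the eyes point across a window of camera frames. The sketch below is hypothetical; the names, angle limits, and 80% cutoff are my own illustrative assumptions, not any vendor’s. It makes the article’s point concrete: the output is an inference, not a guarantee.

```python
from dataclasses import dataclass

@dataclass
class GazeSample:
    """Estimated gaze direction from one interior-camera frame.

    Angles in degrees; (0, 0) means looking straight at the road.
    """
    yaw: float    # left/right
    pitch: float  # up/down (large negative = looking down at a lap-level phone)

def eyes_on_road_fraction(samples, yaw_limit=15.0, pitch_limit=10.0):
    # Fraction of frames where gaze falls inside a "road ahead" cone.
    # The cone half-angles are illustrative guesses, not industry values.
    if not samples:
        return 0.0
    on_road = sum(
        1 for s in samples
        if abs(s.yaw) <= yaw_limit and abs(s.pitch) <= pitch_limit
    )
    return on_road / len(samples)

def driver_inferred_alert(samples, threshold=0.8):
    # Note the verb: "inferred." Eyes pointed at the road do not
    # guarantee the driver can react in time; that is exactly the
    # flaw discussed above.
    return eyes_on_road_fraction(samples) >= threshold
```

A driver staring at a lap-level phone (pitch around −40°) fails the check, which the Tempe dashcam suggests would have been the right call; but a driver gazing blankly ahead while daydreaming passes it, which is the failure mode no camera angle can catch.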
Safe at Any Speed?
Autonomous cars need to be safe enough to drive themselves. Period.
Are they already?
It depends on your definition of “safe.” In the aftermath of the Tempe accident, Uber suspended its autonomous car program around the world; Nvidia halted public tests of its hardware in autonomous vehicles (including Uber’s); and Toyota also temporarily suspended its autonomous vehicle program. It appears that Uber’s, Nvidia’s, and Toyota’s answers to this question are all “No.” The cars are not yet safe enough to release to the public.
Tesla is in a different situation entirely. Tesla’s vehicles with the Autopilot feature are already being sold to the public and have been for years. Tesla first offered the Autopilot feature on October 9, 2014. Since then, some of Tesla’s vehicles have been involved in fatal crashes while in Autopilot mode but Tesla’s own data suggests that Tesla’s vehicles are safer than human drivers under certain conditions.
In that same March 30 blog, Tesla wrote:
“Over a year ago, our first iteration of Autopilot was found by the U.S. government to reduce crash rates by as much as 40%. Internal data confirms that recent updates to Autopilot have improved system reliability…
The consequences of the public not using Autopilot, because of an inaccurate belief that it is less safe, would be extremely severe. There are about 1.25 million automotive deaths worldwide. If the current safety level of a Tesla vehicle were to be applied, it would mean about 900,000 lives saved per year.”
From Tesla’s March 30 blog post quoted above, the Tesla Autopilot is far less likely to kill you than you are, with one obvious caveat: there aren’t nearly enough Tesla vehicles on the road to make this generalized statement with statistical confidence.
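That statistical caveat can be made concrete. Fatalities are rare events, reasonably modeled as a Poisson count, and with only a handful of events the exact (Garwood) confidence interval on the underlying rate is enormous. The fatality count below is purely illustrative; Tesla’s post gives rates, not raw counts:

```python
import math

def poisson_cdf(k, lam):
    # P(X <= k) for X ~ Poisson(lam)
    return sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k + 1))

def poisson_ci(k, alpha=0.05):
    """Exact (Garwood) two-sided CI for the mean of a Poisson count k,
    found by bisection; the CDF decreases monotonically in lam."""
    def solve(target, cdf_at):
        lo, hi = 0.0, 10.0 * (k + 1)
        for _ in range(200):
            mid = (lo + hi) / 2.0
            if cdf_at(mid) > target:
                lo = mid   # lam too small: CDF still above target
            else:
                hi = mid
        return (lo + hi) / 2.0
    lower = 0.0 if k == 0 else solve(1 - alpha / 2, lambda lam: poisson_cdf(k - 1, lam))
    upper = solve(alpha / 2, lambda lam: poisson_cdf(k, lam))
    return lower, upper

# Suppose (hypothetically) the 320M-miles-per-fatality rate came from
# 3 fatalities over 960M miles. The 95% CI on the count is about (0.6, 8.8),
# so the true rate could plausibly be anywhere from roughly one fatality
# per 1.5 billion miles to one per 110 million miles.
lower, upper = poisson_ci(3)
```

The pessimistic end of that range is not far from the 86-million-mile national average, which is exactly why a few events can’t support a confident 3.7× claim.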
Bottom line: There are still some mighty big engineering obstacles to overcome with autonomous vehicles before the general public will consider them “safe,” but the advent of widespread autonomous driving at some time in the future appears inevitable.
I, for one, welcome our new vehicular overlords (once they’re improved).
Unless they start using Facebook.