
Maybe You Can’t Drive My Car (Yet) Part 3

What Will Our Robotic Masters Think When They Find Out That We Cheat?

“I’m not bad. I’m just drawn that way.” – Jessica Rabbit

In the last installment of this article series (see “Maybe You Can’t Drive My Car (Yet) Part 2”), the NTSB had just reported preliminary findings regarding the fatal crash of a Tesla Model X in Silicon Valley near the difficult interchange of US 101 and California State Highway (SH) 85. Briefly, the NTSB report said:

“According to performance data downloaded from the vehicle, the driver was using the advanced driver assistance features traffic-aware cruise control and autosteer lane-keeping assistance, which Tesla refers to as “autopilot.” As the Tesla approached the paved gore area dividing the main travel lanes of US-101 from the SH-85 exit ramp, it moved to the left and entered the gore area. The Tesla continued traveling through the gore area and struck a previously damaged crash attenuator at a speed of about 71 mph. The crash attenuator was located at the end of a concrete median barrier.”

In the conclusion to that article, I wrote:

“…the Tesla Autopilot produced three warnings in 19 minutes. The car knew and announced that it didn’t feel capable of driving in this traffic.

That tells me that we have yet to think through and solve the problem of car-to-human-driver handover—and that’s a big problem. Tesla’s Autopilot reportedly gives the driver at least two minutes of grace period between the time it asks for human driving help and the time it disengages the Autopilot. Mr. Huang’s repeated grabbing and releasing of the steering wheel suggests that it may be overly easy to spoof that two-minute period by holding the wheel for just a few seconds to reset the timeout.

A self-driving car that can be spoofed like this is a dangerous piece of technology, and yet I see no good solution to this problem.”
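To make that spoofing weakness concrete, here’s a minimal sketch, in Python, of the kind of reset-on-touch attention timer I was describing. It’s purely illustrative; Tesla’s actual Autopilot logic isn’t public, and the two-minute grace period is only what had been reported at the time. The point is simply that any brief tug on the wheel zeroes the countdown, so an occasional touch keeps the system engaged indefinitely:

GRACE_PERIOD_S = 120.0          # the roughly two-minute grace period reported at the time

class NaiveHandsOnMonitor:
    """Illustrative only -- not Tesla's code. Models a timer that fully
    resets whenever the driver briefly applies torque to the wheel."""

    def __init__(self):
        self.seconds_since_torque = 0.0
        self.autopilot_engaged = True

    def tick(self, dt_s, wheel_torque_detected):
        """Call once per control cycle with elapsed time and the torque-sensor state."""
        if wheel_torque_detected:
            self.seconds_since_torque = 0.0      # any brief touch resets the clock
        else:
            self.seconds_since_torque += dt_s
        if self.seconds_since_torque > GRACE_PERIOD_S:
            self.autopilot_engaged = False       # only now does Autopilot give up

# A driver who grabs the wheel for 2 seconds out of every 110 never lets
# the timer expire, so Autopilot stays engaged for the whole trip.
monitor = NaiveHandsOnMonitor()
for second in range(600):                        # simulate 10 minutes, one tick per second
    monitor.tick(1.0, wheel_torque_detected=(second % 110 < 2))
print(monitor.autopilot_engaged)                 # True

A few seconds of torque every couple of minutes is all it takes, which, as we’ll see later, is exactly the loophole a clamp-on steering-wheel weight exploits.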

It seems that Tesla now has a partial solution to this problem.

On June 22, the San Jose Mercury News published a first-hand user report of a Tesla Model 3 EV, written by its senior business section Web editor Rex Crum. (See “Trading in ‘The Beast’ Ram truck for a Tesla Model 3: Oops, is it broken?”) Tesla loaned a Model 3 EV to Crum and he drove it instead of his Dodge Ram 1500 V-8, 5.7-liter hemi quad-cab truck for a couple of days. During his test drive, Crum intentionally experimented with Tesla’s Autopilot, forcing it to react to his hands-off driving just to see what the car would do.

Crum writes:

“Now, Tesla’s Autopilot feature doesn’t turn the Model 3 into a self-driving car. What it can do is hold the Model 3 steady in your driving lane, and change lanes. When we got onto Interstate 580 and sped up a bit, I double-tapped down on the shifter to engage Autopilot. I could feel the steering wheel take a little control away from my hands as the Model 3 reached my pre-set speed of 70 miles per hour. A set of blue lane markers appeared on the screen to show my car was in Autopilot mode and holding steady in the lane.

“However, to get a true feeling of Autopilot in the Model 3, I had to break Tesla’s rule — I took my hands off the steering wheel. I kept them hovering above the wheel during this experiment. And it didn’t take long for the Model 3 to tell me to put my hands back in place through a series of escalating notifications: Words on the screen, then a pulsing blue light on the screen meant to catch my eye about the hands-on alert, then the pulsing blue light and one loud “beep,” and, finally, after a brief pause, several beeps to go along with the blue light.

“And then, Autopilot turned itself off. And since my hands had been off the wheel for so long, the car told me Autopilot would remain off for the remainder of our drive.”

Consequences! Finally.

Crum’s crummy driving demonstrated to the Tesla Model 3 that he was not going to be a hands-on driver. After repeated warnings, the car first shut off and then disabled Autopilot for the remainder of the trip. That certainly eliminates one spoofing mode.

Unfortunately, it doesn’t seem to fix the whole problem. First, there seems to be a lot of leeway built into the multiple escalating messages, warnings, and beeps before the Tesla decides that it has an inattentive driver behind the wheel. A crash could certainly occur during all of those delays in retaking the wheel. Second, it sounds like the easiest way to keep spoofing Tesla’s Autopilot hands-on-the-wheel sensor is to pull over, stop, turn off the car, and then turn it back on to reset Autopilot.
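For the sake of illustration, here’s a second sketch, again in Python and again with made-up thresholds (Tesla hasn’t published the real ones), of the escalation-and-lockout behavior Crum describes. Because the lockout lives only in the current drive session, turning the car off and back on starts a fresh session with Autopilot available again:

from enum import Enum, auto

class Alert(Enum):
    NONE = auto()
    TEXT_WARNING = auto()       # words on the screen
    PULSING_LIGHT = auto()      # pulsing blue light
    SINGLE_BEEP = auto()        # light plus one loud beep
    REPEATED_BEEPS = auto()     # light plus several beeps
    AUTOPILOT_OFF = auto()      # Autopilot disabled for the rest of the drive

# Made-up thresholds (seconds of continuous hands-off driving);
# Tesla's real timing isn't published.
ESCALATION = [
    (10, Alert.TEXT_WARNING),
    (20, Alert.PULSING_LIGHT),
    (30, Alert.SINGLE_BEEP),
    (40, Alert.REPEATED_BEEPS),
    (50, Alert.AUTOPILOT_OFF),
]

class DriveSession:
    """State that exists only while the car is on -- restarting discards it."""

    def __init__(self):
        self.autopilot_locked_out = False

    def alert_for(self, hands_off_s):
        if self.autopilot_locked_out:
            return Alert.AUTOPILOT_OFF
        level = Alert.NONE
        for threshold, alert in ESCALATION:
            if hands_off_s >= threshold:
                level = alert
        if level is Alert.AUTOPILOT_OFF:
            self.autopilot_locked_out = True     # stays off for this drive...
        return level

session = DriveSession()
print(session.alert_for(55))    # Alert.AUTOPILOT_OFF -- locked out
print(session.alert_for(0))     # still Alert.AUTOPILOT_OFF
session = DriveSession()        # ...but power-cycling the car starts over
print(session.alert_for(0))     # Alert.NONE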

That’s certainly not something I’d do, but I drive daily in Silicon Valley, and I’m guessing a lot of the drivers I see on the roads would do something like that. After all, they dive-bomb from high-speed freeway lanes across one, two, or three slower traffic lanes to zoom down exits they almost missed. They make right turns from the left lanes of surface streets. They make left turns in front of oncoming traffic at intersections, giving the shocked oncoming drivers the bird and forcing them to slam on their brakes. They drive 20 mph below the speed limit, and when you pass these road boulders, you see that they’re looking down at their phones. Deeply concentrating on the president’s latest tweets, no doubt.

Although all of this bad human driving makes me long for the driverless-car era to begin, that desire isn’t strong enough to overcome my skepticism that AI-infused “autopilots” can handle today’s streets at least as safely as human drivers do. And yes, I know that’s not saying much. Simply reread the previous paragraph.

Spoofing automatically piloted cars here in Silicon Valley isn’t new. Pedestrians in Mountain View have been interacting with Google’s self-driving vehicles for years now and have learned that they can walk in front of such vehicles with abandon because the cars are programmed not to run over pedestrians, no matter what. The human animal seems purpose-designed to exploit loopholes in every system. (No, that’s not an argument for Intelligent Design.)

How do I know that people will continue to try to exploit loopholes in Tesla’s Autopilot? Case in point: CNN reported on a device called “Autopilot Buddy” on June 19, saying that “NHTSA also ordered the California company that makes the device, Dolder, Falco and Reese Partners LLC, to stop selling and marketing the Autopilot Buddy in the United States.”

What’s Autopilot Buddy? It’s a $200 “device” that appears to be nothing more than a two-piece, self-attaching weight that you clamp to your Tesla’s steering wheel to mimic a driver putting light mechanical “resistance” on the wheel. That resistance convinces Autopilot that a human driver is holding the wheel, so it won’t disable itself.

But don’t worry, there’s a convenient caveat on the Autopilotbuddy.com Web site—in somewhat broken English:

“Note: Autopilot Buddy is for Track Use Only. This is not marketed for “street use”. “Autopilot Buddy” is installed you the driver assumes responsibility for the car.”

There’s also this sentence taken from the Web site:

“This device restricts built-in safety features of the car. Taking your hands off the wheel of any motor vehicle is dangerous.”

Are we there yet?

Nope.
