Maybe You Can’t Drive My Car (Yet) Part 3

What Will Our Robotic Masters Think When They Find Out That We Cheat?

“I’m not bad. I’m just drawn that way.” – Jessica Rabbit

In the last installment of this article series (see “Maybe You Can’t Drive My Car (Yet) Part 2”), the NTSB had just reported preliminary findings regarding the fatal crash of a Tesla Model S in Silicon Valley near the difficult intersection of US 101 and California State Highway (SH) 85. Briefly, the NTSB report said:

“According to performance data downloaded from the vehicle, the driver was using the advanced driver assistance features traffic-aware cruise control and autosteer lane-keeping assistance, which Tesla refers to as “autopilot.” As the Tesla approached the paved gore area dividing the main travel lanes of US-101 from the SH-85 exit ramp, it moved to the left and entered the gore area. The Tesla continued traveling through the gore area and struck a previously damaged crash attenuator at a speed of about 71 mph. The crash attenuator was located at the end of a concrete median barrier.”

In the conclusion to that article, I wrote:

“…the Tesla Autopilot produced three warnings in 19 minutes. The car knew and announced that it didn’t feel capable of driving in this traffic.

That tells me that we have yet to think through and solve the problem of car-to-human-driver handover—and that’s a big problem. Tesla’s Autopilot reportedly gives the driver at least two minutes of grace period between the time it asks for human driving help and the time it disengages the Autopilot. Mr. Huang’s repeated grabbing and releasing of the steering wheel suggests that it may be overly easy to spoof that two-minute period by holding the wheel for just a few seconds to reset the timeout.

A self-driving car that can be spoofed like this is a dangerous piece of technology, and yet I see no good solution to this problem.”

It seems that Tesla now has a partial solution to this problem.

On June 22, the San Jose Mercury News published a first-hand user report of a Tesla Model 3 EV, written by its senior business section Web editor Rex Crum. (See “Trading in ‘The Beast’ Ram truck for a Tesla Model 3: Oops, is it broken?”) Tesla loaned a Model 3 EV to Crum and he drove it instead of his Dodge Ram 1500 V-8, 5.7-liter hemi quad-cab truck for a couple of days. During his test drive, Crum intentionally experimented with Tesla’s Autopilot, forcing it to react to his hands-off driving just to see what the car would do.

Crum writes:

“Now, Tesla’s Autopilot feature doesn’t turn the Model 3 into a self-driving car. What it can do is hold the Model 3 steady in your driving lane, and change lanes. When we got onto Interstate 580 and sped up a bit, I double-tapped down on the shifter to engage Autopilot. I could feel the steering wheel take a little control away from my hands as the Model 3 reached my pre-set speed of 70 miles per hour. A set of blue lane markers appeared on the screen to show my car was in Autopilot mode and holding steady in the lane.

“However, to get a true feeling of Autopilot in the Model 3, I had to break Tesla’s rule — I took my hands off the steering wheel. I kept them hovering above the wheel during this experiment. And it didn’t take long for the Model 3 to tell me to put my hands back in place through a series of escalating notifications: Words on the screen, then a pulsing blue light on the screen meant to catch my eye about the hands-on alert, then the pulsing blue light and one loud “beep,” and, finally, after a brief pause, several beeps to go along with the blue light.

“And then, Autopilot turned itself off. And since my hands had been off the wheel for so long, the car told me Autopilot would remain off for the remainder of our drive.”

Consequences! Finally.

Crum’s crummy driving demonstrated to the Tesla Model 3 that he was not going to be a hands-on driver. After repeated warnings, the car first shut off and then disabled Autopilot for the remainder of the trip. That certainly eliminates one spoofing mode.
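Just to make that behavior concrete, here’s a minimal sketch of how an escalating hands-off-warning policy with a per-drive lockout might be structured. The stages, timings, and reset rules here are my own assumptions for illustration; this is not Tesla’s actual implementation.

```python
# Hypothetical sketch of an escalating hands-off-warning policy with a
# per-drive lockout, loosely modeled on the behavior Crum describes.
# Stages, timings, and the lockout rule are invented for illustration,
# not taken from Tesla's software.

from dataclasses import dataclass

# Escalation stages, in order, with the hands-off time (seconds) at which
# each fires. The values are made up.
ESCALATION = [
    (10.0, "show text warning on screen"),
    (20.0, "pulse blue border on screen"),
    (30.0, "pulse blue border + single beep"),
    (40.0, "repeated beeps + blue border"),
    (45.0, "disengage autopilot"),
]

@dataclass
class AutosteerMonitor:
    hands_off_seconds: float = 0.0
    locked_out_for_drive: bool = False   # stays True until the car is restarted
    engaged: bool = False

    def engage(self) -> bool:
        """Driver double-taps the stalk; refuse if locked out for this drive."""
        if self.locked_out_for_drive:
            return False
        self.engaged = True
        self.hands_off_seconds = 0.0
        return True

    def update(self, dt: float, hands_on_wheel: bool) -> list[str]:
        """Called every control cycle; returns any alerts to raise this cycle."""
        if not self.engaged:
            return []
        if hands_on_wheel:
            self.hands_off_seconds = 0.0      # any detected wheel torque resets the timer
            return []
        previous = self.hands_off_seconds
        self.hands_off_seconds += dt
        alerts = [msg for t, msg in ESCALATION if previous < t <= self.hands_off_seconds]
        if self.hands_off_seconds >= ESCALATION[-1][0]:
            self.engaged = False
            self.locked_out_for_drive = True  # no autopilot for the rest of the trip
        return alerts
```

Notice that in this sketch the timer resets on any detected wheel torque, and the lockout clears when the car is restarted. Those are exactly the kinds of loopholes discussed below.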

Unfortunately, it doesn’t seem to fix the whole problem. First, there seems to be a lot of leeway built into the multiple, escalating messages, warnings, and beeps before the Tesla decides that it’s got an inattentive driver behind the wheel. A crash could certainly happen during all of that delay, before the driver retakes the wheel. Second, it sounds like the easiest way to continue to spoof the Tesla Autopilot’s hands-on-the-wheel sensor is to pull over, stop, turn off the car, and then turn it back on to reset the Autopilot.

That’s certainly not something I’d do, but I drive daily in Silicon Valley and I’m guessing a lot of the drivers I see on the roads would do something like that. After all, they dive-bomb from high-speed freeway lanes across one, two, or three slower traffic lanes to zoom down exits that they almost missed. They make right turns from the left lanes of surface streets. They make left turns into oncoming traffic at intersections, giving the shocked oncoming drivers the bird in the process and forcing them to slam on the brakes. They drive 20 mph below the speed limit, and, when you pass these road boulders, you see they’re looking down at their phones. Deeply concentrating on the president’s latest tweets, no doubt.

Although all of this bad driving by humans makes me strongly desire the beginning of the driverless car era, my desire isn’t strong enough to overcome my skepticism that AI-infused “autopilots” can handle today’s streets at least as safely as human drivers do. And yes, I know that’s not saying much. Simply reread the previous paragraph.

Spoofing automatically piloted cars here in Silicon Valley isn’t new. Pedestrians in Mountain View have been interacting with Google’s self-driving vehicles for years now and have learned that they can walk in front of such vehicles with abandon because the cars are programmed not to run over pedestrians, no matter what. The human animal seems purpose-designed to exploit loopholes in every system. (No, that’s not an argument for Intelligent Design.)

How do I know that people will continue to try to exploit loopholes in Tesla’s Autopilot? Case in point: CNN reported on a device called “Autopilot Buddy” on June 19, saying that “NHTSA also ordered the California company that makes the device, Dolder, Falco and Reese Partners LLC, to stop selling and marketing the Autopilot Buddy in the United States.”

What’s Autopilot Buddy? It’s a $200 “device” that appears to be nothing more than a two-piece, self-attaching weight that you clamp to your Tesla’s steering wheel to mimic a driver putting light mechanical “resistance” on the wheel. That resistance convinces Autopilot that a human driver is holding the wheel, so it doesn’t disable itself.
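For the curious, here’s a toy illustration of why a clamp-on weight defeats a hands-on check based on a simple torque threshold, and how a hypothetical check that looks for torque variation over time would be harder to fool. None of this reflects Tesla’s actual detection logic; the functions, thresholds, and sample values are invented for illustration.

```python
# Toy model of why a clamp-on weight can fool a naive hands-on check.
# Not Tesla's detection logic; thresholds and values are invented.

def naive_hands_on(torque_nm: float, threshold: float = 0.3) -> bool:
    """Declare 'hands on' whenever any steering torque above a small
    threshold is measured -- a constant dead weight passes this test."""
    return abs(torque_nm) > threshold

def variation_based_hands_on(torque_samples: list[float],
                             min_std: float = 0.05) -> bool:
    """A (hypothetical) stricter test: a human hand produces small,
    irregular torque fluctuations, while a fixed weight produces an
    almost perfectly constant bias. Require some variation over time."""
    n = len(torque_samples)
    if n < 2:
        return False
    mean = sum(torque_samples) / n
    variance = sum((t - mean) ** 2 for t in torque_samples) / n
    return variance ** 0.5 > min_std

# A small weight hanging on one side of the wheel applies a steady torque:
weight_samples = [0.45] * 50                                          # constant bias
human_samples = [0.45 + 0.1 * ((i % 7) - 3) / 3 for i in range(50)]   # wobbly grip

print(naive_hands_on(weight_samples[0]))          # True  -- spoofed by the weight
print(variation_based_hands_on(weight_samples))   # False -- constant weight rejected
print(variation_based_hands_on(human_samples))    # True  -- human grip accepted
```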

But don’t worry, there’s a convenient caveat on the Autopilotbuddy.com Web site—in somewhat broken English:

“Note: Autopilot Buddy is for Track Use Only. This is not marketed for “street use”. “Autopilot Buddy” is installed you the driver assumes responsibility for the car.”

There’s also this sentence taken from the Web site:

“This device restricts built-in safety features of the car. Taking your hands off the wheel of any motor vehicle is dangerous.”

Are we there yet?

Nope.
