Letting go of the steering wheel for the first time will be a terrifying milestone for most drivers. As engineers, we have all known for years that self-driving and assisted-driving cars were coming, and as a group we have a unique appreciation for the myriad challenges – both technical and social – that lie between us and safer roads.
On the technical side, it is clear that a robust, safe self-driving system requires the aggregation of massive amounts of data from a diverse array of sensors, and the software that processes those inputs will be complex, performance-demanding, and in a high state of flux for many years. That adds up to a demanding combination of requirements – massive sensor-aggregation bandwidth, raw data processing, and algorithmic compute performance – that cannot easily be met by any current combination of conventional processors and ASSPs.
It’s time to take some FPGAs to driving school.
As this article goes to virtual press, an Audi A7 is self-driving its way along a 550-mile route from the San Francisco Bay area to the 2015 Consumer Electronics Show (CES) in Las Vegas. So, if you’re sitting back in the safety of your lab chair thinking that all this is a rhetorical exercise for some unlikely future scenario – well, welcome to the future.
Advanced Driver Assistance Systems (ADAS) are the next big wave in automotive technology. ADAS features range from automatic emergency braking, adaptive cruise control, and lane-departure warnings all the way to full-blown auto-drive capability. Most systems rely on a distributed architecture – a separate electronic control module is added for each new capability, and those modules communicate with each other over various networking standards. Audi, however, is using a centralized control box that aggregates and processes all sensor data for all of the various ADAS features such as parking, night vision, lane departure, and even fully automated driving, which Audi calls “piloted driving.”
For piloted driving, there are three primary sensor types doing the heavy lifting – vision (cameras distributed in various locations around the car), radar, and laser. Signal data from all of these sensors has to be processed, filtered, aggregated, and put into a form where an applications processor running the higher-level algorithms can quickly determine context (where the vehicle is and what it is doing) and apply it to the task at hand – driving the car.
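To give a feel for the filtering-and-aggregation step, here is a minimal, hypothetical sketch (not Audi’s algorithm – the function and its inputs are invented for illustration) of fusing two independent range estimates of the same object – say, one from radar and one from a camera – by inverse-variance weighting, the textbook way to combine noisy measurements of a single quantity:

```python
def fuse(z1, var1, z2, var2):
    """Combine two independent measurements of the same quantity by
    inverse-variance weighting: the noisier sensor gets less say.
    Returns the fused estimate and its (smaller) variance."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    est = (w1 * z1 + w2 * z2) / (w1 + w2)
    return est, 1.0 / (w1 + w2)

# Equal confidence: radar says 10 m, camera says 12 m -> fused 11 m.
# Unequal confidence: the low-variance sensor dominates the result.
```

With equal variances this reduces to a plain average; a real system would run a full tracking filter per target over time, but the underlying principle is the same.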
Altera and Audi have just announced that Altera SoC FPGAs are being used to perform these critical functions in Audi’s new zFAS central driver assistance control unit. SoC FPGAs (or, as we call them, Heterogeneous Integrated Processing Platforms (HIPPs)) bring a unique combination of programmable LUT fabric and high-performance conventional processors all on one chip. This makes the HIPP an extremely fast, flexible, heterogeneous processor that can do complex compute-intensive tasks with very low power consumption.
The Audi zFAS is being jointly developed by Audi and TTTech. It uses Altera Cyclone V SoC FPGAs, which combine two ARM Cortex-A9 processor cores, FPGA fabric, DSP blocks, and flexible programmable IO. The Cyclone device handles sensor fusion – processing the radar and video streams together – and also implements a deterministic, time-triggered Ethernet switch that enables reliable high-speed communication among the various subsystems.
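The appeal of time-triggered networking is that bandwidth for critical traffic is reserved by a static schedule rather than won by arbitration, so worst-case latency is bounded by design. This hypothetical sketch (slot numbers and stream names invented; not TTTech’s actual protocol) shows the core idea – each critical stream owns a fixed slot in a repeating cycle:

```python
CYCLE_US = 1000  # hypothetical 1 ms communication cycle

# (start_us, length_us, stream) -- statically scheduled, non-overlapping slots
SCHEDULE = [
    (0, 200, "radar"),
    (200, 400, "camera"),
    (600, 100, "control"),
]

def owner(t_us):
    """Return which stream may transmit at absolute time t_us, or None
    if the cycle phase falls in a guard band / best-effort window."""
    phase = t_us % CYCLE_US
    for start, length, stream in SCHEDULE:
        if start <= phase < start + length:
            return stream
    return None
```

Because every node agrees on the schedule, a switch built this way can guarantee that a burst of best-effort traffic never delays the control stream.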
In the fast-moving ADAS world, scalability and reconfigurability are critical. New algorithms and updated sensors and displays arrive at a dizzying pace, and the flexibility of FPGAs is required to adapt to the various configurations of hardware as well as to the various levels of features demanded by different auto models. This makes ADAS a “killer app” for FPGAs, and particularly HIPPs such as the Cyclone V SoC FPGA.
As with many compute-acceleration applications of FPGAs, the big challenge for system designers is programming the FPGA. This is where higher-level languages and more abstract design methodologies such as model-based design, high-level synthesis, and other algorithmic design flows come into the picture. In Altera’s case, the Cyclone V SoC FPGA can be programmed using Altera’s OpenCL implementation. Designers write OpenCL code (much the same as they would for a GPU-based implementation) and compile it into a high-performance FPGA implementation. In the ADAS arena, Altera wrote a dense optical flow algorithm in OpenCL and deployed it on the Cyclone V SoC FPGA. The company says development took less than three weeks, whereas an RTL implementation would have required several months. The resulting design consumed approximately 55K LUTs – half of the fabric on a 110K-LUT Cyclone device.
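To see why dense optical flow suits this flow so well, here is a hypothetical pure-Python sketch (not Altera’s kernel, and a deliberately naive method) of the simplest dense approach: per-pixel block matching. Every pixel’s computation is independent of every other’s, which is exactly the structure an OpenCL compiler can turn into parallel, pipelined FPGA hardware:

```python
def block_match_flow(f0, f1, patch=1, search=2):
    """Dense optical flow by block matching: for each interior pixel of
    frame f0, find the displacement (dy, dx) within +/-search that
    minimizes the sum of absolute differences (SAD) against frame f1."""
    h, w = len(f0), len(f0[0])
    flow = [[(0, 0)] * w for _ in range(h)]
    r = patch + search  # margin so every patch access stays in bounds
    for y in range(r, h - r):
        for x in range(r, w - r):
            best, best_d = None, (0, 0)
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    sad = sum(
                        abs(f0[y + py][x + px] - f1[y + dy + py][x + dx + px])
                        for py in range(-patch, patch + 1)
                        for px in range(-patch, patch + 1))
                    if best is None or sad < best:
                        best, best_d = sad, (dy, dx)
            flow[y][x] = best_d
    return flow
```

In an OpenCL version, the two outer loops disappear: each (y, x) becomes a work-item, and the compiler pipelines the inner search – which is where the FPGA fabric earns its keep.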
For those who may be wondering why the humble Cyclone V is being tapped for this application when the company produces much more capable devices such as the mid-range Arria and high-end Stratix device families, there are actually several answers. First, Cyclone V meets the performance requirements of these first-generation ADAS applications. Second, Cyclone is the only automotive-qualified family in Altera’s lineup, and Cyclone’s wire-bond packaging is the current standard in the automotive world – versus the more sophisticated flip-chip packaging used in higher-end FPGAs. Finally, even in a system as expensive as an automobile, BOM cost is a major constraint. The cost structure of even high-end luxury automobiles doesn’t lend itself to the use of Stratix-class FPGAs.
As we go to press, the Audi A7 demo vehicle (dubbed “Jack”) has successfully completed its voyage from Silicon Valley to Las Vegas for the 2015 Consumer Electronics Show. Jack employs long-range forward radar along with rear- and side-facing radar sensors. The radar is backed up by a LIDAR laser scanner on the front as well as a front-mounted 3D camera and four additional cameras at the corners of the car. The zFAS system’s FPGAs were most likely scooping up plenty of data from the desert floor as the A7 made its way toward Sin City.
Elon Musk (Tesla) is on record saying that self-driving car technology will be ready for production by 2016. (Yep, that’s next year, folks.) Even if Musk’s estimate turns out to be overly optimistic, we’ll undoubtedly be sharing the roads with robots in the not-too-distant future, and FPGAs will be a big part of that equation. It should make the world a safer and happier place.