
Xilinx Hits the Road with Daimler

SoCs to Power Automotive AI Applications

In what appears to be another win for Xilinx’s formidable Zynq SoC FPGA platform, Xilinx and Daimler announced a collaboration on “an in-car system using Xilinx technology for artificial intelligence (AI) processing in automotive applications.” We say this “appears” to be a win for Zynq because Zynq isn’t mentioned by name in the very vague press release. It does say, however, that the solution is “powered by a Xilinx automotive platform consisting of system-on-a-chip (SoC) devices and AI acceleration software.” We believe the SoC devices will be some member(s) of the Zynq family – possibly the Zynq UltraScale+ MPSoC.

This is a reinforcement of the flexibility and utility of FPGAs – and particularly SoC FPGAs – in rapidly evolving technology areas such as advanced driver assistance systems (ADAS) and autonomous driving (AD). Even though Xilinx has never called Zynq an “FPGA,” it clearly is one, and the primary differentiator between Zynq and other, more generic, SoCs is the clever and tight integration of FPGA fabric on the same chip with an ARM-based multi-core, multi-architecture processing subsystem.

For advanced automotive applications, FPGA SoCs bring a double-whammy of benefits. First, the FPGA fabric allows a single part to be configured in arbitrary ways in terms of its interfaces. The FPGA can easily be configured to talk to any combination of sensors or other peripherals, and it can be reconfigured in system as that list of peripherals changes over time, or between similar systems. At this stage in the evolution of automotive technology, that list changes constantly, so the adaptable nature of FPGAs shines. 

Second, FPGA fabric itself is an effective vehicle for acceleration of key algorithms, and neural net inferencing is certainly a killer app for FPGA acceleration. While GPUs and other specialized processors own the training phase of deep-learning neural net applications because of their superior floating-point performance, FPGAs shine in the inferencing task because of their extreme efficiency in low-precision fixed-point processing. Once the training is done and the trained system is deployed in the field, the low power, low latency, and massive parallelism of FPGA fabric are huge advantages.
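To make the low-precision point concrete, here is a minimal, purely illustrative C++ sketch (ours, not Xilinx code) of the int8 multiply-accumulate operation that dominates inferencing workloads. On an FPGA, each such MAC can map to a DSP slice or a handful of LUTs, and thousands of them can run in parallel every clock cycle.

    #include <cstdint>
    #include <cstddef>
    #include <cstdio>

    // Illustrative int8 dot product of the kind that dominates neural-net
    // inferencing. A wide 32-bit accumulator avoids overflow; any quantization
    // scale factors are assumed to be applied elsewhere.
    int32_t dot_int8(const int8_t* weights, const int8_t* activations, size_t n) {
        int32_t acc = 0;
        for (size_t i = 0; i < n; ++i) {
            acc += static_cast<int32_t>(weights[i]) * static_cast<int32_t>(activations[i]);
        }
        return acc;
    }

    int main() {
        const int8_t w[4] = {12, -7, 33, 5};   // hypothetical quantized weights
        const int8_t a[4] = {90, 14, -3, 127}; // hypothetical quantized activations
        std::printf("acc = %d\n", static_cast<int>(dot_int8(w, a, 4)));
        return 0;
    }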

Third (OK, that actually makes it a triple-whammy, doesn’t it?), Xilinx’s SoC FPGAs have very robust IO capabilities, including ample programmable multi-gigabit SerDes transceivers. In automotive applications such as ADAS and AD, some of the sensors generate copious amounts of data, and that data must be processed in real time with very low latency. The enormous IO bandwidth of these devices allows them to be placed closer to the edge – often right with the sensor or sensor subsystem – which reduces system latency and dramatically reduces the amount of data that has to be passed upstream.
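For a rough sense of scale, here is a back-of-the-envelope calculation with assumed numbers – a single 1080p, 30 fps, 24-bit camera versus a hypothetical 32-byte-per-object result stream that an edge-placed accelerator might forward upstream instead:

    #include <cstdio>

    // Back-of-the-envelope comparison with assumed numbers: raw camera
    // bandwidth versus processed object metadata forwarded upstream.
    int main() {
        const double pixels    = 1920.0 * 1080.0;
        const double fps       = 30.0;
        const double bits_px   = 24.0;
        const double raw_bps   = pixels * fps * bits_px;           // raw sensor stream
        const double objects   = 50.0;                             // assumed detections per frame
        const double bytes_obj = 32.0;                             // assumed descriptor size
        const double meta_bps  = objects * bytes_obj * 8.0 * fps;  // processed metadata stream
        std::printf("raw:      %.2f Gbit/s\n", raw_bps / 1e9);     // ~1.49 Gbit/s
        std::printf("metadata: %.4f Gbit/s\n", meta_bps / 1e9);    // ~0.0004 Gbit/s
        return 0;
    }

Even with these made-up numbers, processing at the sensor cuts the upstream traffic by several orders of magnitude.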

While the cost structure for automotive electronics favors ASSPs, standard parts, and ASICs, none of those technologies has the inherent flexibility of FPGAs in terms of their ability to deal with the wide variety of system components that will come along, the constant change in algorithms and system architecture, and the rapid evolution of software required to move automotive technology forward. As the current wave of automation matures, we will certainly see FPGA solutions replaced with more cost-effective alternatives. Until then, though, FPGAs (and again, particularly SoC FPGAs) should be powerful enablers.

The beauty of the Zynq platform is the tight integration of its robust “normal” SoC features (processors, memory interfaces, peripherals, etc.) with the programmable fabric and FPGA-like IO. Zynq is a powerful (albeit absurdly complex) heterogeneous computing platform that allows an application to be optimally partitioned between various compute resources including multi-core applications processors, real-time processors, graphics processors, and of course, FPGA-based accelerators. The challenge with Zynq, due to its complexity, is the development of applications that take advantage of all those resources in an optimal way. Partitioning an application to take advantage of FPGA acceleration generally requires a great deal of expertise in FPGA design as well as keen attention to the problem of getting data into and out of the accelerators. On top of that problem, creating memory architectures that allow all of the various computing resources to have the access they need is a daunting task.
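As a rough illustration of that data-movement problem, below is a hypothetical Vivado-HLS-style kernel (our sketch, not anything from the Xilinx/Daimler announcement) showing how interface pragmas expose an AXI master port for bulk data and an AXI-Lite port for control – the usual way the processing subsystem hands buffers to a fabric accelerator over the on-chip interconnect.

    // Hypothetical HLS-style sketch of a fabric accelerator. The pragmas expose
    // an AXI master interface for bulk data and an AXI-Lite interface for
    // control registers, so software on the ARM cores can pass buffer pointers
    // and parameters to the accelerator.
    extern "C" void scale_accel(const int* in, int* out, int n, int factor) {
    #pragma HLS INTERFACE m_axi     port=in     offset=slave bundle=gmem
    #pragma HLS INTERFACE m_axi     port=out    offset=slave bundle=gmem
    #pragma HLS INTERFACE s_axilite port=n      bundle=control
    #pragma HLS INTERFACE s_axilite port=factor bundle=control
    #pragma HLS INTERFACE s_axilite port=return bundle=control
        for (int i = 0; i < n; ++i) {
    #pragma HLS PIPELINE II=1   // one result per clock once the pipeline fills
            out[i] = in[i] * factor;
        }
    }

The kernel itself is trivial by design; the point is that the partitioning decision – what runs here versus on the processors, and how the buffers flow – is where the real engineering effort goes.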

The announcement with Daimler is extraordinarily lean on details, so we are left to assume and infer a great deal. Xilinx devices have been designed into cars for decades, so what makes this case special? The announcement appears to describe a fairly straightforward application of Zynq technology, which has been around for years. The nature of the collaboration is not described in any detail, but we could assume that Xilinx will supply Zynq devices and will work with Daimler on the task of mapping Daimler’s automotive AI applications to the Zynq architecture – a task which, as mentioned above, is crazy complicated and would probably benefit immensely from Xilinx’s expertise. 

Daimler is also working with Mobileye and Nvidia on AI-related development, so Xilinx appears to be joining a large field of vendors and technologies vying for sockets in the Daimler version of the next-generation automobile. We’d speculate that the flexibility of Xilinx’s solution will be especially important in the early days when the systems are in flux, and that the final mass-produced in-car systems will rely more on cost- and power-optimized components. Subsystems such as Lidar, for example, currently rely heavily on FPGAs, but they will eventually have to ditch the FPGAs in favor of more optimized solutions in order to meet the stringent cost requirements of automotive mass production.

The release gives no timeline or expected deliverable from the collaboration, except to say that “Mercedes-Benz will productize Xilinx’s AI processor technology, enabling the most efficient execution of their neural networks.” Given automotive development cycles, this likely means it will be several years before we see Zynq devices doing AI inferencing in a Mercedes. However, we’d speculate that Zynq devices will already be performing various other tasks inside many cars much earlier – particularly in the realm of sensor fusion/aggregation and embedded vision.

Despite Xilinx’s formidable technical prowess, the company has been enigmatic in their communications strategy of late, and that may be important. Xilinx marketing and PR have gone mostly silent in recent months, and the press release for this announcement is the company’s first in two months (Xilinx has traditionally put out numerous releases each month). Even though Xilinx has been the world’s leading FPGA company for the past several decades, they now claim not to be an FPGA company at all. The previous release described a strategic move away from traditional FPGAs and into the data center with a “Data Center First” strategy. It also revealed that Xilinx plans to augment their SoC FPGAs even more (and change the name again) in the next-generation 7nm family by adding a full-fledged hardware network-on-chip (NoC). That future family, due to tape out in 2019 and unlikely to ship in volume before 2020, is what Xilinx is calling the “Adaptive Compute Acceleration Platform” (ACAP). The next release two months later is this one – about embedded technology for automotive AI (which is not at all “Data Center First”).

There is evidence of some attempt to make the messaging consistent, however. Now that the company has said that their next FPGAs will be called ACAPs instead, they are retroactively applying the “adaptable” moniker to existing products such as Zynq. But poor Zynq has endured repeated re-labeling over the years we have known and admired it. While Altera (now Intel PSG) simply labeled their FPGAs with processors as “SoC FPGAs” and stuck with it, Xilinx marketing has played word salad with Zynq from the beginning – re-naming and re-classifying Zynq at every turn. Anybody remember “Extensible Processing Platform (EPP)”?

Rumors continue that Xilinx is positioning themselves as a possible acquisition target. If so, a sharp drop in communications would be consistent with the “quiet period” we often see during pre-announcement stages of many acquisitions. Supporting that theory, Xilinx is actively positioning themselves away from their traditional identity as “leading FPGA company” – an identity that put them squarely in a limited-growth position in a market expected to reach ~$10B by 2021 – and instead painting a picture of the company as a potential winner in a possible shakeup of the (ten times larger) data center server market (predicted to hit $90B by 2021) and in other high-visibility markets such as AI and automotive ADAS/AD. Finally, there is evidence of aggressive cost-cutting in the company, despite revenues being up. That sort of behavior is often characteristic of companies trying to inflate their profit margins in order to look more attractive to potential suitors.

In the coming months, it will be interesting to watch Xilinx for additional clues. One thing is certain: the company is playing its strategy and intentions very close to the vest.
