
Xilinx Tackles the “Diagonal”

Winning at FPGA is a Complex Business

Creating a successful new family of programmable devices is both easy and fun! You just guess which stuff to put on the chip, spend a few years and a couple hundred million dollars designing and building it, put all the software in place so people can design with it, and then, finally, push it out to the world and see if your “guess” turned out right. If it did? Yay! You win. If not, well, a career change is probably in order.

Xilinx has a track record of consistently creating the “right product” in programmable logic. That fact, more than execution or technological innovation, has kept the company in a strong #1 position in the industry for many years. Fueled by an intense rivalry with Altera, Xilinx has done an admirable job of the incredibly tricky balancing act of deciding what does – and doesn’t – go on the chip and in the supporting tools. Now, however, with FPGAs and programmable logic devices poised to break into vast new markets, will the company keep its mojo? And if so, how?

I’m going to come right out and say that programmable logic devices such as FPGAs and MPSoCs are probably the most difficult semiconductors to get right from a product-definition perspective. How difficult? Here’s the deal. If you think of a continuum from application-specific standard parts (ASSPs) to completely generic components such as processors, modern programmable logic devices fall somewhere in the middle.

If you’re in charge of product definition for an ASSP, your job is fairly well defined. You gather the specific requirements – what standards, sizes, speeds, and power consumption your application demands – and you’re done. If engineering builds to your requirements, you’re likely to hit very near the bullseye. The same is largely true of a processor. Chances are, you already have the instruction set defined, and the rest falls pretty neatly into place.

In programmable logic, your job is way, way more complicated. The programmable fabric is the easiest part – and even that isn’t easy. You decide how many LUTs you probably need, and then you have to decide what routing resources are required to balance that number of LUTs, allowing high utilization without scarfing up too much silicon area. This is a complex problem, usually attacked by running hundreds of real-world designs through your tool flow and tweaking the architecture for optimal routability and performance. But FPGA designers have been working on this problem for the past three decades, so they’re starting to get it pretty well dialed in.

Then there’s the IO. This is where the rubber starts to meet the road on application specificity. Different amounts and configurations of IO are required to meet the needs of varying applications. And different amounts of IO are needed to balance the geometry of the amount of core logic on the chip so as to utilize the silicon optimally. In order to get this one right, you need to survey the major applications that will use each device and start your planning from there. Yes, it’s still basically rocket science, but people have been launching rockets for decades. It’s doable.
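To get a feel for the math behind that balancing act, the classic back-of-the-envelope tool is Rent’s rule, T = t·g^p, which relates a block’s external terminal count to its internal logic-block count. A minimal sketch follows; the constants (and the function name) are illustrative assumptions, not real Xilinx device data:

```python
# Rent's rule: T = t * g**p, where T is the number of external
# terminals (IO), g the number of internal logic blocks (e.g. LUTs),
# t the average terminals per block, and p the Rent exponent.
# The default constants below are illustrative placeholders only.

def rent_io_estimate(num_luts: int, t: float = 4.0, p: float = 0.6) -> int:
    """Estimate IO terminal demand for a fabric with num_luts logic blocks."""
    return round(t * num_luts ** p)

for luts in (10_000, 100_000, 1_000_000):
    print(f"{luts:>9} LUTs -> ~{rent_io_estimate(luts)} IO terminals")
```

Note how the exponent p < 1 means IO demand grows more slowly than logic capacity – exactly why the IO ring and the core logic area must be balanced per device, not scaled together blindly.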

The tricky part is everything else. A modern programmable logic device has a wealth of resources that are not LUT fabric and IO, and the amount and type of each resource crosses over into the domain of marketing magic. These include memory of various sizes, speeds, and types; hardened DSP and arithmetic blocks; and processing subsystems – application, real-time, and graphics processors, plus microcontrollers and their associated peripherals. Then, we get into hardened IO blocks that specifically support various standards – flavors of DDR, PCIe, Ethernet – you get the picture. Oh – and these days, there may also be analog blocks, various types of crypto and security features, and other specialized goodies that need to be designed into the chip rather than left to the “soft” domain of programmable logic IP.

Then, you get into even more complicated questions, like how much should be done on a monolithic die versus integrated through a 2.5D packaging technique such as a silicon interposer. This question alone brings a host of issues – from effective yield to cost to reliability to thermal concerns to the advantages of having a single package with dice from different processes to… oh yeah, how many product variants can you mix and match at the packaging level without having to tape out a new chunk of silicon?
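The effective-yield side of that question can be illustrated with the standard Poisson defect model, Y = exp(−A·D): a big monolithic die and the same silicon split into smaller dice have the same raw yield, but with the small dice you can test each one before assembly and throw away only the bad ones. The defect density and die areas below are illustrative assumptions, not real process numbers:

```python
import math

# Poisson yield model: Y = exp(-A * D), die area A in cm^2,
# defect density D in defects/cm^2. All numbers are illustrative.
DEFECT_DENSITY = 0.25  # defects per cm^2 (assumed)

def die_yield(area_cm2: float) -> float:
    return math.exp(-area_cm2 * DEFECT_DENSITY)

def silicon_per_good_unit(total_area: float, num_dice: int) -> float:
    """Silicon area spent per good product, assuming each die is
    tested before assembly (known-good die) so bad dice are discarded
    individually rather than scrapping the whole product."""
    slice_area = total_area / num_dice
    return num_dice * slice_area / die_yield(slice_area)

mono = silicon_per_good_unit(8.0, 1)  # one 8 cm^2 monolithic die
quad = silicon_per_good_unit(8.0, 4)  # four 2 cm^2 dice on an interposer
print(f"monolithic: {mono:.1f} cm^2 of silicon per good unit")
print(f"4-die 2.5D: {quad:.1f} cm^2 per good unit (ignoring interposer and assembly cost)")
```

The sketch deliberately ignores the interposer itself, assembly yield, and test cost – the real trade-off the product planners face is exactly whether those costs eat the yield advantage.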

All of this is just for the silicon part of the product. And, in reality, a programmable logic product family probably succeeds or fails more on the strength of the tools and IP that accompany it than on what’s on the silicon itself. The entire menagerie of silicon, software, IP, and support services must perform a well-choreographed, synchronized dance that rings true with the intended audience of each targeted application area.

Now, with programmable devices aiming to score sockets that go far beyond the traditional boundaries of the FPGA market, there are vast new application domains to consider. Data center has a completely different set of requirements from automotive, which in turn is a whole different world from IoT edge devices or network infrastructure. Each application domain has its own specific requirements and speaks its own native language. 

How is all this managed in a real, successful programmable logic company? We sat down and chatted with Steve Glaser, senior vice president, corporate strategy and marketing group at Xilinx, to shed some light on the model Xilinx uses to come up with their particularly successful line of “All Programmable” offerings. Glaser talks about what he calls the “Diagonal” – a matrix that puts the requirements of each targeted application domain on one axis and the technical requirements for each product and service on the other.

In this scenario, a team may be responsible for understanding the needs of, say, the automotive market. That team talks with engineers from the automotive industry and understands the particular qualifications, standards, interfaces, constraints, and even terminology used by the designers in that sector. These people need to be experts in programmable logic, but they need to avoid speaking “Programmable-logic-ese.” Instead of discussing LUT counts, bitstreams, synthesis, place-and-route, and timing violations, they need to talk in terms like CAN bus, long product life cycles, SAE standards, ADAS, and infotainment.

The work of these teams should boil down to a set of requirements driving the creation of the core programmable logic devices. Automotive designers must have this, this, and this. They must NOT have that or that. It would be great if we could offer them this other thing.

The other axis of the Diagonal is populated by folks who focus on specific areas of the base technology. If I am working on the design of the DSP blocks, for example, I may be getting requirements from many different application areas. The goal is to find something akin to a least common denominator – a carefully designed compromise that meets at least 80%-90% of the needs of all of the targeted application segments, without any disqualifying shortcomings that would make it unusable by any of them.
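That 80%-90% target can be pictured as a simple coverage check across segments. Everything in this sketch – the segments, the feature names, the disqualifier, the threshold – is a hypothetical illustration, not Xilinx’s actual process:

```python
# Hypothetical requirements matrix: which features each application
# segment wants from, say, a hardened DSP block. All data invented.
SEGMENT_NEEDS = {
    "automotive": {"fixed_point", "safety_lockstep", "low_power"},
    "datacenter": {"fixed_point", "floating_point", "high_clock"},
    "iot_edge":   {"fixed_point", "low_power"},
}
# Flaws that disqualify a candidate for a segment outright,
# no matter how good its coverage is.
DISQUALIFIERS = {
    "automotive": {"no_ecc"},  # e.g. missing ECC kills the automotive socket
}

def evaluate(candidate_features: set[str], candidate_flaws: set[str]) -> dict:
    """Score a candidate block: per-segment coverage fraction, plus
    whether any segment is disqualified outright."""
    report = {}
    for segment, needs in SEGMENT_NEEDS.items():
        if candidate_flaws & DISQUALIFIERS.get(segment, set()):
            report[segment] = ("disqualified", 0.0)
            continue
        coverage = len(needs & candidate_features) / len(needs)
        report[segment] = ("ok" if coverage >= 0.8 else "short", coverage)
    return report

candidate = {"fixed_point", "floating_point", "low_power", "high_clock"}
for segment, (status, cov) in evaluate(candidate, set()).items():
    print(f"{segment:12s} {status:12s} {cov:.0%}")
```

In this toy run the candidate fully covers the data center and IoT segments but comes up short for automotive – exactly the kind of result that sends the DSP-block team back to negotiate which compromise the silicon can afford.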

This matrix-of-marketers arrangement works well for creating an offering that plays nicely in the various arenas where the company is trying to peddle its wares. But there is a new challenge emerging that Xilinx is also addressing. As FPGAs expand beyond the traditional FPGA-friendly markets, a “one-size-fits-all” design flow is no longer acceptable. There simply are not enough experts in RTL-based design floating around in all the industries that are adopting FPGAs to keep up with demand. And there is no single native language that is understood across all of the targeted industries.

Xilinx’s solution to this challenge has been to create multiple front-ends for the design flow. A front-end could encompass a design-creation language appropriate for the market, IP that addresses a large part of the domain-specific needs, reference designs for common applications in that market, and development kits tailored to the targeted types of applications.

We have seen this strategy from Xilinx in their “SDx” (Software-Defined “whatever”) announcements. The goal of the SDx environments is to allow “programmers” with little or no FPGA experience or expertise to be successful integrating Xilinx devices into their designs. In the best case, the SDx environment gets the engineer 80-90% of the way to their application goal quickly – perhaps right out of the box with a reference design and appropriate development kit. Then, the customization process is facilitated by a programming model that will feel native and familiar to the engineer (rather than forcing everyone into traditional RTL-based design).

To date, the company has released SDAccel, which helps deploy programmable logic devices as compute accelerators in applications such as data centers; SDNet, which facilitates the creation of software-defined networks; and SDSoC, which provides a familiar C/C++ programming environment for users of the company’s Zynq SoCs and MPSoCs. Each of these targets an application domain by engaging engineers in their own tribal language, rather than trying to bring a single FPGA design language to the entire world.

It will be interesting to see how Xilinx’s strategy plays out in the long term. Clearly, there is traction to be gained by producing devices that cleanly address key needs in many application sectors. Obviously, interacting with the engineers in each of those sectors on their own terms will enable and accelerate the penetration of programmable logic technology into new applications. The real key to success will be the long-term evolution of an ecosystem of third-party companies supporting the process. There is far too much breadth in today’s market for a single company to go it alone producing end-to-end solutions for all of them. Xilinx’s ability to nurture third parties to long-term success will be pivotal.

