It’s just semantics, right?
With the fast-paced evolution of electronic engineering, it’s difficult to maintain a context, a sense of perspective, a mental tourist map of the technological universe with a big red “You Are Here” star that helps us understand how everything relates to everything else. Humans have an instinctive need for situational awareness, and we crave some lynchpins to which we can make fast our psychological ships – preventing them from drifting aimlessly into the gray chaotic expanse of static noise.
One of the tricks that helps us keep our frame of reference is labeling. These devices are “MCUs,” these are “CPUs,” these are “Memories,” and these are “FPGAs.” Once we have a label for a thing, we can stereotype and generalize, abstracting away the specific details and gaining a higher sense of our environment – like climbing a tower to survey the surrounding terrain.
As we have traversed the five decades of exponential Moore’s Law madness, we have had to create new labels, leave some old labels behind, and morph others into things that better map to the newer sense of reality. The “hex inverter” and “quad NAND” have long since faded into obscurity, and the heavily overloaded “SoC” has emerged to describe just about anything more complicated than a multiplier.
But labels can be oppressors as well.
When a startup company establishes its identity with a new and novel technology, that helps immensely with the building of a brand. Google was a “search” company. Intel was a “processor” company, and Xilinx and Altera have always been “FPGA” companies. But a company whose identity is connected to a single technology for a span of decades faces a challenge as that technology evolves and changes. If the technology with which your company is identified becomes obsolete or evolves into something new, your identity and relevance can be swept away along with it.
FPGAs were originally “glue” logic devices, added into systems at the last minute by engineers who needed that one last thing to make the thingamajig talk to the whatchamacallit. For designers two decades ago, FPGAs were digital duct tape – components you quietly threw in at the last minute to patch up those one or two things you forgot about when you did your initial design.
Then, FPGAs found a higher purpose. The folks who design the network switches that power things like the internet and the telephone networks figured out that FPGAs were key enablers – the stars of the show – when it came to routing packets from point A to point B. They built boxes with hundreds of FPGAs, and their appetite for bigger, faster devices was insatiable. Cost was barely a factor, as FPGAs brought enormous value to systems that sold for top dollar as fast as suppliers could crank them out. Margins went off the charts, and “FPGA” morphed from a type of device into a “Market.” As a result, Xilinx and Altera became multi-billion-dollar enterprises in the slipstream of the Big Bandwidth Build-Out.
The next two decades saw the development and increasing maturity of the FPGA duopoly. Xilinx and Altera – firmly established as the two dominant leaders in “FPGAs” – battled for each individual point of market share in the networking business. It was a gold rush worthy of Silicon Valley – and the rivalry it bred has consistently been one of the most exciting in the entirety of the technology world.
But everyone knew that network infrastructure was not a bottomless well. FPGAs needed to diversify in order to survive and thrive. Recognizing this, both companies struggled to find new markets for their devices – striving to re-create the glory days of exponential growth that can come only from red-hot emerging applications. Their efforts to establish new beachheads for FPGAs met with moderate success. FPGA technology found loving and lasting homes in industrial, automotive, aerospace, and even consumer sockets. But none of these moved the needle when it came to supplanting networking as the dominant and driving application for programmable logic.
Now, the curse of maturity has set in. Investors have been lured away by bright shiny objects like social media, and semiconductor companies in general have lost their luster in the financial markets. FPGA companies might as well be walking around with big “Ignore Me” labels on their foreheads, and, in the unforgiving world of public corporations, that is not a healthy thing.
For a few years now, Xilinx has been whispering loudly that they wanted to change the status quo. Aware that being recognized only as an “FPGA” company was a major limitation, the company started hinting that it aspired to something larger, something more general and future-proof. When Xilinx launched Zynq (a family of devices that combined FPGA fabric and IO with conventional ARM-based processing subsystems), the “F-acronym” was nowhere to be seen. Even though it took Xilinx a while to figure out what to call the things, they were very careful never to use the term “FPGA” in conjunction with Zynq.
At about the same time, Xilinx coined the term “All Programmable” and began steadily and quietly stripping “FPGA” from their high-level marketing materials. The message was subtle but clear: the company wanted to be more than just an “FPGA” company. It wanted to evolve, establishing itself in a more favorable position for the future. But this had to be done with utmost care. When you’re the #1 FPGA company in the world, you don’t want to come right out and say “Hey World! We’re no longer an FPGA company.”
It’s just semantics, right?
Recently, I met with Steve Glaser, Senior VP of Corporate Strategy and Marketing, to discuss the company’s latest strategy and positioning. The new message is stronger than ever. Xilinx has a vision, and that vision is bigger than “FPGAs.” The company is struggling to break free from the bonds of its past, setting a course that it hopes will keep it ahead of the treacherous curve of technological progress.
The key element, of course, is programmability. It makes sense. With the cost of developing new chips growing rapidly, the idea of building custom, application-specific chips becomes untenable for all but the very highest-volume applications. Those who cannot afford to build custom chips must build their systems from standard parts. And, with the continuing trend of integration, standard parts must become more capable and more versatile in order to compete. That means programmability. Programmability is the reason that off-the-shelf SoCs have found homes in just about everything that uses electricity. Programmability is a panacea, and, with software setting the stage, you can build just about anything you can imagine with an SoC. Just about anything.
But there are some things that SoCs cannot do. When software doesn’t have the performance, power efficiency, or interface capability to do what you need, you’re back to pouring concrete – designing some sort of custom hardware to do the thing that can’t be done in software. This has always been the domain of FPGAs. But with the current state of the art in semiconductor technology, there is almost no reason to have a chip that is “just” an FPGA. We can have our SoC and eat it too. The “All Programmable” model and our software can define systems with a wide variety of processors and customized hardware performing a well-choreographed ballet with the optimal hardware/software architecture mix. CPUs, MPUs, GPUs, and FPGA-based accelerators can each take an appropriate share of the application load, and customized interfaces connect to the larger physical world.
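To make the partitioning concrete, here is a minimal sketch in C++ of the kind of split being described. The function name, frame size, and filter are hypothetical, invented purely for illustration – the point is that the regular, compute-heavy inner loop is the part a designer would carve out for FPGA fabric, while control and sequencing stay in software on the processor cores.

```cpp
// A minimal sketch of the hardware/software split described above.
// Everything here is ordinary software; filter_frame() is simply the
// kind of regular, data-parallel kernel that would be pushed into
// FPGA fabric, while main() stays on the ARM cores.
// Names and sizes are hypothetical, for illustration only.
#include <cstddef>
#include <cstdint>
#include <vector>

// Candidate for hardware acceleration: a 3-tap moving-average filter.
// Pure arithmetic, a fixed memory-access pattern, no OS dependencies.
void filter_frame(const uint16_t* in, uint16_t* out, std::size_t n) {
    for (std::size_t i = 1; i + 1 < n; ++i) {
        out[i] = static_cast<uint16_t>((in[i - 1] + in[i] + in[i + 1]) / 3);
    }
}

int main() {
    // Control flow, buffer management, and sequencing remain software
    // jobs; only the inner kernel is a candidate for the fabric.
    std::vector<uint16_t> in(1920 * 1080, 0), out(in.size(), 0);
    filter_frame(in.data(), out.data(), in.size());
    return 0;
}
```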
We have entered the realm of software-defined everything.
In this world, we have do-everything chips – chips with all the things we need for our application. Xilinx has given an example of what such chips may look like with its recently announced Zynq UltraScale+ family, with devices that have quad-core 64-bit CPUs, MCUs, GPUs, and FPGA fabric all on one die, along with a vast assortment of peripherals and memory and a wealth of flexible high-speed IO. Clearly, such a chip can do just about anything and everything. All we need to do is program it.
Yep, that’s all we need to do: Program. It.
In the “All Programmable” world of software-defined everything, the real challenge is coming up with the right tools, the right IP blocks, and the right languages to describe our system. It would be nice if our application could be specified in a way that was more or less hardware-agnostic, so that changes in the underlying chip architecture wouldn’t require a re-design of the application, and so that the application could evolve separately from the hardware that implements it.
This is the direction Xilinx is headed with their “SDx” (software-defined-whatever) series. Each new SDx attacks the problem of specifying a different type of system in software, from the venerable networking application to compute acceleration. The idea is to allow the designer to communicate intent in the most natural way possible for the application, and to provide a set of tools and IP that will help to realize that application using one of these do-everything chips in the most efficient way possible.
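As a taste of what “communicating intent” might look like, here is the same kind of kernel annotated for a high-level synthesis flow. The pragma spellings follow Vivado HLS conventions, but treat this as an illustrative sketch rather than a recipe – exact directive names, defaults, and interface options vary by tool and version.

```cpp
// A sketch of a C++ kernel annotated for an HLS-style flow. The
// designer states intent (one result per clock, AXI data movement),
// and the tool takes responsibility for placement, routing, and
// timing closure. Illustrative only; details vary by tool version.
#include <cstdint>

#define N 1024

void filter_frame_hls(const uint16_t in[N], uint16_t out[N]) {
#pragma HLS INTERFACE m_axi port=in   // move data over an AXI master
#pragma HLS INTERFACE m_axi port=out
    for (int i = 1; i < N - 1; ++i) {
#pragma HLS PIPELINE II=1             // ask for one result per cycle
        out[i] = static_cast<uint16_t>((in[i - 1] + in[i] + in[i + 1]) / 3);
    }
}
```

The attraction for software engineers is that this is still just C++: it compiles and runs on a workstation for functional testing, and the same source becomes a hardware specification when handed to the synthesis tool.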
Interestingly, Xilinx’s goals around this re-positioning are about users rather than revenue – a creative way of measuring diversity of applications. The company wants to “5x potential users in 5 years” – an ambitious goal, since it has taken over two decades to amass the current user base. In order to accomplish that, the company has to reach users who are not “FPGA experts.” In fact, it has to reach users who are not even hardware engineers. That’s where the software-defined-everything plan will succeed or fail – with Xilinx’s success in creating a design flow that enables software engineers to take full advantage of the formidable capabilities of this new class of super chips.
This software-defined “All Programmable” vision is what we will all be seeing from Xilinx now. It is a distinct departure from the “30% faster,” “27.6% more LUTs,” FPGA-centric superlative-storm marketing messages of the company’s past. Perhaps it is a sign of an industry and a company that are maturing – boiling themselves down to the essence of solving customer problems, rather than hawking the latest Popeil-esque wonder widget. It will be interesting to watch.
Right Idea.
Good idea, except for the natural market problem: the competition is Intel, AMD, Motorola, Freescale, Atmel, Microchip, and some two dozen other processor/SoC companies already in that market.
As soon as some major subset of those companies realizes that FPGA fabric attached to their CPU/cache/GPU is a necessary market-share-retention requirement, Xilinx will find itself back in the minor leagues.
And the markets are already buzzing about the Intel/FPGA/Altera stories.
Sounds like Xilinx is doing a “me too” after a decade of bashing the “C/software to logic/gates” camp and refusing to open up their tools to allow development of automated “compile and go” flows. It just ain’t going to happen with Verilog/VHDL plus manual placement and timing closure. Xilinx previously told the software guys … “go to *ell” – FPGAs are only for REAL hardware engineers.
@TotallyLost,
I agree on most points. The (major) exception is that (from previous discussions) you and I have a dramatically different view on the complexity of industrial-strength software-to-gates technology. I don’t think any of the companies you list can just throw FPGA fabric onto existing products and get anybody to use it. Look at the example of Intel’s “Stellarton” device that combined an Atom processor with an Altera FPGA in the same package. I’m pretty sure almost nobody used one of those. It wasn’t because the Atom processor wasn’t good, the Altera FPGA wasn’t good, or they weren’t connected well. It was probably because there was no clear tool/support infrastructure to help people design anything useful with the thing.
The barrier to entry into the FPGA market has always been tools. If you talk to Altera/Xilinx, you discover that they make a massive investment every year in tools. Anyone wanting to “just add some FPGA” to an SoC offering is going to have to develop a tool suite to match the resources that the big boys provide.
Actually, there is no need to add an FPGA to an SoC. Put some programmable or configurable IP block alongside the CPU(s) in the SoC, and you will have a solution that is much easier to use, better tested and supported, and optimized for area/cost/power/performance – instead of having to design all the logic yourself with an FPGA alongside a CPU and take the risk.