
Moore for the Masses

Programmable Logic in Consumer Products?

Programmable logic devices such as FPGAs are bigger beneficiaries of Moore’s law than perhaps any other class of semiconductor device. One could, of course, argue that memories deserve that title. However, memories sit at the opposite end of the spectrum from FPGAs on sustainable price margins – memories are far on the commodity side, while FPGAs carry extraordinary margins due to the vendors’ lock on tool, IP, and design technology.

Looking deeper at these two technologies, it is interesting to see that memories have poured into the vast ocean of consumer devices, while FPGAs have not. With all the power that FPGAs bring to problem solving in electronic design, it is worth analyzing the role of this important technology in the highest-volume applications on earth – the ones all of us use in our everyday lives.

This week, at the Consumer Electronics Show (CES) in Las Vegas, thousands of exhibitors will show off their newest engineering creations. Looking across the spectrum of consumer devices, one sees an interesting divide. There are essentially two separate and distinct tiers of products, technologically speaking. The first, equipped with bleeding-edge technology and made possible by massive engineering investment and enormous volumes, is represented by products like Apple’s iPhone and iPad. The second is represented by devices using older-generation technologies and off-the-shelf components, stitched together in new and creative ways to realize features and capabilities not found in competing products.

At CES, curiously billed as the “Super Bowl of Electronics”, we find a very non-Super Bowl phenomenon. That is, we don’t see the biggest, most powerful teams showing up to play. Apple, for example – the single company that has arguably innovated the most in consumer electronics over the past decade – doesn’t have a presence. What we see instead is a plethora of the “other guys” – the companies that use commodity technologies in an attempt to duplicate and/or +1 the products created by the bona fide big guys.

It is in these sorts of applications that FPGAs should most definitely shine. FPGAs can bridge two off-the-shelf devices that wouldn’t otherwise talk to each other, add that one extra hardware feature that wasn’t available, or integrate those last several bits of junk that were cluttering up the board and running up the BOM.
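
As a concrete (if toy) illustration of that bridging role, here is a minimal Python model of one classic piece of glue logic: a width bridge that accepts 4-bit nibbles from one device and presents assembled 8-bit bytes to another. Every name here is hypothetical, and real glue logic would of course be written in an HDL and clocked in hardware – this is just a behavioral sketch.

    # Toy model of FPGA-style glue logic: a nibble-to-byte width bridge.
    # Editor's sketch -- a real bridge would be an HDL module, not Python.

    class NibbleToByteBridge:
        """Accepts 4-bit nibbles (low nibble first) and emits 8-bit bytes."""

        def __init__(self):
            self._pending = None  # low nibble waiting for its high half

        def push_nibble(self, nibble):
            """Feed one 4-bit value; returns a byte when a pair completes."""
            assert 0 <= nibble < 16, "nibble must fit in 4 bits"
            if self._pending is None:
                self._pending = nibble            # store the low half
                return None
            word = (nibble << 4) | self._pending  # assemble high | low
            self._pending = None
            return word

    bridge = NibbleToByteBridge()
    for n in (0xA, 0x5, 0x1, 0xF):
        out = bridge.push_nibble(n)
        if out is not None:
            print(f"assembled byte: 0x{out:02X}")  # 0x5A, then 0xF1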

So, why haven’t FPGAs taken off in consumer?

Sure, one can find the likes of Xilinx in the back room at CES – giving demos and briefings on the usual FPGA-related applications. But even at CES, you’ll find most of the FPGA demos are of bigger-iron applications like automotive telematics or the well-paved, tried-and-true display-management functions. Out on the big show floor, the presence of programmable logic is hard to detect.

Some of the most interesting inroads into the consumer market are not from the two biggest FPGA companies at all. As we’ve discussed before, companies like QuickLogic and SiliconBlue (recently acquired by Lattice Semiconductor) have been quietly making their living selling huge volumes of very cheap chips into consumer applications like personal media players, tablets, smartphones, digital cameras, GPS units, camcorders, and other consumer candy. These companies have focused their energy on developing devices that bring the power and flexibility of programmable logic to the battery-powered world. At the same time, they have steered clear of FPGA behemoths like Xilinx and Altera – whose low-cost devices are still one notch up the scale on power, unit cost, and overall system-integration cost.

The obvious reason that higher-end FPGAs don’t end up in many consumer-oriented devices is cost. The chips are just too expensive. Conventional wisdom holds that the logic inefficiency of programmable devices compared with full-custom chips translates into greater silicon area and, therefore, greater cost. However, this argument doesn’t really pass the smell test upon closer examination. The majority of the high cost of high-end FPGAs is margin. FPGA companies simply extract more money for each acre of silicon they produce. Don’t assume from this that the FPGA companies are fleecing us – rolling around in giant piles of cash while we struggle to get our designs to market. They obviously are not. The extra margin in FPGA silicon goes largely to cover the extraordinary cost of supporting design with these devices – with tools, IP, and services.
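
To make the area-versus-margin point concrete, consider a back-of-envelope model. Every number below is invented purely for illustration – none is an actual vendor figure:

    # Back-of-envelope die-cost model. All numbers are made up, purely to
    # illustrate the argument that margin, not silicon area, dominates the
    # price gap between an FPGA and a custom chip of similar function.

    wafer_cost = 5000.0     # $ per processed 300 mm wafer (assumed)
    dies_per_wafer = 600    # gross dies per wafer (assumed)
    yield_fraction = 0.80   # fraction of good dies (assumed)

    silicon_cost = wafer_cost / (dies_per_wafer * yield_fraction)

    area_penalty = 3.0      # assume FPGA logic needs ~3x the silicon area
    fpga_silicon_cost = silicon_cost * area_penalty

    asic_margin = 1.5       # assumed commodity-style markup
    fpga_margin = 6.0       # assumed markup covering tools, IP, and support

    print(f"custom-chip price: ${silicon_cost * asic_margin:6.2f}")
    print(f"FPGA price:        ${fpga_silicon_cost * fpga_margin:6.2f}")
    # Even with only a 3x area penalty, the modeled price gap is ~12x --
    # and most of it comes from margin, not from extra silicon.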

Much has been written about all the wonderful things that FPGA companies include “for free.”  We get free (or very dramatically reduced-cost) design tools, free IP, free help from AEs, free reference designs, subsidized design kits, fantastic service – and the price of all these “free” things is nicely amortized into the difference between what we pay for a chip and what the FPGA company pays to get that chunk of sand etched with their magic inscription.

FPGA companies grew up supporting massive numbers of low-volume designs, and it is very difficult to turn that infrastructure on its head to create a package suitable for small numbers of high-volume designs. Neither of the big FPGA companies wants to be the first to “blink” and compromise the margins and the tool and service expectations they’ve set with their existing customers over the decades, so they leave the “low end” of the market to niche players.

It is possible that new classes of devices like Altera’s new SoC FPGAs and Xilinx’s new Zynq may break this mold. Perhaps these chips will end up in more mass-market, consumer-class systems, and we’ll see an education of the programmable-logic-free masses that brings the kind of thinking required for programmable-hardware design into CES-class products. If so, it is possible that non-SoC FPGAs might become just a sideshow – relegated to the same classes of special-purpose applications where they started.

For now, however, programmable vendors are still trying tentatively to prove themselves outside their normal sandbox.  Time and time again they go back to reinforce their grip on their loyal base of core customers in industries like communications infrastructure, while continuing to dip their collective toes in various other markets like consumer electronics – just waiting to see if the water is warmer yet.

7 thoughts on “Moore for the Masses”

  1. Nope – no chance! Look at what you can get from TI for $25 in their “old” OMAP/DaVinci line: an A8 core AND a DSP – or how about the newer “Sitara” line? I use TI as a reference here because they are generally on the “high” end of the price range, compared to, say, Samsung or Broadcom, and thus more analogous to Xilinx/Altera. FPGA “SoC” devices STILL need to be configured, need external DDR DRAM to execute code from AND external boot flash. Oh, and they do NOT have a nice (cheap, easy to produce, simple) solution like PoP memory! You just KNOW those first Zynq parts will be >>$100 in low volumes. They’re simply not going to be price competitive with a “real” SoC (NVIDIA Tegra 3, anyone?) coupled to a low-cost (Spartan-6) FPGA – if you REALLY need an FPGA in your CE product. The problem (for FPGA vendors) is that you do NOT need an FPGA in your CE product! The latest generation of Samsung/NVIDIA/??? dual A9-core SoCs have NICE 3D graphics engines AND just about every interface on the planet. If you can’t talk directly to those SoCs (i.e., you think you need an FPGA for interfacing), you: a) are in a niche market with weird interfaces (military, high-end datacomm, etc.), or b) your system architecture is wrong.

    So Kevin, with your industry contacts, I suspect you probably know (or can find out) what Qty. 1000 of an X7Z010 (or 020) costs today. Care to enlighten us?

    Having worked for a CE device manufacturer looking at using FPGAs, I went through the price negotiation process with company-X. It was about like trying to buy a used car – only WORSE! No thanks!!! I’ll stick with a “real” processor from a “real” processor vendor (with VASTLY greater choice of both price and performance). TI’s new Concerto line of microcontroller+DSP (ARM M3 + ‘C2800) looks AMAZING for $18 in Qty. 10k. Oh, and it really IS an “SoC” because I don’t need external Flash OR DRAM! Beat THAT FPGA vendors…

    Anyway, it looks like Xilinx has FINALLY figured out what to do with all those old Triscend guys (remember them?).

  2. FPGAs for CE? Are they worth considering? Not a good choice for handheld devices. All the high-end FPGAs are expensive and power hungry. As simath said, “real” processors are good for CE. FPGAs are the company’s choice, not the consumer’s. FPGAs have their own space, and CE is not it.

  3. Why use expensive SoC-class FPGAs to perform very simple control functions?

    The existing digital design methods are: 1) random logic, 2) state machines, or 3) Turing-type machines (TMs). There hasn’t been any new method available, so the tried-and-true TM is the fall-back position, as it provides increased flexibility despite ever-increasing complexity. Software continues to be a problem, especially where electronics meets the real world, as it does in all CE. Just about no one needs to operate a toaster from a foreign city – but if there is a microprocessor aboard, well, why not? A ten-thousand-gate solution becomes twenty thousand or more. The big chip and software companies must love that attitude.

    The problem in using TMs for automation is one of concept. On every design occasion, one must conceive how to take a TM – a machine especially good at symbol storing and swapping, hence decryption (the original purpose of computing) – and make it monitor and control a real-world physical process. Contortions in thought and practise are mandatory! Complexity and mis-application of technology arise from taking an ordinary space-time problem (the CE device’s physical process) and generating a state-space solution for it that works. Because TMs and software operate strictly in the space domain, all temporal information must first be translated into space. The Boolean logic operations are all in the space domain, so after the answers are obtained, the results must be translated back into the real time domain for output.
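
    As a minimal sketch of that time-to-space translation – an editor’s illustration assuming a polled, processor-style design, not the commenter’s PDQ method – consider how software measures a pulse width: the temporal event survives only as stored counts.

        # Editor's sketch: a processor knows a pulse's duration only by
        # sampling the input and storing counts -- temporal information is
        # translated into space (memory) before any decision can be made.

        samples = [0, 0, 1, 1, 1, 1, 0, 0, 1, 1, 0]  # pretend periodic samples

        def pulse_widths(trace):
            """Return the length, in samples, of each high pulse."""
            widths, run = [], 0
            for s in trace:
                if s:
                    run += 1               # time accumulated as a count...
                elif run:
                    widths.append(run)     # ...then stored spatially
                    run = 0
            if run:
                widths.append(run)
            return widths

        print(pulse_widths(samples))  # [4, 2]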

    Why not use just enough logic (of the right kind) to perform the critical tasks?

    The solution is a new method – call it “PDQ” – in which operation is inherently parallel and there is no run-time software, so it acts in a safe and immediate manner. Control systems designed and implemented in PDQ have specifications, behavior, and hardware descriptions that are very similar to each other, and all are expressed in a common language (English, at present). PDQ design methods can generate faster, safer, more flexible, and simpler dynamic control systems for vehicles, factory automation, and consumer electronics. (The embedded systems for those uses employ 98% of the estimated ten billion microprocessors and derivatives manufactured each year.)

    Efficient PDQ will optimally use smaller and simpler types of FPGA, configured more for random logic and less for clock-registered data. Who to approach? Altera and Xilinx do not wish to make “popcorn,” even if programmable, but this innovation may well grow up to assimilate their business in time. QuickLogic and Lattice? They may not want to field a new product that undercuts existing lines. It might be perfect for a third-world country that wants to break into the biz. Suggestions?

  4. @CharlieM,

    I think having a category called “random logic” is a bit broad. It’s a little like saying there are three kinds of vehicles – horses, airplanes, and stuff with wheels. It skews the picture. There are a lot of distinct architectures and microarchitectures that would comprise what you’re calling “random logic” – not really random at all.

    Explain more about how “PDQ” would differ conceptually from current high-level synthesis targeting platforms like FPGAs. The algorithm is described in sequential terms (which is what is usually most natural for people) and the software performs resource allocation and scheduling in order to create a data path/controller/memory architecture to realize that algorithm – including introducing parallelism, pipelining, and resource sharing as appropriate for the design goals.
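
    As a toy illustration of that scheduling step – an editor’s sketch assuming a single shared multiplier, not any particular HLS tool’s algorithm – the following few lines assign dependent operations to clock cycles:

        # Toy HLS-style scheduler (editor's sketch): assign operations to
        # clock steps, respecting data dependencies and a limit of one
        # multiplier per cycle -- a vastly simplified version of the
        # resource allocation and scheduling real HLS tools perform.

        ops = {                    # op: (kind, [ops it depends on])
            "m1": ("mul", []),
            "m2": ("mul", []),
            "a1": ("add", ["m1", "m2"]),
            "m3": ("mul", ["a1"]),
        }

        scheduled, cycle = {}, 0
        while len(scheduled) < len(ops):
            muls_this_cycle = 0
            for name, (kind, deps) in ops.items():
                ready = name not in scheduled and all(
                    scheduled.get(d, cycle) < cycle for d in deps)
                if ready and (kind != "mul" or muls_this_cycle < 1):
                    scheduled[name] = cycle        # commit op to this step
                    muls_this_cycle += kind == "mul"
            cycle += 1

        print(scheduled)  # {'m1': 0, 'm2': 1, 'a1': 2, 'm3': 3}

    With two multipliers, m1 and m2 would share cycle 0 – the same trade of resources against latency that an HLS tool explores automatically.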

  5. In the distant past, “random logic” described a hand-crafted solution developed using SSI and MSI chips. Now of course such can be done in “sea-of-gates” FPGAs. It is that hand-crafted method to which I refer.

    Current design practices all presuppose data-processing methods in which sensors are sampled and stored, then state-machine operations or further instructions are performed on the data, which ultimately activates the outputs. Logic operations are limited to the space domain (the 16 possible Boolean functions of two operands, all buildable from AND and NOT, which can be performed via lookup tables in ROM) together with the time-to-space converter STORE. Operations are fundamentally linear-sequential, with resource sharing.
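
    Those sixteen two-input functions are easy to enumerate as 4-bit lookup tables – exactly the form a ROM or FPGA LUT stores. A short editor’s sketch:

        # Editor's illustration: every two-input Boolean function is a 4-bit
        # truth table -- one output bit per input combination (a, b) -- so
        # there are exactly 2**4 = 16 such functions, each realizable as a
        # lookup table.

        inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]

        def lut_eval(table, a, b):
            """Evaluate a 4-bit truth table at inputs a, b."""
            return (table >> (a * 2 + b)) & 1

        for table in range(16):
            outputs = [lut_eval(table, a, b) for a, b in inputs]
            print(f"function {table:2d}: truth table {outputs}")
        # table 8 is AND (only (1,1) -> 1); table 14 is OR; table 6 is XOR.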

    PDQ acts more like a stimulus-response biological machine in which the physical process events and conditions are more directly connected to the logic which directly activates the output effectors. Logic operations include those for the space (Boolean), time, and (joint) space-time domains numbering several times that of the conventional list of functions. PDQ is a mostly-hardware solution that can perform time-, safety-, and mission-critical functions for physical processes in comparatively few equivalent gates and no software. Separate hardware resources are designated for each logic function performed, which is how truly parallel-concurrent operation is obtained and sustained.

  6. According to definitions of “random logic” I’ve read online, PDQ belongs to the collection of semiconductor circuit design techniques that translate high-level logic descriptions directly into hardware features such as Boolean gates and arrangements of gates implementing temporal and spatio-temporal functions. The inclusion of those last two categories of functions is what especially differentiates PDQ from conventional methods.
