Staying Ahead of the Curve

Synopsys Upgrades HAPS

by Kevin Morris

Verification and test have always faced a tricky paradox: How do you build equipment to test and verify the biggest, fastest devices ever created?

After all, it stands to reason that the tester has to be faster than the thing it’s testing, and the prototype has to be bigger than the thing it’s prototyping. That means the people who build these tools must always be running ahead of the fastest runners just to keep up.

When prototyping large SoC designs, this issue has always been handled by throwing a wall of FPGAs at the problem. Even though that approach poses significant challenges - design partitioning and mapping the design into an FPGA-friendly form among them - it has been the most effective method available for getting a usable prototype up and working.

 

So Long and Thanks For All The Glue Logic

by Dick Selwood

Twenty-five years ago, FPGAs were the latest and greatest new thing. By replacing glue logic, they were going to speed up the design of systems, simplify bills of materials, and generally make life easier. Actel ran an ad with the headline “Idea at Breakfast – In Production by Dinner.” Over time, FPGAs have become bigger, faster, and more complex to design. And while they have not replaced ASICs and SoCs, something their advocates were predicting a few years ago, the number of ASIC and SoC design starts is certainly not growing at anywhere near the rate of FPGA design starts. Design systems have evolved into tool chains: instead of designing with schematics and dragging and dropping macros of a few gates each, FPGAs now need complex design systems with RTL – just like real ASICs and SoCs. And it is the time and complexity of design that is a potential Achilles’ heel for future FPGA growth.

But what are the alternatives to FPGAs? I think there are at least two: Xilinx’s Zynq (and the forthcoming Altera alternative) and the xCORE from XMOS.

 

Does “Open” Foster Innovation?

by Bryon Moyer

TSMC held their Open Innovation Platform (OIP) event not long ago. One of the keynote speakers was ARM’s Simon Segars, and he spoke about the benefits of openness, starting with the contrast between how closed the PC market is and how open the phone market has been.

[sound of needle ripping across vinyl]

Whoa, whoa, whoa… let’s play that one back, more slowly.

He showed a picture of a standard desktop PC box as an example of an extremely closed system and then a slide with all of the different phones on it as an example of openness.

 

The Future is Clear (ish)

Xilinx Discusses 20nm

by Kevin Morris

The two big FPGA companies want to be sure that you know they’re ahead.

They always have. It isn’t because you really needed to know, or because one or the other of them being ahead at any given time had any long-term, industry-shaping ramifications. It’s just that this myopic, tit-for-tat, red-vs.-blue, Hatfield-and-McCoy, be-the-first-to-blink behavior is, according to recent economic research, the optimal strategy for members of a symmetric pre-emptive duopoly.

Or, maybe both sides just really hate those other guys.

A few weeks ago, Altera announced their vision for FPGA technology on the upcoming 20nm node. Now, it’s Xilinx’s turn. Does this mean that Altera is two months ahead of Xilinx in the all-important “next process node”?

 

The Path to Acceleration

Altera Bets on OpenCL

by Kevin Morris

Every hardware designer knows that a von Neumann machine (a traditional processor) is a model of computational inefficiency. The primary design goal of the architecture is flexibility, with performance and power efficiency treated as afterthoughts. Every calculation requires fetching instructions and shuttling data back and forth between registers and memory in a sequential, Rube-Goldbergian fashion. There is absolutely nothing efficient about it.

Dataflow machines (and their relatives), however, are the polar opposite. With data streaming directly into and through parallel computational elements - all pre-arranged according to the algorithm being performed - calculations are done just about as quickly and efficiently as possible. Custom-designed hardware like this can perform calculations orders of magnitude faster and more power-efficiently than von Neumann processors.
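To make the contrast concrete, here’s a minimal OpenCL-style kernel - a generic “saxpy” (y = a*x + y) sketch, not anything from Altera’s materials. On a von Neumann processor, the body below is fetched and executed instruction by instruction; an OpenCL compiler targeting an FPGA can instead lay the multiply and the add down as fixed hardware and stream the data through:

    // Hypothetical OpenCL C kernel: y = a*x + y ("saxpy").
    // Each work-item handles one element, with no loop-carried
    // dependencies - exactly the property that lets an FPGA compiler
    // turn this into a dataflow pipeline with no instruction fetch.
    __kernel void saxpy(const float a,
                        __global const float *x,
                        __global float *y)
    {
        int i = get_global_id(0);   // index of this work-item
        y[i] = a * x[i] + y[i];     // one multiply-add per element
    }

Once such a pipeline fills, it can produce one result per clock - which is the efficiency argument for the dataflow arrangement in a nutshell.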

 

14 Nanometers and Counting

by Amelia Dalton

The next process node is coming faster and faster with every passing press release. This week, we’re taking a closer look at the brand-new 14nm test chip rolled out by Cadence, ARM, and IBM, and we’re looking into the new nanotube memory technology being developed by IMEC and Nantero. Speaking of breaking new ground, my guest this week is Brad Quinton (Tektronix), and we chat about the most recent developments in FPGA prototyping, what Brad sees as the biggest problems for FPGA prototyping today, and why embedded instrumentation can be more effective than physical instruments.

 

Tektronix Shakes Up Prototyping

Embedded Instrumentation Boosts Boards to Emulator Status

by Kevin Morris

FPGAs are clearly the go-to technology for prototyping large ASIC/SoC designs. Whether you’re custom-designing your own prototype, using an off-the-shelf prototyping board, or plunking down the really big bucks for a full-blown emulator, FPGAs are at the heart of the prototyping system. Their reprogrammability allows you to get hardware-speed performance out of your prototype orders of magnitude faster than simulation-based methods. If you’re trying to verify a complex SoC or write and debug software before the hardware is ready, there is really no option but an FPGA-based hardware prototype.

There are basically two options for FPGA-based prototyping - simple prototyping boards and emulators.

 

From Russia With Love

by Amelia Dalton

Saddle up, comrades, we're heading over to the mighty land of Russia. In our first story, we're delving into the sci-fi-esque details of the "2045 Initiative", examining how they plan on achieving human immortality by 2045, and investigating the inner workings of their first android prototype, "Alissa". Then, in another Russia-related story, we look into a new software-for-charity fundraiser launched by Excelsior Software and tell you how you can participate in the program. Also this week, I interview Rob Frissel (Atmel) about the most recent advances in touchscreen technology for laptops and notebooks, how LCD noise comes into play, and how soon we can use these new technological advances in our own designs.

 

The Whole Wide World

Opal Kelly Connects FPGAs to USB 3.0

by Kevin Morris

We often talk about your FPGA projects in these pages as if they were your whole universe. We know they’re not. Most often, your FPGA project is a small part of a bigger task, and the FPGA’s role ranges from the glue that sticks incompatible parts together to the system-on-chip at the core of your system. Since FPGAs are used in many small-volume and prototyping projects, we often use pre-made modules or even development boards for the FPGA portion of our design. That way, we don’t have the huge additional task of designing our own PCB.

In those types of projects, we’re often putting a big chunk of our functionality on a regular-old PC. It makes sense. There’s no point in designing custom hardware to do something that we can accomplish with a little bit of code on our laptop - and these days our laptop’s capabilities are impressive.

 

Analog-to-Digital-to-FPGA Gets a Boost

Analog Devices Supports JEDEC JESD204B

by Kevin Morris

We’ve yammered on a lot in these pages about how these newfangled FPGA whipper-snapper chips are neater’n dirt when it comes to crankin’ out a whole mess-o lickety-split figgerin’ faster’n you can say “Bob’s yer Uncle.” Yep, if you got something like that whatcha call digital signal processin’, they got them some-o them there DSP blocks that can do yer times-es, yer gozeintas, yer take-aways, and yer summin’. You just pile up the data and pump it in, and the FPGA will do the figgerin’ faster’n cuzin Winki can go through a stack-o flapjacks.

The problem, of course, with “cuzin Winki” eating “flapjacks” is that somebody has to prepare and serve them - and they need to be going at least as fast as “cuzin Winki” can eat. Before an FPGA can really shine on applications like signal processing, you have to be able to gather data (which is probably analog), convert it accurately to the digital domain, and somehow get it into your FPGA at a speed worthy of the FPGA’s considerable computational abilities.
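To put rough numbers on that bottleneck - a back-of-the-envelope sketch in C, where the 250 Msps, 14-bit converter and the 6.25 Gbps lane rate are illustrative assumptions rather than any specific Analog Devices part - JESD204B carries 8b/10b-encoded samples over high-speed serial lanes, so the wire rate is the raw payload times 10/8:

    #include <math.h>
    #include <stdio.h>

    /* Back-of-the-envelope JESD204B lane budget. The converter figures
     * (250 Msps, 14-bit samples packed into 16-bit words) and the
     * 6.25 Gbps lane rate are illustrative assumptions. */
    int main(void)
    {
        double fs_gsps   = 0.25;   /* ADC sample rate: 250 Msps        */
        double word_bits = 16.0;   /* 14-bit sample in a 16-bit word   */
        double lane_gbps = 6.25;   /* serial rate of one JESD204B lane */

        double payload = fs_gsps * word_bits;    /* raw data, Gbps     */
        double wire    = payload * 10.0 / 8.0;   /* 8b/10b overhead    */
        int    lanes   = (int)ceil(wire / lane_gbps);

        printf("payload %.1f Gbps, wire %.1f Gbps, lanes: %d\n",
               payload, wire, lanes);            /* 4.0, 5.0, 1        */
        return 0;
    }

Even this one modest converter fills most of a multi-gigabit serial lane - which is exactly why a standardized link between data converters and FPGAs is worth the trouble.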
