Ultra-Low Power Strikes Again

by Amelia Dalton

Show me an engineer who isn't worried about power consumption and I'll show you a bridge I'd love to sell you. Getting the power consumption down in your designs can be a tricky dance. From the semiconductor process, to the components you choose, to the embedded software you use, just about every decision you make affects your system power. In this episode of Fish Fry, we're talking to Gordon Hands (Director of Marketing - Lattice Semiconductor). Gordon and I chat about the world's smallest FPGAs and what ultra-low power devices can do. Also this week, I'm talking with Alf Bogen (CMO - Energy Micro) about how Energy Micro is making a name for itself in the low power business.

 

Mobile Drives Everything

The Quiet Shift in Semiconductor Conductors

by Kevin Morris

Nice job, folks! Over the past few decades, we electronic engineers have created social change so dramatic that previous discontinuities like the Renaissance, the Industrial Revolution, and two world wars pale in comparison. Nothing in human history can rival the technological progress that has been achieved in electronics and the impact of that progress on civilized life.

We go to work each day, month, and year - and our baseline assumption is exponential improvement. Think about that for a minute. We’ve all taken math (lots of it, in this profession). Exponentials in the real world are never sustainable. Yet, we work away in our jobs expecting the number of transistors on a chip to double every two years, just as surely as we expect the sun to rise in the morning. Biennial doubling of capability has been reduced to the status quo.

 

The Rise of MathWorks, the Fall of EDA

Two Routes Into FPGA Tools

by Kevin Morris

Since Einstein, we’ve come to realize that more and more things depend on relativity. This is true not just in physics, but also in more human arenas like marketing and sales. Our perception of something - FPGA tool prices, for example - might depend on whether we’re coming from an EDA background, which says that high-quality design tools cost tens to hundreds of thousands of dollars per license, or from a mass-market software background, which says that software goes for tens to hundreds of dollars a pop.

Both of these are valid perspectives. EDA companies have to charge what they do to fund the enormous engineering effort required to develop highly-sophisticated tools for a relatively small audience. Developing a good EDA tool is a much more complex undertaking than, say, the latest version of Angry Birds, and the cost of that complexity is amortized over an audience thousands of times smaller. The result - one piece of software might cost $5 and one might cost $500,000.

 

FPGA Tools, Marketing Malarkey Goggles, and More

by Amelia Dalton

FPGA Tools - can't live with them, can't design without them. This week's Fish Fry is about ending the first part of that meme and how Xilinx is hoping to make our design tool experience a whole bunch easier. My guest is Tim Vanevenhoven (Senior Marketing Manager - Xilinx), and we're going to chat about design abstraction, IP integration, and how the FPGA tool community is working together to provide more powerful, easier-to-use solutions. Also this week, I give a sneak preview of my previously top-secret, special-purpose augmented reality glasses - and I'll tell you how I plan to use them to cut through the marketing fog at the upcoming Design West show. Hmmm... Maybe I should do a Kickstarter project to get these guys into volume production...

 

Three Legged Stool

Process, Tools, and IP

by Kevin Morris

No matter what we’re trying to design these days, we depend on three fairly distinct elements to get our system, circuit, board, or chip working and ready for action. The target can be a single IP block, a subsystem-on-a-chip, a whole custom chip, a board, a box with many boards, or a complex system made up of many different major components. In each case, we need the supporting process technology, the right tools to do the design work, and the IP that allows us to add new and innovative things to our system without having to redesign the whole thing starting back from primordial goo.

If you design with FPGAs (as many of you do), all three of these elements are available from the vendor. The FPGA companies work hard to provide all of these key elements so their customers will get working chips done faster and reach the point of production-volume purchases, where the vendors start to earn their real money.

 

The Bell for Round Two

Xilinx Upgrades Vivado

by Kevin Morris

The big battle in FPGAs has traditionally been fought at the chip level. For years, we have endured press release skirmishes over who had 20% lower power or 10% more LUTs on their devices. FPGA companies’ boom and bust years hinged largely on who got to market first with next-process-node silicon. This Moore’s Law arms race has escalated for over two decades, with staggering costs. Today, if you don’t have a 9-figure sum to invest, you’re not going to have FPGAs on the next process node.

In parallel with the silicon race, however, another war has raged - albeit less visibly. Over the coming years, this quiet competition is more likely than the silicon itself to determine business success. This battle is over design tools.

 

The Tall Thin Engineer

Standing on the Shoulders of Giants

by Kevin Morris

Engineering is one of the very few professions that constantly re-engineer themselves. By doing our work well, we change forever the nature of the work remaining to be done. Once something has been designed (contrary to the apparent opinions of those with chronic NIH syndrome who insist on perpetually re-designing the wheel), it is designed, and it should not really need to be designed again.

Most engineering school curricula start us at a level of “bare metal.” We first study the basic underlying sciences - physics and chemistry, and the mathematics required to make it all work. The educational philosophy seems to be that we should have a conceptual grasp of the bare metal layer - electrons skipping happily down conductive pathways, frolicking playfully across N- and P- regions, and delivering their cumulative punch right where we need it.

 

Design 100G With Tabula’s ABAX2

Or, Take the Red Pill

by Kevin Morris

Let me tell you why you’re here. You’re here because you know something. What you know you can’t explain; you feel it. You’ve felt it your entire career - that there is something wrong with FPGA design. You don’t know what it is, but it’s there - like a splinter in your mind, driving you mad. It is this feeling that has brought you to this article. Do you know what I’m talking about? Tabula’s Spacetime architecture... do you want to know what it is? ... You take the blue pill, design some cool 100Gig gear with Tabula’s ABAX2 devices, and you believe whatever you want to believe. You take the red pill, you stay in Spacetime, and we talk about how deep the rabbit hole goes.

With apologies to “The Matrix,” sometimes a new product comes along that bends the brain a bit, challenges your built-in assumptions, and inspires new ways of thinking about old problems. Or, in the case of Tabula’s new ABAX2 devices - new ways of thinking about new problems, like “How the heck am I gonna get this 100Gbps packet processing application to work within my power and cost budget?”

 

Like Ant Man

Lattice’s Tiny FPGA Packs a Punch

by Kevin Morris

In a comic-book universe crowded with superheroes, Ant Man was a standout. While all the other characters were going bigger, faster, and stronger, Ant Man’s superpower was being small. Being small, it turns out, let him do things that no other crime fighter could accomplish.

Lattice Semiconductor just unveiled the world’s smallest FPGA. In a design domain where bigger and faster rule the day, where bragging rights in the programmable logic paradigm hinge on having the most pins, the biggest package, and the fastest SerDes, Lattice apparently took a page from the Marvel Comics playbook and went all Ant Man on us.

 

HLS versus OpenCL

Xilinx and Altera Square Off on the Future

by Kevin Morris

If you visit Xilinx and Altera these days and ask them about FPGA design methods above and beyond RTL, you’ll get very different answers. Xilinx will tell you they’re having great success with high-level synthesis (HLS). Altera will tell you that OpenCL is the wave of the future. Both sides make compelling arguments, which sound like they have nothing whatsoever in common. What does it all mean?

We all know that RTL design is tedious, complicated, and inefficient. We’ve known it for twenty years, in fact. To paraphrase Winston Churchill: RTL is the worst possible way to design electronics - except for all of the other ways that have been tried. (OK, and we know - Churchill was actually paraphrasing someone else. See? IP re-use works, even in politics!)
