Why FPGAs Will Win
You probably remember the TV commercials. Two strangers randomly collide - co-mingling their confections in a fictitious fortuitous coincidence - giving the world the magic that is Reese’s Peanut Butter Cups. It’s a lie, but fact emerges from farce - chocolate and peanut butter make a very nice combination.
FPGA fabric and optimized circuit blocks make a very nice combination, too. Why settle for lower performance, lower density, and higher power consumption for the parts of your circuit that do not require the flexibility of FPGA fabric? Why pour concrete and lock down chunks of your design in hard-core cells that are likely to change or require multiple variants?
Several months ago, Cadence announced a strategy called EDA360. It was accompanied by a whitepaper that was eventually made public. It was, and is, billed as the brainchild of Cadence CMO John Bruggeman and was issued as a call to arms – a manifesto, even – to the EDA industry. And it caused a bit of a stir, some of it the kind Cadence would like, some less so.
Well, now that some time has passed to let everyone calm down, it seems like a good moment to come back to this topic in the hope that rational heads can prevail. What does this mean for Cadence and for the industry?
OK, that sounds way too generic. Here’s the real question people ask: is this just marketing hype, or is there some substance behind it? The question comes up because most of the examples Cadence gives of EDA360 in action point to products that existed long before EDA360, which almost makes the strategy look like a repositioning of existing products.
Fish Fry - November 12, 2010
In this week's Fish Fry, Amelia takes on ARM Devcon, the Ultimate EDA tool, Lattice Semiconductor's new MachXO2, 50 caliber flash drives, and more...
You know who you are. You’re one of the legions of ARM programmers, engineers, and developers. You made ARM the most popular 32-bit processor on the planet—eclipsing even Intel. You use an ARM-based cell phone, you listen to your ARM-based iPod, you spin up ARM-based disk drives… admit it. You’re part of the ARM army.
Well, good news, campers. The latest, greatest, fastest, most wonderful-est ARM processor in the world just got announced today. It’s the tippy-top of ARM’s broad family tree, surpassing even the multicore Cortex-A9. Behold the Cortex-A15. Look upon it and be amazed.
Okay, maybe the A15 isn’t that big a deal. Yes, it’s a sophisticated and advanced 32-bit processor design, and it’s clearly the best work that ARM has ever done. But to be honest… it’s a lot like other 32-bit designs from other CPU companies. The big deal is that it’s the most-advanced CPU from ARM. It’s just not the most-advanced CPU ever.
Or How to Make Your Giant SoC Look Like a 286
You take a bit of extra logic, tap into your JTAG infrastructure (pun intended), add some IP, and look into what’s happening with your FPGA. And you’d say, “Oh, that sounds like Altera’s SignalTap or Xilinx’s ChipScope.”
OK, so then say you add some logic to your ASIC, capture and compress a bunch of data, and decompress it on the way out. And you’d say, hey, that sounds sort of like DFT (Design for Test) technology. Sort of. Maybe. (With DFT, usually the stimulus, not the result, comes in compressed and is decompressed on-chip, but it has a similar feel.) Or you say, hey, that sounds like the debug infrastructure that ARM and MIPS provide.
OK, so say you can do both of those things across multiple FPGAs or ASICs. And you do it at the RTL level, pre-synthesis. And, unlike DFT, you can capture not just a single-cycle pass/fail result, but also a history for backtracking how a particular state was reached. And, unlike with the ARM and MIPS debug stuff, you’re debugging not just software, but hardware at any level.
Conventional economic theory has had a pretty tough couple of years. Markets didn’t behave like markets should have behaved. “Irrationality,” in an exuberant guise, toppled, or threatened to topple, some august institutions.
Of course, any time behavior starts to threaten orthodoxy, it gets explained away in some fashion that fits the orthodoxy for as long as possible. During the Great Depression, when contemporary economic theory didn’t allow for the existence of a depression, Hoover is said to have seen some bedraggled gentlemen selling old fruit by the side of the road as their only means of eking out a bit of coin and to have commented on the vibrancy of the economy that these entrepreneurial fellows proved.
Today the topic is markets. The meaning of free markets, fair markets, whether to ratchet down, regulate, rampage (with some voices shrieking with every possible limit on behavior, “You’re threatening innovation!!!”), and, of course, what the right thing to do is.
Mentor’s New Approach to Keeping the Hot from Getting Hotter
We don’t often stray from the realm of the electric in this space. But our electrical phenomena take place in physical materials, and we house them in other physical materials, and, as we ascend from the domain of the quantum, at some point the physical features get large enough that we can not only imagine them but actually see them.
And, with increasing attempts to use elaborate packaging schemes to achieve more than Moore legislated, we keep getting dragged back out of the electrical and into the mechanical.
We did that earlier this year with a look at a specialized tool for characterizing various standard packaging combinations, but that left open the general case where we’re building custom packaging. And, once you go custom, it more or less doesn’t matter whether you’re packaging an IC or a combination of ICs, mounting packages on a PC board, or housing PC boards on a chassis in a box.
At process nodes below 100 nanometers (nm), achieving yield ramp becomes both more critical and a greater challenge for semiconductor manufacturers. New manufacturing steps, materials, and device types, coupled with escalating process variations and a host of other challenges, continually increase the difficulty of device scaling. At the same time, growing market pressures are further squeezing already tight development and delivery windows. In this environment, there’s no room for error in yield learning; devices that don’t yield the first time spell doom for chipmakers, who must get to yield ramp as quickly as possible.
Developing methodologies that provide reliable workflows has helped process and yield engineers break down this challenge into manageable steps – one of the most important being detection of systematic failure mechanisms, such as scratching of wafers during chemical-mechanical planarization (CMP). These macro-scale failures are spatially localized and thus both easy to capture and relatively easy to correct using wafer inspection and statistical process control techniques. However, process steps are expected not merely to build up defect-free devices but to deliver parametric quality so that the devices built on the wafer will function as expected.
Pat Pistilli Shows How Problems are Solved
Anyone who’s gone from California to New York or from New York to California notices pretty quickly that they’re very different places. While California admits all shades of gray, New York is a pretty black-and-white place. It’s either right or wrong. Sensible or ridiculous. It comes from a sense of well-established simple truths. And simple truths lead to simple decisions, unencumbered by process and the win-win feel-good complications so much more evident in the Left Coast.
Sit down for five minutes with Pat Pistilli, and, even if he doesn’t tell you, you will recognize that he’s a product of New York.
Pat has been all over the news lately as recipient of this year’s prestigious Kaufman Award for achievement in EDA. His name is synonymous with the Design Automation Conference (DAC) that he founded and, ultimately, ran.
ADD Triggers a Crash Course on Smart Grids
After a while, all press releases start to look alike. So I’m not sure what it was that caught my eye, but there it was: “ADD Semiconductor is first technology provider to achieve PRIME official certification.” Maybe it was the fact that some company I didn’t know anything about was the first to be certified on a standard I didn’t know anything about, perverse as that might sound.
I was soon to learn that there were many other things I didn’t know anything about.
A quick look showed that this had to do with smart grids and smart meters. Now, smart meters are something of a hot topic in California, where we’re making the transition from the comfortable old rotating-disk meters to something more modern. And more scary. It’s not happening without a fight. Some people fear the wireless signals. Others don’t trust the power company not to cheat on the billing with the new technology. (Some early billing goof-ups didn’t help matters one bit.)
This further piqued my interest: what does it take for a chip to be compliant with a smart-grid standard?
You start off looking for something, and an hour later you have found lots of interesting things but not necessarily the one you wanted. Libraries are great for this, but the web is even better. What brought on this introspection is that I was trying to find something said by Steve Squyres, the lead scientist on the Mars Rover project. When he explains why Spirit and Opportunity have done so much better than the ninety days that was originally planned, he has a phrase about testing. While I couldn’t find it, and neither could I find a copy of his book, which is somewhere in this house, I did find another relevant quotation, “One test result is worth a thousand expert opinions”. In digging through the web, I found that this has been attributed to a range of people, but most authoritatively to Wernher von Braun, the rocket scientist. (Another of his quotes is “I aim for the stars, but sometimes I hit London.”)
The reason for seeking the quotation was that I have been learning more about testing completed silicon. In a nutshell, it is expensive and difficult. And if you are testing mixed signal devices, it is even more expensive and even more difficult.
The Mathworks Simplifies Transfers
A few years ago I took the train from San Diego back home to Sunnyvale. This actually involved three steps: a commuter train to LA, Amtrak to San Jose, and then a commuter train to Sunnyvale.
The train is a fabulous way to travel. However, it is also true that Amtrak is fabulous as long as you don’t have to be anywhere at any particular time. And, true to form, we stayed stationary in San Luis Obispo for a couple hours while they sorted out some crew problem.
I had planned the trip with lots of margin for error. But not quite enough. We arrived in San Jose six minutes after the last commuter train of the day had left.
I had even asked a conductor ahead of time whether any arrangements (or acceleration) might be possible to make the connection.
Traditionally, simulation-based dynamic verification techniques — think directed tests, constrained-random simulation and the like — have collectively been the workhorse of functional verification. But it's fair to say that this horse is straining under the load and in need of some relief. The source of this strain: increasing SoC integration and overall complexity. This, in turn, drives up verification costs. Software licenses, hardware servers and staff to set up and run regression environments all cost real money, which of course remains scarce given the tepid economy. Even with unlimited resources, due to that increasing design complexity, more and more code remains effectively beyond the reach of simulation.
Luckily, relief is available in the form of various static verification techniques. One such technique is formal verification, a systematic process of ensuring, through exhaustive algorithmic techniques, that a design implementation satisfies initial requirements and specs. Instead of stimulating the design to observe its behavior, formal verification relies on heavy doses of math to analyze all possible executions. Formal verification often works well with assertion-based verification. Once a specific behavior is captured with a property language such as PSL or SVA, formal verification can be used to crawl even into the darkest nooks and crannies of the code.
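To make the contrast concrete, here’s a toy sketch of the exhaustive-exploration idea in Python. Everything here is invented for illustration – a two-bit counter with a latching overflow flag standing in for a design, and a deliberately false property – but it shows the essential move: instead of simulating a few chosen input sequences, the checker walks every reachable state under every possible input and either proves the property or returns a counterexample.

```python
# Toy illustration of exhaustive (formal-style) property checking.
# The "design" is a hypothetical 2-bit counter with an overflow flag
# that latches when the count wraps around. All names are invented.

def step(state, enable):
    """One clock tick of the tiny FSM; state is (count, overflow_flag)."""
    count, flag = state
    if enable:
        count = (count + 1) % 4
        flag = flag or (count == 0)   # flag latches on wrap-around
    return (count, flag)

def check_all_reachable(prop, init=(0, False)):
    """Explore every reachable state under every input value --
    the essence of exhaustive model checking."""
    seen, worklist = {init}, [init]
    while worklist:
        state = worklist.pop()
        if not prop(state):
            return False, state            # counterexample found
        for enable in (False, True):       # branch on each possible input
            nxt = step(state, enable)
            if nxt not in seen:
                seen.add(nxt)
                worklist.append(nxt)
    return True, None

# A (deliberately wrong) property: "if the flag is set, the count is 0."
# The flag latches, so the search finds a violating state that a handful
# of directed simulation runs could easily miss.
ok, cex = check_all_reachable(lambda s: (not s[1]) or s[0] == 0)
```

Real formal tools, of course, work symbolically rather than enumerating states one by one, which is how they scale past toy examples like this.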
ModLyng Reinforces an Old Paradigm – With a Twist
We’ve noted here before how old-school analog designers are. While digital designers have moved up in abstraction layer by layer by layer, analog designers still do things the old way. They still draw schematics. They still push polygons. OK, they don’t cut rubylith anymore, but that’s about it.
That’s not to say that they’re stuck in the past; it’s just that no one has really given them a better alternative – or at least not one that’s natural and easy to use. So they’ve remained in their own world with their own tools and their own methods, largely owned by Cadence – conceded by all to be the master of analog.
Actually, no one seems to call it analog. It’s called “custom.” Seems like you could have custom digital circuits as well, but never mind. In this world, “custom” is synonymous with “analog.” Simply put, they’re the only ones that are going to concern themselves with the exact location and shape of each transistor.
“Things would be so much easier if I just knew what you wanted!”
A statement that may strike fear into the heart of many a spouse, this simple plea calls for something other than just charging ahead. It suggests a negotiated path as a better way. It suggests the opening of dialog.
Ironically, as hard as this can be for humans to achieve, it has become more and more common amongst our inanimate brethren. All the way back to the negotiation of baud rate between faxes (the original, “Can you hear me now?”), we have imbued machines with the ability to negotiate the terms of their operation.
Often the benefit of this is backwards compatibility, where newer machines may need to co-operate with older ones, and they have to figure out their highest common denominator. Or, when machines need to work together, but where various features are optional, they may need to declare which of the optional features have been implemented.
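The negotiation described above can be sketched in a few lines. This is a hypothetical illustration, not any real protocol: two peers each advertise the versions and optional features they implement, and they settle on the newest version both support plus the common feature set.

```python
# Hypothetical sketch of capability negotiation between two machines.
# The field names and feature strings are invented for illustration.

def negotiate(mine, theirs):
    """Pick the newest version both sides speak, plus shared options."""
    common_versions = set(mine["versions"]) & set(theirs["versions"])
    if not common_versions:
        raise ValueError("no mutually supported version")
    return {
        "version": max(common_versions),   # newest version both support
        "features": sorted(set(mine["features"]) & set(theirs["features"])),
    }

new_device = {"versions": [1, 2, 3], "features": ["compression", "encryption"]}
old_device = {"versions": [1, 2],    "features": ["compression"]}

agreed = negotiate(new_device, old_device)
# agreed == {"version": 2, "features": ["compression"]}
```

The newer device quietly gives up version 3 and encryption so the older one can participate at all, which is exactly the backwards-compatibility payoff the paragraph above describes.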