Every year, FPGA and Structured ASIC Journal conducts a survey of design teams that have recently completed projects using FPGAs. We collect and analyze a large volume of responses from readers regarding their completed projects, and we publish and sell a detailed report to companies that have a vested interest in gathering data about the current behaviors of FPGA design teams. This is nothing unusual, as many media companies perform similar research and offer similar studies to their customers. This time, however, we noticed one thing that was unusual. There seems to be a shift that … Read More → "Bundling Performance"
If all of this seems a bit confusing, you might want to re-read part one of this series – "Tyranny of the Metaphor" – where we discussed the problems with planning software projects using conventional methods like PERT charts and Gantt diagrams. This time, however, we're going to roll up our sleeves and start solving the problem one piece at a time. As with almost any good therapy, we need to look deep inside ourselves first. As a group, … Read More → "Tyranny Take Two"
In a traditional FPGA design flow, crafting the hardware architecture and writing VHDL or Verilog for RTL synthesis requires considerable effort. The code must follow a synthesis standard, meet timing, implement the interface specification, and function correctly. Given enough time, a design team is capable of meeting all these constraints. However, time is one thing that is always in short supply. Deadlines imposed by time-to-market pressures often force designers to compromise, settling for 'good enough' by re-using blocks and IP that are over-designed for their application.
Price, Performance, and Power – the three Ps of Moore's Law – have fueled four decades of technological fury. Each new process node brought us more gates per square millimeter of silicon, reducing price. Each shrink of the gate also brought us faster toggle rates, giving higher performance, and each narrowing also gave us the opportunity to operate at lower supply voltages, giving less dynamic power consumption. It seemed as if everything would improve exponentially forever.
Of course, nothing is free. There has always been another exponential curve at work as well – that … Read More → "More and Moore"
Extracting higher performance from today’s FPGA-based systems involves much more than just cranking up the clock rate. Typically, one must achieve a delicate balance between a complex set of performance requirements – I/O bandwidth, fabric logic, memory bandwidth, DSP and/or embedded processing performance – and critical constraints such as power restrictions, signal integrity and cost budgets. Moore’s Law notwithstanding, to maximize performance while maintaining this balance, the FPGA designer must look beyond the clock frequency altogether.
Overcoming Performance Bottlenecks
Each new generation of process technology brings with … Read More → "Need More Performance?"
As FPGAs grow faster and more powerful, our natural inclination is to scrape more and more functionality off our boards and cram it into our new, bigger FPGAs. It’s a strategy that makes good sense. Not only do we save board real estate, increase reliability, and cut bill of materials (BOM) cost, but we also usually improve our performance and, paradoxically, reduce our FPGA’s I/O requirements. In addition, we put more of our circuit into the “soft” arena, allowing future upgrades, patches, and variants to be made with only an FPGA bitstream … Read More → "Looking Inside"
Some embedded applications are much tougher, however. There are cases when we need to deliver copious amounts of computing power while remaining off the grid. Last week, at Supercomputing 2005 in Seattle, there was ample evidence of just such compute power gone mad. Gigantic racks of powerful processors pumped piles of data through blazing fast networks and onto enormous storage farms. The feel of the place was about as far from “embedded” as you can get, unless your idea of embedding somehow involves giant air-conditioners and 3-phase power.
Behind the huge storage clouds, teraflop racks, and … Read More → "Supercomputing To Go"
Massive racks of parallel processing Pentiums, Opterons, and Itaniums wasted watts at an unprecedented pace last week on the show floor at Supercomputing 2005 in Seattle. Teraflops, terabytes, and terrifying network bandwidths bombarded booth attendees looking for the last word in maximizing computational throughput. Convention center air conditioning worked overtime purging the byproducts of billions of bit manipulations per second as breaker boxes burst at the seams, straining to deliver adequate amperage to simultaneously power and cool what was probably the world’s largest temporary installation of high-performance computing equipment.
Meanwhile, the term "FPGA" … Read More → "Saving Supercomputing with FPGAs"
For over four decades, progress in processing power has ridden the crest of a massive breaker called Moore’s Law. We needed only to position our processing boards along the line of travel at the right time, and our software was continuously accelerated with almost no additional intervention from us. In fact, from a performance perspective, software technology itself has often gone backward – squandering seemingly abundant processing resources in exchange for faster development times and higher levels of programming abstraction.
Today, however, Moore’s Law may be finally washing up. Even though physics may … Read More → "Changing Waves"