feature article archive

Need More Performance?

Extracting higher performance from today’s FPGA-based systems involves much more than just cranking up the clock rate. Typically, one must achieve a delicate balance between a complex set of performance requirements – I/O bandwidth, fabric logic, memory bandwidth, DSP and/or embedded processing performance – and critical constraints such as power restrictions, signal integrity and cost budgets. Moore’s Law notwithstanding, to maximize performance while maintaining this balance, the FPGA designer must look beyond the clock frequency altogether.

Overcoming Performance Bottlenecks

Each new generation of process technology brings with … Read More → "Need More Performance?"
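
The thesis above, that performance comes from balance rather than raw clock rate, can be made concrete with a little arithmetic. Here is a minimal Python sketch (all throughput and width figures hypothetical, not from the article) showing how widening an FPGA datapath hits the same throughput target at a far lower clock, easing timing closure, power, and signal-integrity budgets alike.

    TARGET_GBPS = 12.8  # hypothetical streaming-throughput requirement

    def required_clock_mhz(datapath_bits):
        """Clock rate (MHz) needed to sustain TARGET_GBPS at a given width."""
        return TARGET_GBPS * 1e9 / datapath_bits / 1e6

    for width in (32, 64, 128, 256):
        print("%4d-bit datapath -> %6.1f MHz" % (width, required_clock_mhz(width)))

    # A 32-bit datapath needs 400 MHz; a 256-bit datapath needs only 50 MHz.
    # Same throughput, delivered by parallelism instead of clock frequency.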

Looking Inside

As FPGAs grow faster and more powerful, our natural inclination is to scrape more and more functionality off our boards and cram it into our new, bigger FPGAs. It’s a strategy that makes good sense. Not only do we save board real estate, increase reliability, and cut bill of materials (BOM) cost, but we also usually improve our performance and, paradoxically, reduce our FPGA’s I/O requirements. In addition, we put more of our circuit into the “soft” arena, allowing future upgrades, patches, and variants to be made with only an FPGA bitstream … Read More → "Looking Inside"

Supercomputing To Go

Some embedded applications are much tougher, however. There are cases when we need to deliver copious amounts of computing power while remaining off the grid. Last week, at Supercomputing 2005 in Seattle, there was ample evidence of just such compute power gone mad. Gigantic racks of powerful processors pumped piles of data through blazing fast networks and onto enormous storage farms. The feel of the place was about as far from “embedded” as you can get, unless your idea of embedding somehow involves giant air-conditioners and 3-phase power.

Behind the huge storage clouds, teraflop racks, and … Read More → "Supercomputing To Go"

Saving Supercomputing with FPGAs

Massive racks of parallel processing Pentiums, Opterons, and Itaniums wasted watts at an unprecedented pace last week on the show floor at Supercomputing 2005 in Seattle. Teraflops, terabytes, and terrifying network bandwidths bombarded booth attendees looking for the last word in maximizing computational throughput. Convention center air conditioning worked overtime purging the byproducts of billions of bit manipulations per second as breaker boxes burst at the seams, straining to deliver adequate amperage to simultaneously power and cool what was probably the world’s largest temporary installation of high-performance computing equipment.

Meanwhile, the term “FPGA” … Read More → "Saving Supercomputing with FPGAs"

Changing Waves

For over four decades, progress in processing power has ridden the crest of a massive breaker called Moore’s Law. We needed only to position our processing boards along the line of travel at the right time, and our software was continuously accelerated with almost no additional intervention from us. In fact, from a performance perspective, software technology itself has often gone backward – squandering seemingly abundant processing resources in exchange for faster development times and higher levels of programming abstraction.

Today, however, Moore’s Law may finally be washing up. Even though physics may … Read More → "Changing Waves"

Assemble All Ye IP

There are two levels of DSP design. First, there’s the conceptual level, where hard-core algorithm development rules the day. Your big concern here is the numerical correctness of your algorithm, but there’s no timing information or data typing to fret about. This is the comfort zone for the traditional DSP designer. You’re dealing with a problem from a purely mathematical point of view, using a procedural language like “M” in The MathWorks’ MATLAB, which is suited to untimed algorithms and mathematically friendly data types, letting you fine-tune your formula.

Read More → "Assemble All Ye IP"
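
To make the “conceptual level” concrete, here is a minimal sketch of an untimed, double-precision FIR filter, concerned only with numerical correctness. Python/NumPy stands in here for the MATLAB “M” style of algorithm prototyping the article names; the coefficients and test signal are hypothetical. Timing and fixed-point data types come later in the flow.

    import numpy as np

    def fir(x, coeffs):
        """Untimed FIR: pure convolution, floating-point math throughout."""
        return np.convolve(x, coeffs)[: len(x)]

    # Hypothetical 5-tap low-pass coefficients and a two-tone test signal.
    coeffs = np.array([0.1, 0.2, 0.4, 0.2, 0.1])
    t = np.arange(256)
    x = np.sin(2 * np.pi * t / 32) + 0.5 * np.sin(2 * np.pi * t / 4)

    y = fir(x, coeffs)
    print("output RMS:", np.sqrt(np.mean(y**2)))  # checking numerics only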

The Case for Hardware/Software Co-Verification

Large devices allow you to stuff a whole system into the FPGA, but debugging these complex systems with limited visibility – and a one-day turnaround for synthesis plus place and route – can consume weeks of your precious time.

Hardware/software co-verification has been successfully applied to complex ASIC designs for years. Now available to FPGA designers, this technology brings together the debug productivity of both a logic simulator and a software debugger. Co-verification enables you to remove synthesis and place and route from the design iteration loop while running up to 1,000 times faster than logic simulation. … Read More → "The Case for Hardware/Software Co-Verification"
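
As a thought experiment on the flow just described (not any vendor’s actual tool API), here is a minimal Python sketch of the central idea: the software under debug talks to its peripheral registers through a bus interface, and that interface can be backed by a fast behavioral model or by a proxy into a logic simulator, so synthesis and place and route drop out of the debug loop. Every class, register, and function name below is hypothetical.

    class BusModel:
        """Fast behavioral stand-in for the RTL: a plain register file."""
        def __init__(self):
            self.regs = {}
        def read(self, addr):
            return self.regs.get(addr, 0)
        def write(self, addr, value):
            self.regs[addr] = value & 0xFFFFFFFF

    CTRL, STATUS = 0x00, 0x04  # hypothetical peripheral register map

    def firmware_init(bus):
        """The 'software' side: runs unchanged against either backend."""
        bus.write(CTRL, 0x1)     # enable the peripheral
        return bus.read(STATUS)  # poll status

    print(firmware_init(BusModel()))
    # In a real co-verification flow, BusModel would be swapped for a proxy
    # that forwards read/write transactions to an HDL simulator over a
    # co-simulation link, with no change to the firmware under debug.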

Chillin’ with QuickLogic

Deep in the system designer’s psyche, the traditional truths of FPGAs are fused with non-volatile, metal-to-metal connections. FPGAs are expensive. FPGAs consume too much power. FPGAs in battery-powered consumer devices are a complete non-starter.

QuickLogic should guard their secret carefully – the one about their new PolarPro being an FPGA family. When designers of portable media players are looking for a device that can significantly increase the battery life of their next-generation units, FPGAs will likely be the last place they think to look. After all, FPGAs burn power like toasters. FPGAs are expensive. Nobody in his … Read More → "Chillin’ with QuickLogic"
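
To put the battery-life claim in perspective, here is a back-of-the-envelope sketch. All capacity and current figures are hypothetical illustrations, not PolarPro specifications; the point is simply that in an always-on portable device, static current draw dominates standby life.

    BATTERY_MAH = 1000.0  # hypothetical media-player battery capacity

    def standby_hours(standby_ma):
        """Hours of standby from the battery at a given average current."""
        return BATTERY_MAH / standby_ma

    for label, ma in (("SRAM FPGA drawing tens of mA static", 50.0),
                      ("low-static-power device, sub-mA", 0.5)):
        print("%-38s %8.0f hours" % (label, standby_hours(ma)))

    # Roughly 20 hours versus 2,000: a two-orders-of-magnitude difference
    # in standby life from static power alone, before any active-mode savings.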

The Case for Hardware/Software Co-Verification

Because development boards are readily available, many FPGA designers make the mistake of relying on them as their primary embedded processor debug and verification environment. Can you get the job done that way? Well, yes you can, but then you can also dig a trench with a teaspoon – if you have enough time.

Large devices allow you to stuff a whole system into the FPGA, but debugging these complex systems with limited visibility – and a one-day turnaround for synthesis plus place and route – can consume weeks of your precious time.

Hardware/software … Read More → "The Case for Hardware/Software Co-Verification"
