
FPGAs Supplant Processors and ASICs In Advanced Imaging Applications

Proponents of the Field Programmable Gate Array (FPGA) have fought for years to overcome the “stepping stone” mentality with which the traditionalist engineering community has viewed the device. Used primarily either as an ASIC prototyping platform or as a time-to-market stopgap until a processor-based or ASIC-based system can be produced, the FPGA has only begun to prove its worth as an end-product solution.

To some extent, the problem has been choosing the right battleground. FPGAs have a very specific set of value propositions which, taken together, allow them to supplant ASICs and processors in the right application. Fundamentally, the FPGA offers fast time to market, low design and manufacturing cost and risk, extremely high processing performance (especially in massively parallel processing applications), and, of course, configurability. These value propositions align closely with the requirements posed by a large and growing number of advanced imaging applications. Moreover, the growth of “FPGA Computing” and of off-the-shelf products designed for this market makes it easier for developers to adopt the technology.

Image processing – the right battlefield!

Image processing applications have traditionally pushed the data processing envelope, both in terms of the amount of data being processed and algorithmic complexity. Advances in image capture technology have recently fueled this requirement. While frame rates and resolutions are on an upward curve, the cost of the technology is falling – a potent combination, resulting in the need to process more pixels in less time.
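
To put that data-rate pressure in perspective, the following back-of-the-envelope sketch (in Python) computes the raw pixel throughput a processing system must sustain. The resolution, frame rate and bytes-per-pixel figures are illustrative assumptions, not numbers taken from this article.

```python
# Illustrative pixel-throughput calculation. The 1920x1080 resolution,
# 60 frames/s and 2 bytes/pixel below are assumed example figures,
# not data quoted in the article.

width, height = 1920, 1080      # frame resolution in pixels
frame_rate = 60                 # frames per second
bytes_per_pixel = 2             # e.g. a 16-bit monochrome sensor output

pixels_per_second = width * height * frame_rate
data_rate_mb_s = pixels_per_second * bytes_per_pixel / 1e6

print(f"{pixels_per_second / 1e6:.1f} Mpixels/s, {data_rate_mb_s:.0f} MB/s raw")
# ~124.4 Mpixels/s and ~249 MB/s of raw data, before any per-pixel
# processing has even been applied.
```

Every per-pixel operation an algorithm adds multiplies that figure, which is why rising resolutions and frame rates so quickly outpace conventional processors.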

Rising expectations and requirements from end-user markets are pushing processing technologies even further. As advances in technology move possibilities closer to reality, application developers keep pace by designing ever more complex algorithms. When combined, these three dynamics – amount of data, complexity and demand – place significant pressure on the processing technologies used in advanced imaging applications.

Data processing system designers for advanced imaging applications have traditionally employed ASICs, General Purpose Processors (GPPs), Digital Signal Processors (DSPs), or some combination thereof. However, each of these technologies has limitations and is – for different reasons – failing to meet the needs of application developers today.

The generally cost-effective GPP and DSP technologies have failed to keep up with the demand for processing performance posed by leading-edge image processing applications, ruling them out in many cases. Conversely, ASIC technology can easily meet the performance demands, but not the economic demands. ASICs are fine if you are developing for mass-market applications, but relative device costs, risk and development time are too high for ASICs to succeed in lower-volume and niche applications.

None of these technologies – ASICs, GPPs or DSPs – can provide the vital combination of technological and commercial viability required by many advanced imaging applications. It is in this “sweet spot” that the very specific set of value propositions offered by FPGAs has already begun to supplant ASICs and processors.

Two applications in which these traditional technologies have failed to measure up, and in which FPGAs have become an enabling technology, are Unmanned Aerial Vehicles (UAVs) and seismic image processing systems.

UAVs

UAVs are a significant growth area in the defense sector. The military effectiveness UAVs have demonstrated in recent conflicts has highlighted the benefits these vehicles offer. This success, coupled with the emergence of new possibilities for UAV utilization, is vigorously driving technology development for UAVs.

Due to the demanding real-time processing requirements of many of the new UAV applications under consideration, and the extraordinary size, weight and power (SWAP) constraints inherent to UAV design, developers are supplanting conventional processors with compact FPGA-based systems. These ultra-high-performance FPGA solutions deliver tens of Giga FLOPS (floating point operations per second) at a fraction of the SWAP budget required by equivalent systems using conventional processors.

Processing the high-bandwidth, real-time data associated with UAVs would require tens of processors using DSPs or GPPs. Moreover, due to the small size of these vehicles, numerous flight and mission systems must compete for space in the extremely restricted UAV payload and for finite on-board electrical power. Thus, performance density (Giga FLOPS per square inch) becomes an extraordinarily important metric, and one by which the FPGA has proven to have a decided advantage over traditional processors.

If the correct algorithms could be implemented using ASIC technology, it could easily satisfy the UAV's data processing and SWAP requirements. However, the majority of UAV applications in the aerospace and military sector are low-volume designs that use specialized algorithms, many of which are covered by a blanket of security restrictions. Although this decidedly disqualifies ASICs from a commercial perspective, the technological and economic surge in FPGA computing capabilities fits very neatly into this void.

Working with one of its customers in this field, Nallatech successfully ported an imaging application from a GPP-based processing platform to one using FPGAs. The new system delivered 36 Giga FLOPS (36 billion floating point calculations per second) – approximately 60 times the performance of the GPP-based system. However extraordinary this leap in performance may be, it is the performance density – the combination of SWAP and processing performance – that truly distinguishes this achievement. The application was run on a PC/104 stack measuring about 4.5 inches square and 8 inches high (about the size of three Big Macs stacked in their cartons).
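
As a rough illustration of the performance-density arithmetic, the sketch below (in Python) plugs in the figures quoted above. The 36 GFLOPS result and the 4.5-inch-square footprint come from this article; treating density as GFLOPS per square inch of footprint, and assuming a GPP system of the same footprint at one-sixtieth of the throughput, are simplifications made for this sketch.

```python
# Performance-density arithmetic using the figures quoted in the article.
# GFLOPS per square inch of board footprint is the metric assumed here.

def performance_density(gflops: float, width_in: float, depth_in: float) -> float:
    """Return sustained GFLOPS per square inch of board footprint."""
    return gflops / (width_in * depth_in)

# FPGA-based PC/104 stack: 36 GFLOPS on a ~4.5 in x 4.5 in footprint
fpga_stack = performance_density(36.0, 4.5, 4.5)

# Hypothetical GPP system of the same footprint at 1/60th the throughput
gpp_system = performance_density(36.0 / 60, 4.5, 4.5)

print(f"FPGA stack:     {fpga_stack:.2f} GFLOPS/sq in")   # ~1.78 GFLOPS/sq in
print(f"Equivalent GPP: {gpp_system:.2f} GFLOPS/sq in")   # ~0.03 GFLOPS/sq in
```

In practice the GPP alternative would not fit in the same footprint at all; as noted above, it would require tens of processors, which only widens the gap this simple calculation suggests.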

Seismic Exploration

The seismic industry is an insatiable consumer of data processing. The quantities of data regularly processed and the complexity of that processing consistently outstrip the capabilities of available processing solutions. The typical approach – the seemingly endless expansion of GPP-based cluster computing centers – is proving increasingly untenable as power consumption, heat removal and the need to situate these facilities in remote, harsh locations impede further progress.

A new type of seismic processing platform is required. Here again, ASICs can generally provide the required performance levels, but have been discounted as too expensive and inflexible for this application, whilst DSPs lack the raw processing efficiency. Many in the industry are therefore looking at FPGAs as the best of both worlds.

Initial efforts to port the Kirchhoff Time Migration algorithm – the algorithm responsible for the vast majority of CPU cycles consumed by seismic image processing systems today – to an FPGA-based platform show tremendous promise. This math-intensive algorithm converts raw recorded seismic time traces into something meaningful with which geologists can work. Besides its inherent complexity, the fact that the algorithm is repeated many billions of times for each dataset makes this application an ideal candidate for the development of a dedicated processing engine hosted on FPGA hardware.
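
For readers unfamiliar with the algorithm, the following is a minimal, textbook-style sketch (in Python/NumPy) of post-stack Kirchhoff time migration by constant-velocity diffraction summation. It is intended only to show why the operation count explodes across the nested loops over image positions, image times and input traces; it is not Nallatech's FPGA implementation, and the array names and the nearest-sample interpolation are assumptions made for clarity.

```python
# Minimal post-stack Kirchhoff time migration (constant-velocity diffraction
# summation). Illustrative only -- not the FPGA implementation described above.

import numpy as np

def kirchhoff_time_migration(traces, dt, dx, velocity):
    """traces: 2-D array (n_traces, n_samples) of zero-offset seismic data.
    dt: sample interval (s), dx: trace spacing (m), velocity: RMS velocity (m/s).
    Returns a migrated image with the same shape as the input."""
    n_traces, n_samples = traces.shape
    image = np.zeros_like(traces, dtype=float)
    x = np.arange(n_traces) * dx                    # surface positions
    trace_idx = np.arange(n_traces)

    for ix0 in range(n_traces):                     # output image position
        for it0 in range(n_samples):                # output two-way time
            t0 = it0 * dt
            # Diffraction hyperbola: t = sqrt(t0^2 + 4*(x - x0)^2 / v^2)
            t = np.sqrt(t0 ** 2 + 4.0 * (x - x[ix0]) ** 2 / velocity ** 2)
            it = np.rint(t / dt).astype(int)        # nearest input sample
            valid = it < n_samples
            # Sum input amplitudes along the hyperbola into this image point
            image[ix0, it0] = traces[trace_idx[valid], it[valid]].sum()
    return image
```

Even this stripped-down version performs on the order of n_traces² × n_samples summations, which is exactly the kind of regular, massively repeated arithmetic that maps naturally onto the parallel fabric of an FPGA.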

Initial results from an implementation utilizing a Nallatech platform show that a single accelerator card fitted in a single workstation can outperform 50 conventionally clustered workstations, while consuming little more than the power of one. Work is underway to develop three-card workstations that will deliver performance equivalent to nearly 150 workstations while consuming less power than two.

This vast leap in performance has the capacity to revolutionize the seismic industry, processing more data, with greater accuracy, than ever before. More importantly, by eliminating the spiraling power and cooling costs associated with huge cluster computing centers, these powerful FPGA Computing workstations redefine the economics of remote data collection and processing.
