
Methodology Melting Pot

Blending Design Domains for FPGAs

The first explorers came with Karnaugh maps and truth tables. Complex combinational functions could be concentrated in programmable logic devices more efficiently than with random logic parts or large, sparse ROMs. As these early PAL pioneers blazed trails into a new frontier of logic design, a culture of design methodology grew around them, and the process refined itself with design automation tools and techniques tailored to their needs.
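For a flavor of that era, here is a hypothetical sketch of the kind of function those early designers minimized with Karnaugh maps and fitted into a PAL output. It is rendered in Verilog for readability rather than the equation-entry formats of the day, and the function and signal names are invented for illustration.

// Hypothetical: a two-product-term sum-of-products function of the
// sort reduced from a truth table with a Karnaugh map and fitted
// into a single PAL output. Signal names are illustrative only.
module pal_sop (
    input  wire a, b, c, d,
    output wire y
);
    // Minimized form: two product terms ORed together
    assign y = (a & ~b & c) | (~a & b & d);
endmodule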

Over time, programming PALs became less and less exclusive. The problems and pitfalls faced by early designers were known and solved, and automation techniques relegated programmable logic design to the newer, less experienced members of the design team. Grouping random and glue logic together and burning a programmable device moved closer and closer to the purview of interns and summer students.

Programmable logic companies then threw a wrench into the gears. As newer, more complex PLDs and CPLDs became available, the role of programmable logic in the design became more important, and the task of programming it became more demanding. Lead engineers and tool developers were once again pulled into the fray. A second generation of designers began implementing more complex functions using PLDs. Sequential design, state diagrams, and clocks became elements of concern in PLD design. Designers began to look for more abstract descriptions such as schematic diagrams and equations to express design specifications.
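To give a concrete flavor of that shift toward sequential, clocked design, here is a hypothetical two-state handshake machine. It is written in Verilog for readability, although designers of that era typically used equation-based entry; every name in it is invented for illustration.

// Hypothetical sketch of the registered, clocked design style that
// second-generation PLD work introduced: a two-state handshake FSM.
module handshake_fsm (
    input  wire clk, rst, req,
    output reg  ack
);
    localparam IDLE = 1'b0, BUSY = 1'b1;
    reg state;

    // Synchronous reset; one state register, one registered output.
    always @(posedge clk) begin
        if (rst) begin
            state <= IDLE;
            ack   <= 1'b0;
        end else begin
            case (state)
                IDLE: if (req)  begin state <= BUSY; ack <= 1'b1; end
                BUSY: if (!req) begin state <= IDLE; ack <= 1'b0; end
            endcase
        end
    end
endmodule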

Again, over time, these processes became routine. PLDs were once again the lowbrow design assignment while the high-end talent was focused on ASIC, analog, and high-speed design tasks.

As FPGAs came onto the scene and quickly exploded in capability and complexity, a third wave of designers entered programmable logic design. These people knew acronyms like VHDL and Verilog (OK, Verilog really isn’t an acronym) and they weren’t afraid to use them. They also attracted the attention of visionaries who brought new EDA technology like synthesis and HDL simulation to bear on the FPGA problem.

Steady progress in FPGA speed, density, IP content, and connectivity increasingly attracted the attention of ASIC designers who had previously ignored the potential of programmability. As companies saw the advantages of moving their applications away from ASICs, the ranks of FPGA designers filled with ex-ASIC luminaries.

It is here that the community began to broaden in multiple directions rather than continuing along its historically linear evolutionary path. Specialized engineers from many disciplines began to see the applicability of FPGAs to their particular problems and to search for design methodologies that would fit. One at a time, their challenges forced changes and additions to the design process and mandated modifications to the core technology.

DSP designers noticed the opportunity for datapaths with a high degree of parallelism. If they were willing to endure the rigors of FPGA design, they could get many times the performance of a DSP processor using a high-performance FPGA. Vendors like Xilinx and Altera responded by including hardwired multipliers and multiply-accumulate (MAC) cells on some of their devices. Along with tool developers like AccelChip, they also began to smooth the path for the predominantly software-biased DSP crowd who were tentatively testing the FPGA design waters. Suddenly the path from MATLAB and Simulink became more important than the ability to support sophisticated HDL constructs.
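A minimal sketch of what that parallelism looks like appears below, assuming a four-tap multiply-accumulate with invented widths and names; a synthesis tool would typically map the four multiplies onto the hardwired multiplier or MAC blocks described above.

// Hypothetical four-tap MAC datapath. The four multiplies are
// independent, so they can occupy four hardwired multiplier blocks
// and run in parallel: the source of the speedup over a sequential
// DSP processor. Widths and names are illustrative only.
module parallel_mac (
    input  wire               clk,
    input  wire signed [15:0] s0, s1, s2, s3,  // input samples
    input  wire signed [15:0] c0, c1, c2, c3,  // filter coefficients
    output reg  signed [33:0] acc
);
    always @(posedge clk)
        acc <= (s0 * c0) + (s1 * c1) + (s2 * c2) + (s3 * c3);
endmodule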

Vendors like Actel and Xilinx were taking technology in another direction. Producing radiation-tolerant and extended-temperature versions of their devices, they began to cater to the high-reliability and hostile-environment crowd. Space designers in particular were elated to have an alternative to the exorbitant prices of single-digit-quantity ASIC designs. The priorities of these designers were unique, driving FPGA companies in new directions on device packaging, testing, and documentation.

At about the same time, innovative vendors started dropping processor cores (both hard-wired and soft IP) onto some of their FPGAs. This new addition, originally almost an afterthought to give microcontroller flexibility for sequential designs, was a small step for FPGA companies that required a giant leap in methodology. The introduction of processors into the IP mix meant that FPGA designs could now have software content, processor peripherals, buses, and a host of new design challenges.

Embedded design with FPGAs required a completely new suite of design tools and methods that did not previously exist in the programmable logic community. Embedded debuggers, hardware/software co-design tools, platform development kits, and specialized IP blocks all had to be deployed in near-panic mode to keep up with the demands of design teams who found themselves baffled by the obstacles blocking their path to embedded platform bliss. Once again, an entirely new set of designers joined the community, this time bandying about terms like “RTOS” and “middleware.” This new breed of FPGA designer had a completely different vocabulary, repertoire, and motivation than the programmable logic people of old.
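As a hypothetical illustration of where hardware and software now meet, the sketch below shows a memory-mapped control register that an on-chip processor core could read and write. The bus signals are generic inventions for illustration, not any vendor's actual processor-bus interface.

// Hypothetical hardware side of hardware/software co-design: a
// memory-mapped control register visible to an on-chip processor.
module ctrl_reg (
    input  wire        clk, rst,
    input  wire        sel, wr,    // peripheral select and write strobe
    input  wire [31:0] wdata,
    output wire [31:0] rdata,
    output wire        enable_out  // control bit exported to the logic
);
    reg [31:0] ctrl;

    always @(posedge clk) begin
        if (rst)
            ctrl <= 32'h0;
        else if (sel && wr)
            ctrl <= wdata;          // software writes land here
    end

    assign rdata      = ctrl;       // software reads the same register
    assign enable_out = ctrl[0];
endmodule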

In an effort to expand their markets once again, FPGA companies began to focus on one of their primary weaknesses – device cost. ASICs had always held a commanding position in high-volume applications because of their huge unit-cost advantage over FPGAs. Launching new lines of cost-optimized devices, FPGA vendors attacked high-volume markets such as consumer electronics, PC peripherals, and automotive head-on. Low-price, high-volume customers wanted it all. They needed absolute minimum cost, fast design cycles, field reprogrammability, and guaranteed delivery capabilities.

The crush of technology across the board was also having another effect. Design domains were becoming increasingly specialized and diversified. The skills and expertise required to be an effective communications infrastructure designer, for example, were moving farther and farther from those demanded of automotive telematics development. From consumer electronics to satellites to storage systems, each design discipline developed its own specialized vocabulary, techniques, tools, and IP libraries. Semiconductor vendors, EDA companies, and IP suppliers had to race to keep up with the demands.

As the cost-per-gate of FPGA technology plummeted and performance and feature-richness steadily climbed, more and more end applications found their way to programmable logic solutions. All of these groups converged in the pool of FPGA and CPLD consumers, and the result is a set of capabilities and methodologies that is unique in the industry. It will become increasingly rare to hear a comparison of the FPGA design process with ASIC, DSP, or PCB-based system implementation. Programmable logic design has come of age; it now stands on its own and writes its own rules. It is taking a unique place as perhaps the most important electronic design methodology of today, and the effects on the new systems and products of tomorrow will be profound indeed.

Who will be the FPGA designers of the future? If today’s demographics are any indication, more often than not they’ll be software-trained systems designers with a working knowledge of hardware concepts, but little direct digital design expertise. Design tools that do the heavy lifting by converting high-level algorithmic and intent-based specifications into optimized implementation architectures will minimize the need for black-belt logic designers. Increasingly effective domain-specific tools and design flows will reduce both time-to-market and the training barrier for system designers in a wide variety of application domains. Companies with the vision to recognize the trends early and provide robust solutions serving this process will be well rewarded with positions as the industry leaders of the next generation.
