
Methodology Melting Pot

Blending Design Domains for FPGAs

The first explorers came with Karnaugh maps and truth tables. Complex combinational functions could be packed into programmable logic devices more efficiently than into random logic parts or large, sparse ROMs. As these early PAL pioneers blazed trails into a new frontier of logic design, a culture of design methodology grew up around them, and the process was steadily refined by design automation tools and techniques tailored to their needs.
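
For readers who never lived through the PAL era, here is a minimal, hypothetical sketch of the kind of function being described: a two-of-three majority vote, reduced by hand from its truth table to sum-of-products form. The equation is the point; the Verilog wrapper is merely a convenient modern notation for what those designers expressed as fuse maps and product terms.

```verilog
// Hypothetical example of an early PAL workload: a 2-of-3 majority
// vote. A Karnaugh map reduces the eight-row truth table to the
// sum-of-products form y = a&b + a&c + b&c, which maps directly
// onto a PAL's AND/OR array.
module majority3 (
    input  wire a,
    input  wire b,
    input  wire c,
    output wire y
);
    assign y = (a & b) | (a & c) | (b & c);
endmodule
```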

Over time, programming PALs became less and less exclusive. The problems and pitfalls faced by early designers were known and solved, and automation techniques relegated programmable logic design to the newer, less experienced members of the design team. Grouping random and glue logic together and burning a programmable device moved closer and closer to the purview of interns and summer students.

Programmable logic companies then threw a wrench into the gears. As newer, more complex PLDs and CPLDs became available, the role of programmable logic in the design became more important, and the task of programming it became more demanding. Lead engineers and tool developers were once again pulled into the fray. A second generation of designers began implementing more complex functions using PLDs. Sequential design, state diagrams, and clocks became elements of concern in PLD design. Designers began to look for more abstract descriptions such as schematic diagrams and equations to express design specifications.
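
As an illustration of that second-generation shift, consider a hedged sketch of a small clocked state machine, the sort of design that was naturally specified with a state diagram rather than raw product terms. The module and signal names here are illustrative, not drawn from any particular design.

```verilog
// Hypothetical second-generation PLD task: a clocked state machine
// that detects the input sequence "1,0". Designers specified such
// logic with state diagrams and equations rather than fuse maps.
module seq10_detect (
    input  wire clk,
    input  wire rst,
    input  wire din,
    output reg  hit    // pulses for one cycle after "1,0" is seen
);
    localparam IDLE = 1'b0, GOT1 = 1'b1;
    reg state;

    always @(posedge clk) begin
        if (rst) begin
            state <= IDLE;
            hit   <= 1'b0;
        end else begin
            hit   <= (state == GOT1) && !din;  // registered output
            state <= din ? GOT1 : IDLE;        // track the last input bit
        end
    end
endmodule
```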

Again, over time, these processes became routine. PLDs were once again the lowbrow design assignment while the high-end talent was focused on ASIC, analog, and high-speed design tasks.

As FPGAs came onto the scene and quickly exploded in capability and complexity, a third wave of designers entered programmable logic design. These people knew acronyms like VHDL and Verilog (OK, Verilog really isn’t an acronym) and they weren’t afraid to use them. They also attracted the attention of visionaries who brought new EDA technology like synthesis and HDL simulation to bear on the FPGA problem.

Steady progress in FPGA speed, density, IP content, and connectivity increasingly attracted the attention of ASIC designers who had previously ignored the potential of programmability. As companies saw the advantages of moving their applications away from ASICs, the ranks of FPGA designers filled with ex-ASIC luminaries.

It is here that the community began to broaden in multiple directions rather than continuing along its historically linear evolutionary path. Specialized engineers from many disciplines began to see the applicability of FPGAs to their particular problems and to search for design methodologies that would fit. One at a time, their challenges forced changes and additions to the design process and mandated modifications to the core technology.

DSP designers noticed the opportunity for datapaths with a high degree of parallelism. If they were willing to endure the rigors of FPGA design, they could get many times the performance of a DSP processor using a high-performance FPGA. Vendors like Xilinx and Altera responded by including hardwired multipliers and multiply-accumulate (MAC) cells on some of their devices. Along with tool developers like AccelChip, they also began to smooth the path for the predominantly software-biased DSP crowd who were tentatively testing the FPGA design waters. Suddenly the path from MATLAB and Simulink became more important than the ability to support sophisticated HDL constructs.
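
A hedged sketch shows why those hardwired blocks matter: the multiply-accumulate is the inner loop of FIR filters and most other DSP kernels. Widths and names below are illustrative; synthesis tools will generally infer a device's hard multiplier or MAC/DSP block from code of this shape.

```verilog
// Hypothetical pipelined multiply-accumulate (MAC) datapath.
// Synthesis typically maps the multiply and the running sum onto
// a hardwired multiplier or MAC block; widths are illustrative.
module mac16 (
    input  wire               clk,
    input  wire               clr,   // clear the accumulator
    input  wire signed [15:0] a,
    input  wire signed [15:0] b,
    output reg  signed [39:0] acc
);
    reg signed [31:0] prod;          // pipeline register on the product

    always @(posedge clk) begin
        prod <= a * b;
        acc  <= clr ? 40'sd0 : acc + prod;
    end
endmodule
```

Replicating such a unit many times over in the fabric is where the many-times speedup over a sequential DSP processor comes from.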

Vendors like Actel and Xilinx were taking technology in another direction. Producing radiation-tolerant and extended-temperature versions of their devices, they began to cater to the high-reliability and hostile-environment crowd. Space designers in particular were elated to have an alternative to the exorbitant prices of single-digit-quantity ASIC designs. The priorities of these designers were unique, driving FPGA companies in new directions on device packaging, testing, and documentation.

At about the same time, innovative vendors started dropping processor cores (both hard-wired and soft IP) onto some of their FPGAs. This new addition, originally almost an afterthought to give microcontroller flexibility for sequential designs, was a small step for FPGA companies that required a giant leap in methodology. The introduction of processors into the IP mix meant that FPGA designs could now have software content, processor peripherals, busses, and a host of new design challenges.
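
To make that new hardware/software boundary concrete, here is a hedged sketch of a trivial memory-mapped peripheral with one control register and one status register. The bus signals are generic placeholders, not any particular vendor's interface.

```verilog
// Hypothetical memory-mapped peripheral: software on the embedded
// processor reads and writes these registers over the bus. Signal
// names are generic placeholders, not a specific vendor's bus.
module gpio_regs (
    input  wire        clk,
    input  wire        rst,
    input  wire        sel,       // peripheral selected by the bus
    input  wire        wr,        // write strobe
    input  wire        addr,      // 0 = control register, 1 = status
    input  wire [31:0] wdata,
    output reg  [31:0] rdata,
    input  wire [31:0] pins_in,   // external pins sampled as status
    output reg  [31:0] pins_out   // external pins driven by control
);
    always @(posedge clk) begin
        if (rst)
            pins_out <= 32'd0;
        else if (sel && wr && (addr == 1'b0))
            pins_out <= wdata;                        // software write
    end

    always @(*)
        rdata = (addr == 1'b0) ? pins_out : pins_in;  // software read
endmodule
```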

Embedded design with FPGAs required a completely new suite of design tools and methods that did not previously exist in the programmable logic community. Embedded debuggers, hardware/software co-design tools, platform development kits, and specialized IP blocks all had to be deployed in near-panic mode to keep up with the demands of design teams who found themselves baffled by the obstacles blocking their path to embedded platform bliss. Once again, an entirely new set of designers joined the community, this time bandying about terms like “RTOS” and “middleware.” This new breed of FPGA designer had a completely different vocabulary, repertoire, and motivation from the programmable logic people of old.

In an effort to expand their markets once again, FPGA companies began to focus on one of their primary weaknesses: device cost. ASICs had always maintained a commanding position in high-volume applications because of their huge unit-cost advantage over FPGAs. Launching new lines of cost-optimized devices, FPGA vendors attacked high-volume markets such as consumer electronics, PC peripherals, and automotive head-on. These low-price, high-volume customers wanted it all: absolute minimum cost, fast design cycles, field reprogrammability, and guaranteed delivery.

The crush of technology across the board was also having another effect: design domains were becoming increasingly specialized and diversified. The skills and expertise required of an effective communications-infrastructure designer, for example, were moving farther and farther from those demanded of an automotive telematics developer. From consumer electronics to satellites to storage systems, each design discipline developed its own specialized vocabulary, techniques, tools, and IP libraries. Semiconductor vendors, EDA companies, and IP suppliers have had to race to keep up with the demands.

As the cost-per-gate of FPGA technology plummeted and performance and feature-richness steadily climbed, more and more end applications found their way to programmable logic solutions. All of these groups converged in the pool of FPGA and CPLD consumers, and the result is a set of capabilities and methodologies that is unique in the industry. It will become increasingly rare to hear the FPGA design process compared with ASIC, DSP, or PCB-based system implementation. Programmable logic design has come of age; it now stands on its own two feet and writes its own rules. It is taking a unique place as perhaps the most important electronic design methodology of today, and its effects on the new systems and products of tomorrow will be profound indeed.

Who will be the FPGA designers of the future? If today’s demographics are any indication, more often than not they’ll be software-trained systems designers with a working knowledge of hardware concepts but little direct digital design expertise. Design tools that do the heavy lifting, converting high-level algorithmic and intent-based specifications into optimized implementation architectures, will minimize the need for black-belt logic designers. Increasingly effective domain-specific tools and design flows will shrink both the time-to-market and the training barrier for system designers in a wide variety of application domains. Companies with the vision to recognize these trends early and provide robust solutions serving this process will be well rewarded with positions as the industry leaders of the next generation.
