feature article

Legacy of Languages

Culture in Code

Using VHDL or Verilog to design FPGAs is just plain wrong.

Talk with any expert in languages, logic synthesis, or hardware architecture.  If you get past the “but that’s how we do it” layer of defenses, you’ll pretty quickly uncover a vast ocean of oddity that will make you wonder why anyone ever considered doing FPGA design with HDLs, let alone how HDL-based design became the de facto standard.

First, taking VHDL as an example: most of the things you can write in VHDL cannot be synthesized into hardware with any degree of efficiency.  In fact, to paraphrase one of my favorite college math professors, if you wrote all the possible VHDL on a big wall, and threw tomatoes at the wall all day long, you’d probably never hit any code that would synthesize well.  (This is a variant of what my professor called the “tomato theorem.”)  Those of us who successfully write VHDL for synthesis have learned through experience how to step on a select few hidden rocks just under the water’s surface in order to keep our footing through synthesis.  Deviate from those rocks, however, and you’re likely to take a dunk in code that won’t synthesize at all or will create hardware that’s impractically inefficient if not completely unbuildable.

Of course, VHDL wasn’t intended for designing hardware.  Yes, the D stands for “Description,” but in its original context, that “Description” was more like “Documentation.”  VHDL was intended to solve a documentation problem for the US Department of Defense.  The advent of application-specific chips had created a daunting documentation problem for both the DoD and its suppliers, and VHDL was developed to address that challenge.  Now, ASIC designs could be described in a standard, machine-readable language instead of through piles of ad-hoc paper documentation.

Once we had VHDL as a documentation language, people wanted to be able to execute it.  If you could execute your documentation, you could verify that it was functionally correct.  The first simulators were crude, clunky, and slow, and didn’t support a lot of the language.  If you wanted to simulate your documentation, you had to stay within the boundaries of what could be simulated using the technology of the time.  Eventually, of course, compiled code simulators came along and dramatically improved execution speeds, and the simulator capabilities expanded so that almost all of the language could be simulated.

Verilog didn’t start with quite the handicap of VHDL.  It was originally designed to be simulatable.  The Verilog simulator from Gateway Design Automation was a big hit with designers who didn’t want to mess with the mass of VHDL and who wanted clean, fast, language-based simulation of their logic designs.  Driven in part by both mandate and momentum, however, VHDL continued full-steam as well.  The designer population bifurcated, the lines were drawn, and by sometime in the late 1980s, the cultural seeds had been planted that grew into the situation we have today.

Once you have a language in which you can write a machine-readable description of your ASIC design and simulate it, the natural next step is to press that language into service as an implementation language, so you can use it to do design work directly.  This is where logic synthesis comes in.  The first logic synthesis tools for VHDL and Verilog could handle only a tiny subset of each language and, even then, only code written in a very specific structure.  Over time, logic synthesis tools expanded the allowable subset of the languages, the variation in structure they accepted, and the quality of the resulting logic.

Using an HDL as an implementation language locks down the semantics of the description itself.  Although both VHDL and Verilog are capable of capturing design intent or behavior to some degree, synthesizable descriptions are still relegated to little more than the description of structure.  Register transfer level (RTL) has been the de facto abstraction level for the description of synthesizable logic for close to two decades now.

What does all this have to do with FPGAs?  Well, FPGAs have a completely different logic structure from ASICs.  In using HDLs to synthesize ASIC designs, we at least have a pretty open field for using “normal” logic elements to complete our design.  In FPGAs, however, the fabric is based upon fixed-width look-up tables (LUTs).  When we start to synthesize HDLs for FPGAs, we’re shoe-horning the mapping of logic gates into some strange combination of LUTs and other elements.  Many of the assumptions that work for ASIC mapping have to be compromised to get reasonable results in FPGA fabric.

What we have, then, is a documentation language shoe-horned into a simulation language, shoe-horned into a synthesis language, and finally shoe-horned into an FPGA technology target.  Does that sound like a lot of shoe-horning?  It is.  This Rube-Goldbergian scheme has been our default design methodology for FPGAs for years now.  It is neither optimal nor justifiable for any reason other than that it takes advantage of technology we already had lying around the house, left over from our ASIC design projects.

Now, ESL comes in and promises to revolutionize our design methodologies with — what else?  — behavioral description in languages like C and C++.  Instead of laying down LUTs using languages designed for the documentation of ASIC designs, we will use a language that is a thin layer of abstraction over a von Neumann processor architecture – intended for ease-of-use in portably implementing operating system features in software.  Perhaps we are not headed in the right direction here?

Certainly, there are many good reasons to do hardware design in a language like C.  The number of people in the world already accomplished in C is far greater than the number of VHDL and Verilog masters combined, so based on the pool of trained designers alone, it has quite a compelling start.  Hardware is parallel in nature and C is sequential, so we have the ultra-non-trivial problem of generating parallel structures from sequential code.  However, people tend to think and develop algorithms sequentially most of the time anyway, so even if your compiler isn’t yet smart enough to do that conversion, you’ll be able to do it yourself.  If we were starting from scratch to come up with the perfect language to concisely describe the desired behavior of our hardware, however, I doubt any of us would re-invent C.

The thing is, design language choice, like spoken and written language choice, is far more cultural than logical.  We don’t analyze the various languages available for communication and choose to learn and speak the one that is able to most efficiently or effectively capture and express our ideas.  Certainly if that were the case, I’d never have chosen English.  Our choice of language is based on our culture and our history.  Design language is no different.  Engineering is a culture.  We’ll never stop and re-invent ourselves with a blank slate, crafting the perfect dialect for designing our masterpieces.  Instead, we’ll follow in the footsteps of history and convention and use the quaint and quirky communications tools that we’ve inherited from our peers — and have fun doing it.
