
Legacy of Languages

Culture in Code

Using VHDL or Verilog to design FPGAs is just plain wrong.

Talk with any expert in languages, logic synthesis, or hardware architecture.  If you get past the “but that’s how we do it” layer of defenses, you’ll pretty quickly uncover a vast ocean of oddity that will make you wonder why anyone ever considered doing FPGA design with HDLs, let alone how HDL-based design became the de facto standard.

First, taking VHDL as an example: most of what you can write in VHDL cannot be synthesized into hardware with any degree of efficiency.  In fact, to paraphrase one of my favorite college math professors, if you wrote all the possible VHDL on a big wall and threw tomatoes at it all day long, you’d probably never hit any code that would synthesize well.  (This is a variant of what my professor called the “tomato theorem.”)  Those of us who successfully write VHDL for synthesis have learned through experience how to step on a select few hidden rocks just under the water’s surface in order to keep our footing through synthesis.  Deviate from those rocks, however, and you’re likely to take a dunk in code that won’t synthesize at all, or that creates hardware that’s impractically inefficient if not completely unbuildable.
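To make those hidden rocks concrete, here is a minimal VHDL sketch (the entity and signal names are invented for illustration).  The first process follows the clocked-register template that synthesis tools reliably turn into flip-flops; the second, left commented out, is perfectly legal, simulatable VHDL that most synthesis tools will reject, because its simulation-time delay has no hardware meaning.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity hidden_rocks is
  port (
    clk : in  std_logic;
    rst : in  std_logic;
    d   : in  unsigned(7 downto 0);
    q   : out unsigned(7 downto 0)
  );
end entity hidden_rocks;

architecture rtl of hidden_rocks is
  signal q_reg : unsigned(7 downto 0) := (others => '0');
begin

  -- The rock: the clocked-process template every synthesis tool knows.
  -- This reliably becomes eight flip-flops fed by an incrementer.
  safe_path : process (clk)
  begin
    if rising_edge(clk) then
      if rst = '1' then
        q_reg <= (others => '0');
      else
        q_reg <= d + 1;
      end if;
    end if;
  end process safe_path;

  q <= q_reg;

  -- The dunk: legal, simulatable VHDL that steps off the rock.  The
  -- "wait for" delay means something to a simulator and nothing to
  -- hardware, so most synthesis tools will refuse a process like this.
  -- (Left commented out so the design above still synthesizes.)
  --
  -- risky_path : process
  -- begin
  --   q_reg <= d + 1 after 10 ns;
  --   wait for 10 ns;
  -- end process risky_path;

end architecture rtl;

The language happily accepts both styles; only experience tells you which one keeps your footing.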

Of course, VHDL wasn’t intended for designing hardware.  Yes, the D stands for “Description,” but in its original context, that “Description” was more like “Documentation.”  VHDL was developed to solve a documentation problem for the US Department of Defense: the advent of application-specific chips had created a daunting documentation burden for both the DoD and its suppliers.  With VHDL, ASIC designs could be described in a standard, machine-readable language instead of through piles of ad-hoc paper documentation.

Once we had VHDL as a documentation language, people wanted to be able to execute it.  If you could execute your documentation, you could verify that it was functionally correct.  The first simulators were crude, clunky, and slow, and didn’t support much of the language; if you wanted to simulate your documentation, you had to stay within the boundaries of what the technology of the time could handle.  Eventually, of course, compiled-code simulators came along and dramatically improved execution speeds, and simulator capabilities expanded until almost all of the language could be simulated.

Verilog didn’t start with quite the handicap of VHDL.  It was designed from the outset to be simulated.  The Verilog simulator from Gateway Design Automation was a big hit with designers who didn’t want to wrestle with the mass of VHDL and who wanted clean, fast, language-based simulation of their logic designs.  Driven in part by mandate and in part by momentum, however, VHDL continued full steam as well.  The designer population bifurcated, the lines were drawn, and by sometime in the late 1980s the cultural seeds were planted that landed us where we are today.

Once you have a language in which you can write a machine-readable description of your ASIC design and simulate it, the natural next step is to turn it into an implementation language so you can use it to do design work directly.  This is where logic synthesis comes in.  The first logic synthesis tools for VHDL and Verilog could take advantage of only a tiny subset of each language and, even then, only code written in a very specific structure.  Over time, synthesis tools expanded the allowable subset of the languages, the variation in structure they would accept, and the quality of the logic they produced.

Using an HDL as an implementation language locks down the semantics of the description itself.  Although both VHDL and Verilog are capable of capturing design intent or behavior to some degree, synthesizable descriptions are still relegated to little more than descriptions of structure.  Register transfer level (RTL) has been the de facto abstraction level for synthesizable logic for close to two decades now.

What does all this have to do with FPGAs?  Well, FPGAs have a completely different logic structure from ASICs.  When we use HDLs to synthesize ASIC designs, we at least have a pretty open field of “normal” logic elements with which to complete our design.  In FPGAs, however, the fabric is based on fixed-width look-up tables (LUTs).  When we synthesize HDLs for FPGAs, we’re shoe-horning the mapped logic gates into some strange combination of LUTs and other elements.  Many of the assumptions that work for ASIC mapping have to be compromised to get reasonable results in FPGA fabric.
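As a rough illustration (the entity below is invented, and I’m assuming a fabric of 4-input LUTs), consider a single line of six-input combinational logic.  Any single-output function of up to four inputs fits in one 4-input LUT, so this expression has to be carved across two cascaded LUTs, and how the mapper does the carving, rather than the nominal gate count, largely determines area and delay.

library ieee;
use ieee.std_logic_1164.all;

entity lut_squeeze is
  port (
    a, b, c, d, e, f : in  std_logic;
    y                : out std_logic
  );
end entity lut_squeeze;

architecture rtl of lut_squeeze is
begin
  -- Six inputs feed one output.  On a 4-input-LUT fabric this cannot fit
  -- in a single LUT: one LUT might compute (a and b and c), and a second
  -- LUT combines that result with d, e, and f, adding a level of logic
  -- and a hop through the routing fabric.
  y <= (a and b and c) or (d and e and f);
end architecture rtl;

An ASIC flow would simply pick a few standard cells for the same expression, which is why mapping assumptions tuned for gate libraries need rethinking for LUT fabric.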

What we have, then, is a documentation language shoe-horned into a simulation language, shoe-horned into a synthesis language, and finally shoe-horned into an FPGA technology target.  Does that sound like a lot of shoe-horning?  It is.  This Rube-Goldbergian scheme has been our default FPGA design methodology for years now.  It is neither optimal nor justifiable for any reason other than that it takes advantage of technology we already had lying around the house, left over from our ASIC design projects.

Now, ESL comes in and promises to revolutionize our design methodologies with — what else? — behavioral description in languages like C and C++.  Instead of laying down LUTs using languages designed for documenting ASIC designs, we will use a language that is a thin layer of abstraction over a von Neumann processor architecture, a language intended for ease of use in portably implementing operating system features in software.  Perhaps we are not going in the right direction here?

Certainly, there are many good reasons to do hardware design in a language like C.  The number of people in the world already accomplished in C is far greater than the number of VHDL and Verilog masters combined, so on the basis of trained designers alone, it has quite a compelling start.  Hardware is parallel in nature and C is sequential, though, so we have the ultra-non-trivial problem of generating parallel structures from sequential code.  Then again, people tend to think and develop algorithms sequentially most of the time anyway, so even if your compiler isn’t yet smart enough to do that conversion, you’ll be able to do it yourself.  If we were starting from scratch to come up with the perfect language to concisely describe the desired behavior of our hardware, however, I doubt any of us would re-invent C.
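To see what that sequential-to-parallel leap looks like in miniature (again with invented names), here is a sequential-looking loop written in VHDL itself.  A synthesis tool unrolls the four iterations into adders that all settle within one clock cycle, which is exactly the kind of transformation a C-to-hardware compiler has to perform, on far messier code, all by itself.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity sum4 is
  port (
    clk            : in  std_logic;
    x0, x1, x2, x3 : in  unsigned(7 downto 0);
    total          : out unsigned(9 downto 0)
  );
end entity sum4;

architecture rtl of sum4 is
  type vec_t is array (0 to 3) of unsigned(7 downto 0);
begin
  process (clk)
    variable xs  : vec_t;
    variable acc : unsigned(9 downto 0);
  begin
    if rising_edge(clk) then
      xs  := (x0, x1, x2, x3);
      acc := (others => '0');
      -- Written one step at a time, just as a C programmer would write it,
      -- but synthesis unrolls the loop into a chain of adders that are all
      -- evaluated within this single clock cycle.  A processor running the
      -- equivalent C loop would take one iteration per pass.
      for i in 0 to 3 loop
        acc := acc + resize(xs(i), 10);
      end loop;
      total <= acc;
    end if;
  end process;
end architecture rtl;

A four-element accumulation is the easy case, of course; the hard part, and the part ESL tools are really selling, is doing the same for pointer-chasing, control-heavy C.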

The thing is, design language choice, like spoken and written language choice, is far more cultural than logical.  We don’t analyze the various languages available for communication and choose to learn and speak the one that is able to most efficiently or effectively capture and express our ideas.  Certainly if that were the case, I’d never have chosen English.  Our choice of language is based on our culture and our history.  Design language is no different.  Engineering is a culture.  We’ll never stop and re-invent ourselves with a blank slate, crafting the perfect dialect for designing our masterpieces.  Instead, we’ll follow in the footsteps of history and convention and use the quaint and quirky communications tools that we’ve inherited from our peers — and have fun doing it.
