
HDL is Dead

Long Live HDL

The mid-term exam was straightforward.  We were given five specifications, and we had to write a program for each.  Each program would be worth 20 points, for a total of 100 on the exam.  The exam was 40% of our final grade in the class.  Oh, I almost forgot – the language was Motorola 6809 assembly, and we had to hand-assemble the programs and submit machine code.  The exam was on paper (in a blue book), and no computers or calculators could be used.


As I was taking the exam, it occurred to me that the only thing harder than taking this test would be grading it.  Seriously!  Would a graduate student working as a teaching assistant stay up for several days straight, staring bleary-eyed at rows and columns of op codes, trying to figure out whether each program did what the specification asked, on exam after exam after exam – with five programs on each?


It turns out the answer was “no.”


I was one of the best students in the class.  I had done a lot of microprocessor programming on my own, and I was confident in my skills.  The exam programs were straightforward, and I crafted (modestly speaking, of course) very elegant solutions for all five of them.  I finished first, though still five minutes past the allotted time, and I couldn’t imagine how some of the less experienced students could possibly have finished.  Still, I was very confident when I handed in my exam.


Two weeks later, the tests came back – the professor handed them out unceremoniously and without comment.  I opened the blue book and found my grade… “20%”


What?


Looking more closely, I saw that four of my five programs were marked with a big red “X” and a “-20.”  One was marked with a check mark and a “20.”  Well, that’s straightforward enough.  There was no other notation on the exam – no hint of why four of the five programs were marked wrong, no partial credit, nothing.


I was upset, to say the least, but I wanted to have my facts straight first.  I walked down to the lab and entered each of my five programs (even the correct one) on a development system.  They all worked perfectly.  I carefully re-read the questions/specifications and couldn’t find anything I’d missed.  I decided I was ready for the trip to the professor’s office.


I made an appointment to see the professor.  I arrived early and waited outside his office.  He arrived about twenty minutes late.  He seemed distracted as I explained that I’d like him to review my exam.  “Well, give it here…” he sounded annoyed.  He looked at the first page and traced through my answer… “Ok, ok, ok – good, that’s more compact than my example solution…  yeah.  You’re right.  This is good.”  He scratched out the -20 and wrote “40” on the front cover, then quickly handed me back the book and started to look at other work on his desk…  


“Um… actually there are several more I had the same question about.  Could you look at those as well?”


Now he really seemed annoyed.  “OK, ok, give it back.”


He paged through the rest of the exam and, in each case, decided that my solution was correct.  He added a “+20” note to each of the remaining three questions.  At the end, he went through and totaled them up, scratched out the “40” on the front cover and wrote in “100” and handed me back the exam.  “There ya go.  Thanks.”


I asked if he maybe needed to change the grade in his grade book, and not just on my copy of the exam.  Apparently, this had never crossed his mind.  With no emotion other than an apparent annoyance at my continued presence in his office, he pulled out a pad of paper and made a note – presumably to remind himself to tell the appropriate teaching assistant to change the grade of… “What was your name again?” from a “Let’s see – 20% to 100%, hmm.”


There was no apparent concern on his part spawned by the fact that his grading system had underrated my performance by a factor of five.  In the grading criteria for the exam, we were reminded that our solution to each problem had to execute within 20% of the speed of the “example” solution, and our memory footprint had to be within 20% of that of the example solution.  It seems that the same 20% tolerance was not required of the grading system.  Also unspoken was the grading process itself, which must have consisted of comparing the student’s program with the example solution, giving full credit if the op codes matched 1:1 and giving no credit if they did not.  Four of my five programs were both smaller and faster than the example solution.  The remaining program was the one originally marked “correct.”


I left the office both encouraged and dismayed.  I was encouraged that my grade had jumped from the lowest echelons of failure to the maximum possible score, and dismayed by the utter indifference shown by the professor about the dramatic failure of his grading system.   In speaking with my peers, I determined that the class was divided into two groups – those who failed the exam miserably, and those who initially failed the exam miserably, but went to the professor to have their grade raised and thus passed.  


In order to process these events, I had to back away and survey the bigger picture.  Maybe the real lesson of this part of the course was not microprocessor programming.  Perhaps it taught others what it taught me – always question authority, never trust the system completely, always do your best work – and always be prepared to defend it.  These lessons have proven far more valuable in my career than the 6809 op code for CMPA – 81 Hexadecimal.  
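To give a feel for the drudgery the exam (and its grading) involved, hand assembly can be sketched as a simple table lookup.  The CMPA immediate opcode ($81) is the one cited above; the other opcode values here (LDA immediate and RTS) are recalled from the 6809 instruction set and should be checked against the datasheet – a minimal toy, not a real assembler:

```python
# Toy hand-assembler for a few Motorola 6809 instructions.
# CMPA immediate = $81, as noted in the article; LDA = $86 and
# RTS = $39 are assumed values from memory of the 6809 opcode map.
OPCODES = {
    "LDA": 0x86,   # load accumulator A, immediate mode
    "CMPA": 0x81,  # compare accumulator A, immediate mode
}
INHERENT = {
    "RTS": 0x39,   # return from subroutine (no operand byte)
}

def assemble(lines):
    """Turn mnemonic lines like 'LDA #$05' into a list of machine-code bytes."""
    code = []
    for line in lines:
        parts = line.split()
        op = parts[0]
        if op in INHERENT:
            code.append(INHERENT[op])
        else:
            operand = int(parts[1].lstrip("#$"), 16)  # strip '#$' prefix
            code.extend([OPCODES[op], operand])
    return code

program = ["LDA #$05", "CMPA #$05", "RTS"]
print([f"{b:02X}" for b in assemble(program)])  # ['86', '05', '81', '05', '39']
```

On the exam, of course, this lookup-and-encode loop ran in each student’s head, byte by byte, in a blue book with no computer or calculator in sight.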


A second lesson from this course was that, although hand assembly was the purview of “real” programmers (and was still the “official” way to program microprocessors), these methods did not represent the future of software development.  Higher-level languages and methodologies would evolve that favored productivity over +/- 20% optimization.  Eventually, even the optimization argument would fail, as optimizing compilers came to do a better job on large, complex algorithms than a typical human could manage with manual machine-coding techniques.


Today, HDLs are the official languages of FPGA development.  There is no doubt that “real” FPGA designers write HDL and will continue to do so for some time.  However, just as my hand assembly skills became obsolete, so may the entities and architectures of outrageous fortune soon abandon our daily design lives.  Already, most designs use some amount of encapsulated IP – HDL snippets squirreled away where the end-user’s eyes will never venture.  Every day, more designers enter the FPGA arena via alternative entry points like graphical block diagram editors, high-level language compilers, pre-optimized design blocks, and any number of other means that bypass the signals and registers of conventional RTL design.  


Instead of lamenting the loss of +/- 20% optimality falsely conveyed by our current design styles, we need to back away and survey the bigger picture.  Productivity will always eventually trump optimality.  Design will move from the realm of the highly skilled bit-by-bit programmers to the passing interest of the slap-it-together-from-pre-defined-blocks mentality of the system-level tourist.  The real lessons of our design experience will remain with us long after we forget the proper way to infer block RAM.  We just need to open our minds to them.

