
Breakout

From Hardware to Software – and Back

It was crunch time – the end of the semester in EE school – early 1980s. I, in my usual undergraduate fashion, had procrastinated and was now slamming through a unit every day of my badly self-paced logic design course. The course was designed so that we could complete an average of a unit per week and finish within the semester. Each unit had a section of the textbook to study (such as “Karnaugh Maps”), a quiz to pass, and a lab to complete (like “verifying Karnaugh simplification using TTL logic”). I had waited until there were exactly enough days remaining in the semester that I could complete a unit per day – every day, including Saturday – and still finish the number of units required for an A grade. There was no slack. Each night, I studied the next unit. The next morning, I went to the exam room and took the exam on the material I’d learned the night before. Then, in the afternoon, I went to the lab and completed the lab work to finish off the unit. After that, it was back home to do my reading for the next unit and the next morning’s exam – with a bit of sleep thrown in. Somehow, in the midst of all this, I was also completing the final work in my other four EE courses.
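(As an aside: that “verifying Karnaugh simplification” lab maps neatly onto a few lines of modern software. Below is a minimal sketch – with a textbook-style function of my own choosing, not one from the actual course – that checks a Karnaugh-map result by brute force, the way the TTL breadboard exercise checked it by probing input combinations.)

from itertools import product

# Hypothetical example (mine, not the course's): F(A,B,C,D) is the sum of
# minterms 0, 2, 8, and 10 -- the four "corners" of the K-map, which group
# into the single product term B'D'.
MINTERMS = {0, 2, 8, 10}

def f_original(a, b, c, d):
    """Unsimplified form: true exactly on the listed minterms."""
    return ((a << 3) | (b << 2) | (c << 1) | d) in MINTERMS

def f_simplified(a, b, c, d):
    """Two-level form read off the K-map: F = B'D'."""
    return (not b) and (not d)

# Exhaustively compare the two forms over all 16 input combinations --
# the software equivalent of probing every input pattern on the bench.
for a, b, c, d in product((0, 1), repeat=4):
    assert bool(f_original(a, b, c, d)) == bool(f_simplified(a, b, c, d))

print("Karnaugh simplification verified: F = B'D'")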

Tonight, I didn’t care.  I had finished my lab, and the reading for the next unit was beckoning.  I should have been tired, but my mind was racing with excitement.  I was on the verge of a religious experience for an aspiring logic designer.  Instead of heading home to my “study nest,” I took my car to a nearby hotel.  There, in a conference room, fourteen or fifteen members of a local computer club were gathered for their monthly meeting.  I was a guest.  I arrived early and grabbed a folding chair near the front of the small conference room.  There was an invited speaker scheduled for that evening and I didn’t want to miss even the tiniest bit.  

The speaker began by telling us about his college days. He, like me, had been an electrical engineering student. He had also, it seems, been somewhat of a prankster. The tiny audience was in stitches as he related how he designed and built a small battery-powered device that would interfere with TV signals. If he turned it on in a room with a TV that used a “rabbit ears” antenna, the picture would fuzz out and become unwatchable. The device was simple, but the way he applied it was devious. He sat quietly in the back of a TV lounge in the dorm with the device. At some point, he would turn it on and the picture would fuzz. If someone slapped the TV or placed a hand on it, he’d turn the device off and let the picture return to normal – thus building a myth that the TV in the lounge could be “fixed” by touching or slapping. Over time, he evolved the required maneuver to be more and more complex, simply by waiting until someone did something unusual before restoring the picture. At one point, he saw someone put a hand in the middle of the screen and raise one leg… Click – he quietly restored the picture. As long as the student stood with a hand in the middle of the screen and one leg raised, he allowed the TV to work uninterrupted.

As the talk continued and the audience laughed, I found myself growing a little impatient. The pranks were great, but I was here to hear about masterful logic design. While I was using Karnaugh maps to simplify logic expressions down to two levels, or designing simple state machines with three or four states and assembling them in the lab, this guy had been designing a full-fledged video game using TTL logic gates. While working at Hewlett-Packard, he had moonlighted for Atari – developing a low-chip-count, all-TTL arcade game called BREAKOUT – for a fee of about $350. The thought of implementing such a complex function purely in logic – with no microprocessor – amazed me. I wanted to develop the kind of logic design skills that had enabled this guy, Steve Wozniak, to create the original hardware design for BREAKOUT.

Later, it seems, he had used his BREAKOUT experience to guide him in another project – the design of the Apple ][ computer. Mr. Wozniak wanted to be able to play BREAKOUT at home on his own machine, rather than relying on the arcade version. While the Apple ][ may have functioned as a general-purpose computing machine, its capabilities were assembled with a particular application in mind – the game of BREAKOUT, implemented in software in the BASIC language. Wozniak had added low-resolution color graphics (a 40×40 array of large pixels, if memory serves correctly, that corresponded to character positions on the “all text” screen). He had included a game controller with a knob and button to enable BREAKOUT. He had added a small speaker that could be driven from software to give the game its sound effects. Finally, he had included a sample BASIC version of BREAKOUT on cassette tape (the mass storage medium of choice for home PCs in those days).

When the Apple ][ first came to market, we didn’t realize that we were seeing an application-specific piece of hardware de-constructed into a general-purpose machine. We also didn’t think about the fact that a complex application that had once required expensive dedicated hardware to meet performance goals could now be implemented in software on a much less expensive platform. This migration of hardware into software to save cost, or of software into hardware to increase performance, is dramatically evident in today’s FPGA design space. Soft-core processors on FPGAs allow the creation of low-cost embedded computing systems, while the programmable fabric allows performance-critical functions to be accelerated in hardware.
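To make that migration concrete, consider a small sketch (my own illustration – the function and polynomial are assumptions for the demo, not anything from the article): a CRC-8 computed bit-serially in software. The identical XOR-and-shift structure maps directly onto a shift register and a handful of XOR gates in FPGA fabric, which is exactly the kind of performance-critical inner loop a designer might migrate into hardware.

# A minimal sketch of the software side of the trade-off described above.
# CRC-8 with the polynomial x^8 + x^2 + x + 1 (an assumption chosen for
# the demo). In software this is a loop that costs many instructions per
# bit; in FPGA fabric the same XOR-and-shift structure becomes a shift
# register plus a few XOR gates, processing one bit per clock.

CRC8_POLY = 0x07

def crc8(data: bytes) -> int:
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):  # one iteration = one clock of the hardware version
            if crc & 0x80:
                crc = ((crc << 1) ^ CRC8_POLY) & 0xFF
            else:
                crc = (crc << 1) & 0xFF
    return crc

print(hex(crc8(b"BREAKOUT")))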

Today, many FPGA families follow an evolutionary path similar to that of the Apple ][. Back in the marketing and engineering labs, designers look at specific potential applications for FPGAs – wireless base stations, portable display controllers, security camera drivers – the list goes on and on. For each of these, they create sample designs and evaluate their hardware requirements. Each design breaks down into characteristics and features that are required in an FPGA. The amount of block RAM, the number of multipliers, the ratio of fabric to I/O, the number and speed of high-speed serial transceivers – all can usually be traced back to the requirements of a specific target design. By vetting a set of hardware specifications against a variety of target applications, the FPGA company can come up with a product table for an FPGA family that it is confident will meet the demands of a large segment of target customers.

For design teams developing “targeted” applications, the freeway is paved smooth and wide. Specialized development boards, correct proportions of logic and other hardware resources, 80%-“done” sample reference designs, and a selection of appropriate IP blocks are waiting to speed those lucky design teams quickly and uneventfully to the finish line. For the rest of the design community – those using the more “general purpose” attributes of FPGAs – the game can be a little more uncertain. However, just as the most interesting applications for the Apple ][ turned out to be a far cry from BREAKOUT, the most interesting applications of FPGAs will not be those “middle of the road” designs envisioned by the marketing teams that put together the original device specifications. Instead, they will flow from the creative genius of Woz-like engineers with their $99 development boards and free web-based software. They will push the boundaries of what the creators of the FPGA technology thought possible, bringing us FPGA-powered systems that will form the basis of old-timers’ tales for future generations of engineers.

This year, we’ll see an astonishing array of new capabilities announced in programmable logic from a number of vendors. These new technologies will once again raise the bar on what we can accomplish with programmable logic devices, giving programmable logic inroads into new application areas where other technologies previously reigned – or where no solution has ever been available. It is up to us as engineers to bring these capabilities to life in new and innovative designs.
