
Breakout

From Hardware to Software – and Back

It was crunch time – the end of the semester in EE school, early 1980s. I, in my usual undergraduate fashion, had procrastinated and was now slamming through a unit every day of my badly self-paced logic design course. The course was designed so that we could complete an average of a unit per week and finish within the semester. Each unit had a section of the textbook to study (such as “Karnaugh Maps”), a quiz to pass, and a lab to complete (like “verifying Karnaugh simplification using TTL logic”). I had waited until there were exactly enough days remaining in the semester so that I could complete a unit per day – every day, including Saturdays – and still finish the number of units required for an A grade. There was no slack. Each night, I studied the next unit. The next morning, I went to the exam room and took the exam on the material I’d learned the night before. Then, in the afternoon, I went to the lab and completed the lab work to finish off the unit. After that, it was back home to do the reading for the next unit and the next morning’s exam – with a bit of sleep thrown in. Somehow, in the midst of all this, I was also completing the final work in my other four EE courses.
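For anyone who never suffered through such a unit: a Karnaugh map is a visual way of reducing a truth table to a minimal two-level sum-of-products expression. As a rough illustration (my own sketch, not anything from the course – it leans on SymPy’s SOPform, which performs the equivalent minimization algorithmically rather than by eye):

from sympy import symbols
from sympy.logic import SOPform

# A 3-variable function that is true on minterms 0, 1, 2, 3, and 7
a, b, c = symbols('a b c')
f = SOPform([a, b, c], minterms=[0, 1, 2, 3, 7])

# On a Karnaugh map, minterms 0-3 group into ~a, and minterms 3 and 7
# group into b & c, giving the minimal two-level cover:
print(f)  # prints something like (b & c) | ~a

The TTL lab version of that exercise was the same idea, built from 7400-series gates instead of print statements.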

That night, I didn’t care. I had finished my lab, and the reading for the next unit was beckoning. I should have been tired, but my mind was racing with excitement. I was on the verge of a religious experience for an aspiring logic designer. Instead of heading home to my “study nest,” I drove to a nearby hotel. There, in a conference room, fourteen or fifteen members of a local computer club were gathered for their monthly meeting. I was a guest. I arrived early and grabbed a folding chair near the front of the small conference room. An invited speaker was scheduled for that evening, and I didn’t want to miss even the tiniest bit.

The speaker began by telling us about his college days. He, like me, had been an electrical engineering student. He had also, it seems, been something of a prankster. The tiny audience was in stitches as he related how he designed and built a small battery-powered device that would interfere with TV signals. If he turned it on in a room with a TV that used “rabbit ear” antennas, the picture would fuzz out and become unwatchable. The device was simple, but the way he applied it was devious. He sat quietly in the back of a TV lounge in the dorm with the device. At some point, he would turn it on and the picture would fuzz. If someone slapped the TV, or placed a hand on it, he’d turn the device off and let the picture return to normal – thus building a myth that the TV in the lounge could be “fixed” by touching or slapping it. Over time, he evolved the required maneuver to be more and more complex, simply by waiting until someone did something unusual before restoring the picture. At one point, he saw someone put a hand in the middle of the screen and raise one leg… Click – he quietly restored the picture. As long as the student stood with a hand in the middle of the screen and one leg raised, he allowed the TV to work uninterrupted.

As the talk continued and the audience laughed, I found myself growing a little impatient. The pranks were great, but I had come to hear about masterful logic design. While I was using Karnaugh maps to simplify logic expressions down to two levels, or designing simple state machines with three or four states and assembling them in the lab, this guy had been designing a full-fledged video game using TTL logic gates. While working at Hewlett-Packard, he had moonlighted for Atari, developing a low-chip-count, all-TTL arcade game called BREAKOUT – for a fee of about $350. The thought of implementing such a complex function purely in logic, with no microprocessor, amazed me. I wanted to develop the kind of logic design skills that had enabled this guy, Steve Wozniak, to create the original hardware design for BREAKOUT.

Later, it seems, he used his BREAKOUT experience to guide him in another project – the design of the Apple ][ computer. Mr. Wozniak wanted to be able to play BREAKOUT at home on his own machine, rather than relying on the arcade version. While the Apple ][ may have functioned as a general-purpose computing machine, its capabilities were assembled with a particular application in mind – the game of BREAKOUT, implemented in software in the BASIC language. Wozniak had added low-resolution color graphics (a 40×48 array of large pixels, two of them stacked in each character position of the 40×24 text screen). He had included a game controller with a knob and button to enable BREAKOUT. He had added a small speaker that could be driven from software to give the game its sound effects. Finally, he had included a sample BASIC version of the BREAKOUT game on cassette tape (the mass storage medium of choice for home PCs in those days).

When the Apple ][ first came to market, we didn’t realize that we were seeing an application-specific piece of hardware de-constructed into a general-purpose machine. Nor did we think about the fact that a complex application that had once required expensive dedicated hardware to meet performance goals could now be implemented in software on a much less expensive platform. This migration of hardware into software to save cost, or of software into hardware to increase performance, is dramatically evident in today’s FPGA design space. Soft-core processors allow the creation of low-cost embedded computing systems, while the programmable fabric allows performance-critical functions to be accelerated in hardware.

Today, many FPGA families have an evolutionary history similar to the Apple ][’s. Back in the marketing and engineering labs, designers look at specific potential applications for FPGAs – wireless base stations, portable display controllers, security camera drivers – the list goes on and on. For each of these, they create sample designs and evaluate their hardware requirements. Each design breaks down into characteristics and features that are required in an FPGA. The amount of block RAM, the number of multipliers, the ratio of fabric to I/O, the number and speed of high-speed serial transceivers – all can usually be traced back to the requirements of a specific target design. By vetting a set of hardware specifications against a variety of target applications, the FPGA company can come up with a product table for an FPGA family that it is confident will meet the demands of a large segment of target customers.

For design teams developing “targeted” applications, the freeway is paved smooth and wide. Specialized development boards, correct proportions of logic and other hardware resources, 80%-“done” sample reference designs, and a selection of appropriate IP blocks are waiting to speed those lucky design teams quickly and uneventfully to the finish line. For the rest of the design community – those using the more “general purpose” attributes of FPGAs – the game can be a little more uncertain. However, just as the most interesting applications for the Apple ][ turned out to be a far cry from BREAKOUT, the most interesting applications of FPGAs will not be the “middle of the road” designs envisioned by the marketing teams that put together the original device specifications. Instead, they will flow from the creative genius of Woz-like engineers with their $99 development boards and free web-based software. They will push the boundaries of what the creators of the FPGA technology thought possible, bringing us FPGA-powered systems that will form the basis of old-timers’ tales for future generations of engineers.

This year, we’ll see an astonishing array of new capabilities announced in programmable logic – from a number of vendors.  These new technologies will once again raise the bar on what we can accomplish with programmable logic devices, giving programmable logic inroads into new application areas where other technologies previously reigned – or where no solution has ever been available.  It is up to us as engineers to bring these capabilities to life in new and innovative designs.
