
A History of Early Microcontrollers, Part 8: The Intel 8051

Intel introduced the successor to its 8048 microcontroller, the 8051, in 1980. It’s become the immortal microcontroller, and it was all because an applications engineer forgot to bring his wallet to work one day and asked his boss at Intel to buy lunch.

Intel announced the 8048 microcontroller in 1976. The design’s largest weakness, limited memory addressability, reared its head within the first year. In one sense, that’s a great problem to have because it suggests that customers wanted even more of a good thing. Intel sold $7 million worth of 8048 and 8748 microcontrollers in 1977, and the forecast was $70 million by 1980. The 8048 was popular.

On the other hand, the 8048’s limited address space had been baked into the architecture and instruction set. A bank-switching bit in a register doubled the size of the microcontroller’s program address space, but that was a kludge of a fix. People at Intel quickly learned that the 8048 architecture needed to be revamped if the company wanted to address an even larger market for microcontrollers. The revamped architecture needed to be better tailored for future growth, without the limitations that had been built into the 8048 in the name of expediency or cost.
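For readers curious about how that bank-switching bit worked, here is a minimal sketch in C, assuming the mechanism as commonly documented for the MCS-48 family: the SEL MB0 and SEL MB1 instructions latch a single flip-flop, and that bit becomes address bit A11 of the 12-bit program counter on the next JMP or CALL, selecting one of two 2 KB banks. This is an illustrative model, not Intel code, and the function and variable names are invented for the example.

#include <stdint.h>
#include <stdio.h>

/* Illustrative model (not Intel code) of the 8048's program-memory bank
   switching: SEL MB0/SEL MB1 latch a single flip-flop, and that bit is
   copied into A11 of the 12-bit program counter on the next JMP or CALL,
   selecting one of two 2 KB banks. All names are invented for this sketch. */

static uint8_t mb_flipflop = 0;                 /* the bank-select bit (MB) */

static void sel_mb0(void) { mb_flipflop = 0; }  /* models the SEL MB0 instruction */
static void sel_mb1(void) { mb_flipflop = 1; }  /* models the SEL MB1 instruction */

/* A JMP or CALL opcode carries an 11-bit target; A11 comes from MB. */
static uint16_t jmp_target(uint16_t addr11)
{
    return (uint16_t)((mb_flipflop << 11) | (addr11 & 0x07FF));
}

int main(void)
{
    sel_mb0();
    printf("JMP 0x123 with MB=0 -> PC = 0x%03X\n", (unsigned)jmp_target(0x123));
    sel_mb1();
    printf("JMP 0x123 with MB=1 -> PC = 0x%03X\n", (unsigned)jmp_target(0x123));
    /* One extra bit doubles the reachable program space from 2 KB to 4 KB,
       and 4 KB is still a hard ceiling. */
    return 0;
}

The point of the sketch is simply that one latched bit buys one more address line, and then the architecture hits a hard ceiling again.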

By 1977, John Wharton had been working as an Intel applications engineer for about a year. He’d started by helping Intel customers design systems around the company’s 8085 microprocessor but soon specialized in designs for the 8048 microcontroller, so Wharton was intimately familiar with all of the 8048 architecture’s shortcomings.

Wharton arrived at work one day in December 1977 and realized he’d forgotten his wallet. If he wanted to eat lunch, he’d need to find someone to buy lunch for him. He went to his boss, Lionel Smith, and said, “I left my wallet at home. I’m broke. Can you take me to lunch today?” Smith said that he couldn’t, because he had a lunch meeting scheduled to discuss the architectural successor to the 8048. However, said Smith, “They always have sandwiches there and there’s always food left over, so why don’t you just come along, and you can hide in the back and nibble on whatever is left?” Wharton agreed and attended the meeting because he was hungry, and it was time for lunch. Wharton didn’t know it at the time, but there was a year-end deadline for deciding on the 8048’s replacement architecture, and the meeting he was about to attend would be a critical one.

Wharton describes that meeting in an oral history:

“I may have not gotten the details quite right because I was more interested in the food and whether the potato salad would run out before it reached my end of the table and so forth. But they were talking about various offshoots of the 8048. Lower cost versions of the ‘48, lower power versions, ways of enhancing the 8048 architecture, a 16-bit machine that may be in the works, that sort of thing.

“For the 8048, what they had determined was that the logical growth seemed to be to expand memory on chip and also to expand some peripherals on chip. The original [80]48 had a 1K on-chip program memory that was followed about a year and a half later by an 8049 with 2 kilobytes of on-chip program memory and 128 bytes of RAM versus 64 bytes of RAM.

“The logical next step was to turn the crank one more time, double the memory another time, go to 4K of [ROM], go to 256 bytes of RAM, which would have totally filled the address space and would have been the end of the line for this product. But there was customer demand for some additional peripherals as well. There was a need for additional timers, some sort of a serial port. A lot of the discussion was focusing on what sorts of peripherals to include and what sorts of trade-offs to make.

“Because the 8048 had initially been designed to solve the immediate problem just to get everything onto one chip. It was a remarkable product in that it was able to work at all. But there was sort of a mindset in that era that you’d figure out what the hardware facilities would be and then almost as an afterthought come up with an instruction set that gives you sufficient access to all of the things that the chip designers provided.

“So, for expanding the 8049 to the next chip, which would logically have been called the 8050. The plan was to sort of retrofit the ‘48 instruction set and add bank-switching instructions in order to increase the address space. Add an I/O switching instruction to get you to a second timer instead of a first timer. Various things that could be done in order to fill out the product, but that would seem to be the end of the line.”

The next afternoon, Wharton had his usual 1-on-1 meeting with his boss. Smith asked, “So what did you think of the lunch meeting yesterday?” Wharton said that the architectural changes being discussed were not the ones he’d implement if he were designing a successor to the 8048. “Why not?” asked Smith. Wharton replied:

“Well, because in the design work I’ve done, and in talking with customers and so forth with the 8048, the problems that I run into aren’t being addressed by this upgrade. If all you’re trying to do is squeeze more features into a package that’s already kind of tight, you have to do that by deleting something that’s already there or by making the product a little more difficult to use.”

What Wharton meant was that there were nearly ten variations of the 8048 by the end of 1977, and they were all a little different. To fit new features into the 8048’s limited 8-bit opcode space, the instructions added to support those features had to displace existing instructions, which meant dropping the corresponding features in some microcontroller variants. The added and deleted instructions made the 8048 variants somewhat incompatible with one another, which complicated code portability and made it hard to move a design from one 8048 variant to another.

Smith realized that he had exactly the right person to define the next version of Intel’s microcontroller architecture sitting across from him. He asked Wharton to develop a new microcontroller architecture that overcame the 8048’s shortcomings. Wharton started on a Friday and took a total of three days over the weekend to develop and deliver an architecture proposal the following Monday. After discussions that included a lot of Intel’s famous “constructive confrontation” but few substantive changes, Wharton’s architecture essentially became the Intel 8051 – the microcontroller that would not die.

Intel started delivering samples of the 8051 microcontroller in 1980. One major difference between developing for the 8051 and for the 8048 was that in-circuit emulation had become important, driven by the growing size of the processors’ program space and the growing complexity of the target applications. Intel therefore made a bond-out version of the 8051, called the 8051E, that brought out the internal address and data buses and the control signals needed to build an in-circuit emulator.

In addition, the 8051’s basic chip layout was designed from the beginning to make it easy to drop a ROM or an EPROM into the space reserved for program memory. EPROM cells are much larger than ROM cells, so one side of the 8051’s physical layout had to be pushed out to make room for the EPROM, but this strategy proved very effective in getting all the parts out of the fab in short order.

The 8051 proved to be a monster hit for Intel. Shipments climbed to billions of units per year, and Intel sold 8051 microcontrollers for several decades. In 1998, Wharton surveyed the semiconductor vendors that had adopted the 8051 architecture and found five major vendors offering microcontrollers based on the 8051 design; in aggregate, they offered more than 200 variants of the device. In 2006, Wharton attended the Embedded Systems Conference in California, where he picked up a flyer from Keil Software, which offered software development tools for the 8051. That flyer listed more than 60 companies offering more than one thousand different 8051 variants. By any measure, the 8051 microcontroller was extremely successful, and it continued to sell well a quarter of a century after it was introduced.

In the 8051 panel’s oral history, Wharton explained the 8051’s longevity this way:

“In the embedded control marketplace, what we’re doing is controlling the world, interacting with the world, interacting with human beings, interacting with machinery, turning motors on and off, gasoline pumps, cash registers, keyboards, cell phones, digital cameras, and in these markets what you’re doing is a process that’s primarily controlling items, looking at inputs, making decisions, controlling outputs, but you’re doing it at real world speeds, and the real world hasn’t changed much. People type about the same speed as they did thirty years ago, so if something was adequate for typewriters thirty years ago it still works.”

John Wharton passed away in 2018, but his most successful creation lives on, a testament to what can happen when a product is defined by an experienced and observant applications engineer instead of by “expert” processor architects working in a vacuum. You’ll still find 8051 microcontrollers in current products ranging from packaged microcontroller chips in computer mice to microcontroller IP cores integrated into Bluetooth chips. Many real-world microcontroller applications don’t require any more capability than an 8051 provides. They didn’t in 1980, and they still don’t today.

If you have an 8051 story, please feel free to write about it in the comments below. I am sure there are hundreds of stories waiting to be told.

 

References

Intel 8051 Microprocessor Oral History Panel, Computer History Museum, September 16, 2008.
