
A History of Early Microcontrollers, Part 8: The Intel 8051

Intel introduced the successor to its 8048 microcontroller, the 8051, in 1980. It’s become the immortal microcontroller, and it was all because an applications engineer forgot to bring his wallet to work one day and asked his boss at Intel to buy lunch.

Intel announced the 8048 microcontroller in 1976. The design’s largest weakness, limited memory addressability, reared its head within the first year. In one sense, that’s a great problem to have because it suggests that customers wanted even more of a good thing. Intel sold $7 million worth of 8048 and 8748 microcontrollers in 1977 and the forecast was $70 million by 1980. The 8048 was popular.

On the other hand, the 8048’s limited address space had been baked into the architecture and instruction set. A bank-switching bit in a register doubled the microcontroller’s program address space, but that was a kludge. People working at Intel quickly learned that the 8048 architecture needed to be revamped if the company wanted to address an even larger market for microcontrollers. That revamped architecture needed to be tailored for future growth, without the limitations that had been built into the 8048 in the name of expediency or cost.
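To make the squeeze concrete, here is a small, hypothetical C sketch of how a single bank bit stretches an otherwise 11-bit jump target. On the real 8048, the SEL MB0 and SEL MB1 instructions set a flag that supplies bit 11 of the address on the next JMP or CALL, selecting between two 2 KB banks; the function name and printed values below are purely illustrative and are not from the article.

#include <stdint.h>
#include <stdio.h>

/* Illustrative model of 8048-style program-memory banking: JMP/CALL
   encode an 11-bit target (one 2 KB bank), and a separate bank bit
   (set by SEL MB0 / SEL MB1 on the real part) supplies bit 11 of the
   effective address, doubling the reachable program space to 4 KB. */
static uint16_t effective_jump_target(uint16_t target11, uint8_t bank_bit)
{
    return (uint16_t)(((bank_bit & 1u) << 11) | (target11 & 0x07FFu));
}

int main(void)
{
    /* The same 11-bit target lands in a different 2 KB bank
       depending on the bank bit. */
    printf("bank 0: 0x%04X\n", effective_jump_target(0x123, 0)); /* 0x0123 */
    printf("bank 1: 0x%04X\n", effective_jump_target(0x123, 1)); /* 0x0923 */
    return 0;
}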

By 1977, John Wharton had been working as an Intel applications engineer for about a year. He’d started by helping Intel customers design systems around the company’s 8085 microprocessor but soon specialized in designs for the 8048 microcontroller, so Wharton was intimately familiar with all of the 8048 architecture’s shortcomings.

Wharton arrived at work one day in December 1977 and realized he’d forgotten his wallet. If he wanted to eat lunch, he’d need to find someone to buy it for him. He went to his boss, Lionel Smith, and said, “I left my wallet at home. I’m broke. Can you take me to lunch today?” Smith said that he couldn’t, because he had a lunch meeting scheduled to discuss the architectural successor to the 8048. However, said Smith, “They always have sandwiches there and there’s always food left over, so why don’t you just come along, and you can hide in the back and nibble on whatever is left?” Wharton agreed and attended the meeting because he was hungry, and it was time for lunch. Wharton didn’t know it at the time, but there was a year-end deadline to decide on the replacement architecture for the 8048, and the meeting he was about to attend would be a critical one.

Wharton describes that meeting in an oral history:

“I may have not gotten the details quite right because I was more interested in the food and whether the potato salad would run out before it reached my end of the table and so forth. But they were talking about various offshoots of the 8048. Lower cost versions of the ‘48, lower power versions, ways of enhancing the 8048 architecture, a 16-bit machine that may be in the works, that sort of thing.

“For the 8048, what they had determined was that the logical growth seemed to be to expand memory on chip and also to expand some peripherals on chip. The original [80]48 had a 1K on-chip program memory that was followed about a year and a half later by an 8049 with 2 kilobytes of on-chip program memory and 128 bytes of RAM versus 64 bytes of RAM.

“The logical next step was to turn the crank one more time, double the memory another time, go to 4K of [ROM], go to 256 bytes of RAM, which would have totally filled the address space and would have been the end of the line for this product. But there was customer demand for some additional peripherals as well. There was a need for additional timers, some sort of a serial port. A lot of the discussion was focusing on what sorts of peripherals to include and what sorts of trade-offs to make.

“Because the 8048 had initially been designed to solve the immediate problem just to get everything onto one chip. It was a remarkable product in that it was able to work at all. But there was sort of a mindset in that era that you’d figure out what the hardware facilities would be and then almost as an afterthought come up with an instruction set that gives you sufficient access to all of the things that the chip designers provided.

“So, for expanding the 8049 to the next chip, which would logically have been called the 8050. The plan was to sort of retrofit the ‘48 instruction set and add bank-switching instructions in order to increase the address space. Add an I/O switching instruction to get you to a second timer instead of a first timer. Various things that could be done in order to fill out the product, but that would seem to be the end of the line.”

The next afternoon, Wharton had his usual 1-on-1 meeting with his boss. Smith asked, “So what did you think of the lunch meeting yesterday?” Wharton said that the architectural changes being discussed were not the ones he’d implement if he were designing a successor to the 8048. “Why not?” asked Smith. Wharton replied:

“Well, because in the design work I’ve done, and in talking with customers and so forth with the 8048, the problems that I run into aren’t being addressed by this upgrade. If all you’re trying to do is squeeze more features into a package that’s already kind of tight, you have to do that by deleting something that’s already there or by making the product a little more difficult to use.”

What Wharton meant was that there were nearly ten variations of the 8048 by the end of 1977, and they were all a little different. To fit within the 8048’s crowded 8-bit opcode space, instructions added to implement new features had to reuse encodings freed up by deleting other instructions in some microcontroller variants. Those additions and deletions made the 8048 variants somewhat incompatible with one another, which complicated code portability and made it hard to move a design from one 8048 variant to another.

Smith realized that he had exactly the right person to define the next version of Intel’s microcontroller architecture sitting across from him. He asked Wharton to develop a new microcontroller architecture that overcame the 8048’s shortcomings. Wharton started on a Friday, spent three days over the weekend developing the design, and delivered an architecture proposal the following Monday. After discussions that included a lot of Intel’s famous “constructive confrontation” but few substantive changes, Wharton’s architecture essentially became the Intel 8051 – the microcontroller that would not die.

Intel started delivering samples of the 8051 microcontroller in 1980. One major difference between the 8051 and the 8048 was support for in-circuit emulation, which had become important as on-chip program memories grew and target applications became more complex. Intel made a bond-out version of the 8051, called the 8051E, that brought out the internal address and data buses and the control signals needed to build an in-circuit emulator.

In addition, the 8051’s basic chip layout was designed from the beginning to make it easy to drop a ROM or an EPROM into the space reserved for program memory. EPROM cells are much larger than ROM cells, so one side of the 8051’s physical layout had to be pushed out to make room for the EPROM, but this strategy proved very effective in getting all the parts out of the fab in short order.

The 8051 proved to be a monster hit for Intel. Shipments climbed to billions of units per year, and Intel sold 8051 microcontrollers for several decades. In 1998, Wharton surveyed semiconductor vendors that had adopted the 8051 architecture and found five major vendors offering microcontrollers based on the design, with more than 200 variants among them. In 2006, Wharton attended the Embedded Systems Conference in California, where he picked up a flyer from Keil Software, which offered software development tools for the 8051. That flyer listed more than 60 companies offering more than one thousand different 8051 variants. By any measure, the 8051 microcontroller was extremely successful, and it continued to sell well a quarter of a century after it was introduced.

In the 8051 panel’s oral history, Wharton explained the 8051’s longevity this way:

“In the embedded control marketplace, what we’re doing is controlling the world, interacting with the world, interacting with human beings, interacting with machinery, turning motors on and off, gasoline pumps, cash registers, keyboards, cell phones, digital cameras, and in these markets what you’re doing is a process that’s primarily controlling items, looking at inputs, making decisions, controlling outputs, but you’re doing it at real world speeds, and the real world hasn’t changed much. People type about the same speed as they did thirty years ago, so if something was adequate for typewriters thirty years ago it still works.”
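That control-loop pattern is simple enough to sketch. The fragment below is a minimal, hypothetical example written in C for an 8051 using SDCC-style register declarations: read an input pin, make a decision, drive an output pin. The pin assignments (a button on P1.0, an active-low LED on P1.1) are my own assumptions for illustration, not anything from the article or the oral history.

/* Minimal 8051 control loop, SDCC mcs51 style (Keil C51 would use
   "sfr"/"sbit" declarations instead). Pin choices are illustrative. */
__sfr  __at (0x90) P1;      /* Port 1 special function register           */
__sbit __at (0x90) BUTTON;  /* P1.0: input (quasi-bidirectional port pin)  */
__sbit __at (0x91) LED;     /* P1.1: active-low indicator output           */

void main(void)
{
    BUTTON = 1;             /* write 1 so the quasi-bidirectional pin can be read */
    while (1) {
        LED = BUTTON;       /* button pulled low lights the active-low LED */
    }
}

Everything that loop does happens at human and mechanical speeds, which is exactly Wharton’s point about why the architecture never became obsolete.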

John Wharton passed away in 2018 but his most successful creation lives on, a testament to what can happen when a product is defined by an experienced and observant applications engineer instead of by “expert” processor architects working in a factory. You’ll still find 8051 microcontrollers in current products ranging from packaged microcontroller chips in computer mice to microcontroller IP cores integrated into Bluetooth chips. Many real-world microcontroller applications don’t require any more capability from a microcontroller than what’s available from an 8051. They didn’t in 1980, and they still don’t today.

If you have an 8051 story, please feel free to write about it in the comments below. I am sure there are hundreds of stories waiting to be told.

 

References

Intel 8051 Microprocessor Oral History Panel, Computer History Museum, September 16, 2008.
