
What? MORE 8-Bit Microcontrollers?

With all the talk about 8-bit, 16-bit, 32-bit, and 64-bit processors that is constantly swirling around us, I’m not sure how many of today’s younger engineers are aware that the first commercial microprocessor, the Intel 4004, was a 4-bit machine (although, in hindsight — which is the one true science — the part number is a bit (no pun intended) of a giveaway).

As an aside, if you ever want to learn more about how the 4004 — and hence our modern world — came to be, I heartily recommend the columns that were penned by my friend Steve Leibson here on EEJournal: Will We Ever Really Know Who Invented the Microprocessor? and Which Was the First Microprocessor? and Say Happy 50th Birthday to the Microprocessor Part 1 and Part 2.

Microprocessors (µPs) are also referred to as microprocessor units (MPUs). Early MPUs contained only a central processing unit (CPU). Over time, other functions were added, like cache memory, floating-point units (FPUs), and memory management units (MMUs). The key point is that — other than any cache and things like FPUs and MMUs — a microprocessor doesn’t contain any internal memory or peripherals. By comparison, microcontrollers (µCs), which are also referred to as microcontroller units (MCUs), contain non-volatile memory like Flash, volatile memory like SRAM, peripherals like counters, timers, and analog-to-digital converters (ADCs), and communications interfaces like UART, I2C, and SPI. Essentially, a microcontroller is a little standalone computer presented on a single silicon chip that contains its own program, which it starts executing as soon as it powers up. This explains why microcontrollers appear in embedded systems, and why embedded systems appear all over the place. (You can read more about the difference between microprocessors and microcontrollers in my column What the FAQ are CPUs, MPUs, MCUs, and GPUs?)

The history of microcontrollers is as murky as that of microprocessors. Which was the first microcontroller? Was it a 4-bit device created for automobiles by the Japanese in the early 1970s? Or was it the 4-bit TMS 1000 created by TI engineers Gary Boone and Michael Cochran, which first saw the light of day in 1974? When it comes to 8-bit MCUs, was the Intel 8048 (a.k.a. MCS-48) the first on the scene in 1976? I don’t know. What I do know is that perhaps the most famous of the early 8-bit MCUs was the 8051 (a.k.a. MCS-51), whose instruction set architecture (ISA) was conceived by John H. Wharton, and which appeared on the market in 1980. It’s amazing to think that variations of the 8051 are still going strong to this day.

As an aside, John (RIP) once told me that when he was a young engineer working at Intel, he used to go out with his supervisor for lunch. One day they heard that there was going to be a lunchtime meeting about something or other. They weren’t sure what the focus of the meeting was to be, only that free sandwiches were to be served (ah, behold the power of a free sandwich). The meeting in question turned out to be the kick-off for the 8051, literally starting from the ground up with a blank sheet of paper (or a newly cleaned whiteboard, as the case might be). Following this meeting, stuffed with free food, John returned to his desk and sketched out what was to become the architecture (functional units, busses, etc.) and the ISA of the 8051. 

These days, there are myriad microcontrollers available to tickle our fancy. Two families that have really made their presence felt are PIC microcontrollers and AVR microcontrollers. The first 8-bit PIC (pronounced “pick”) was developed by General Instrument in 1975. I’m not sure of the nitty-gritty history here, but PICs are now the purview of Microchip Technology. Meanwhile, the original 8-bit AVR architecture was conceived by Alf-Egil Bogen and Vegard Wollan while they were students at the Norwegian Institute of Technology (NTH). This technology was subsequently acquired by Atmel, which released the first members of the AVR family in 1996. Atmel itself was subsequently acquired by Microchip Technology in 2016.

When I say that these microcontrollers have “made their presence felt,” is there any way by which we can quantify this claim? Well, by golly, I’m glad you asked, because I was just chatting with Microchip’s Greg Robinson and Brian Thorsen, where Greg is Vice President of Marketing for Microchip’s MCU8 business unit (MCU8 is their name for 8-bit MCUs) and Brian is Senior Public Relations Manager. As we see from the chart below, at the time of this writing, when it comes to 8-bit MCUs, Microchip has a 32% market share (its closest competitor, NXP, has 11%), which would certainly put a smile on my face if I were in charge of these little scamps at Microchip.

Worldwide 8-bit microcontroller market share from the Gartner 2021 Market Share Report (Image source: Microchip)

Greg told me that Microchip continues to innovate and propagate new parts into the 8-bit space. In Q2 2022, for example, Microchip is introducing five new families comprising 65 devices flaunting a cornucopia of on-chip analog and other core-independent peripherals.

In addition to traditional single-chip systems where a Microchip MCU is the only processor on board, there’s an increasing use of 8-bit processors in the role of system management ICs and co-processors — all spaces where characteristics like small size, low power consumption, and longevity are important. A lot of this is driven by the fact that we are seeing a dramatic rise of distributed intelligence with respect to applications like IoT edge nodes, automotive safety, industrial control systems, medical electronics, and home electronics, to name but a few. Even state-of-the-art 5G systems can often benefit from offloading certain tasks to smaller 8-bit processors, freeing up the higher-level processors to work their magic and do what they do best.

Greg went on to say that, as weird as it may sound, a lot of 8-bit growth is being driven by 32-bit growth, with the 32-bit processors passing off things like human machine interface (HMI) functions and housekeeping tasks to the 8-bit processors. He also noted that 8-bit machines are seeing increasing use as co-processors, performing tasks like taking sensor readings and pre-processing this sensor data before passing it on to the higher-level processor.

One of the topics we touched on was the current supply chain problems. Prior to our conversation, I hadn’t realized that 95% of the 8-bit products Microchip ships are internally manufactured, and – in addition to controlling wafer fabs in Tempe, AZ; Gresham, OR; and Colorado Springs, CO – they also own their own assembly, manufacturing, and test facilities.

Having said this, there are still shortages because of the massive demand over the past 18 to 24 months ensuing from the perfect storm caused by the combination of trade wars and the worldwide coronavirus pandemic. Greg says that you can’t just flick a switch to boost production – Microchip’s President and CEO Ganesh Moorthy has said he expects shortages to extend to 2023 – but Microchip has committed to spending $1 billion over the next few years, which will allow the company to continue to bring out new products while expanding capacity to serve demand for existing devices.

New product introductions Q2 2022 (Image source: Microchip)

Before you ask, ADCC stands for “ADC Computation,” which is a hybrid of analog and digital functionality. The on-chip analog functions, which include 8-, 10-, and 12-bit ADCs, are easily configured using graphical tools. Additional options include ADCs with associated programmable gain amplifiers (PGAs), which saves having to use an external PGA, and ADCs with context/sequencing. Other functions include on-chip comparators, digital-to-analog converters (DACs), ramp generators, temperature sensors, voltage references, zero cross detects, and operational amplifiers (opamps).

Consider the opamp example presented below. The traditional approach is to use an external opamp (left). The advantages resulting from bringing the opamp on-chip (right) include saving space on the circuit board, reducing the bill of materials (BOM), and being able to change the gain and other characteristics in software on-the-fly under program control (this is useful if you have multiple signals you want to measure with each one needing different opamp parameters).

PIC and AVR microcontrollers with internal opamps (Image source: Microchip)

The concept behind core independent peripherals (CIPs) is that the peripherals can be performing tasks on their own while the core is taking a snooze or working on more important tasks. For example, a CIP could be taking readings from a sensor and then accumulating, averaging, and/or filtering the results while the core goes to sleep. Later, when the core wakes up, the peripheral can have its pre-processed data ready and waiting.

Using core independent peripherals to create custom peripherals
(Image source: Microchip)

Where things start to become really interesting is when CIPs are ganged together to create custom peripherals, or “super peripherals,” if you will. One great example of this is illustrated below. This involves an application that wishes to control a bunch of LEDs using a serial bus communication protocol.

Ganging CIPs together to create “super peripherals” or “super modules”
(Image source: Microchip)

Specifying “which LED” and “what color” involves a fairly complex signal and can require a substantial amount of data to be sent out. This would typically require a high-speed 32-bit MCU. However, by using a handful of CIPs — a timer, SPI, PWM, and a little bit of logic implemented using CLCs (configurable logic cells) — it’s possible to implement this algorithm on an 8-bit PIC microcontroller. (This same function can be implemented on an AVR using configurable custom logic (CCL) as opposed to the PIC’s CLCs.)

The result is to allow the 8-bit MCU to drive the chain of LEDs at logic speed, which is much faster than instruction speed (that is, instructions running on the core), while freeing up the core to perform other tasks.

Having CIPs in general, and being able to gang them together in particular, opens the door to a vast range of deployment scenarios, allowing the peripherals to process all sorts of sensor data.

Common types of sensor outputs (Image source: Microchip)

Consider the example shown below, in which an 8-bit PIC or AVR microcontroller is being used to monitor the outputs from temperature, humidity, and vibration sensors. It may be that the signals from the temperature sensor require a higher gain than do those from the humidity sensor, and this can be achieved by swapping the gain of the on-chip opamp back and forth under program control.

Typical multi-sensor application (Image source: Microchip)

Meanwhile, it may be that the MCU is running at 5V, while the vibration sensor – which communicates using I2C — requires only 1.8V. In this case, rather than employing an external voltage level shifter, the solution is to employ the MCU’s multi-voltage input/output (MVIO) capabilities.

The example above shows the combination of MVIO and I2C, but MVIO can also be used with general-purpose inputs/outputs (GPIOs). In fact, this leads to another example: an 8-bit PIC or AVR MCU running at 5V may be used to read the values from a sensor, thereby achieving a better resolution than is possible with a 3.3V MCU, and the PIC/AVR can then use its MVIO capability to communicate this data to a 3.3V 32-bit MCU (a PIC32 or SAM device, for example).

One thing that can baffle newcomers to the PIC/AVR party is the humongous number of different components that are available, each with different numbers of pins and different combinations of functions and peripherals. There are several ways to address this. In my case, I ask my friend Joe Farr, who is a walking encyclopedia when it comes to Microchip’s PIC and AVR MCUs. For those who don’t have their own Joe, there is a product selector guide on the Microchip website that allows users to say, “I need this functionality” and be guided to the appropriate product. Alternatively, another route is available where users say, “I have this application in mind,” and the tool guides them not only to an appropriate part, but also to associated firmware, software, and development tools.

Greg closed our conversation by saying something I found to be very interesting, which was that it’s not just that the size of the 8-bit MCU pie is growing, but all sorts of new applications are emerging, which is like having a whole new pie. As a result, he says, Microchip is very bullish on the 8-bit MCU market, which is great news for me because I love 8-bit MCUs. How about you? Do you have any thoughts you would care to share on any of this?

7 thoughts on “What? MORE 8-Bit Microcontrollers?”

    1. I vaguely remember hearing of it, but I never used it myself. I do remember AMD’s AM2901 bit-slice processor — that was an interesting approach (was the Motorola one a bit-slice machine?)

  1. PIC was crap.
    Even the mid-range PIC16 core is painfully limited (the PIC16F1527 and similar).
    Just one gripe – both index registers are “shadowed”.
    But instead of having two sets and just flipping them on interrupt entry, they get COPIED.

    Which means that no matter what, one has to regenerate them on interrupt entry.
    The approach used is wasteful, slow, and it additionally slows down the interrupt routine.

    There are plenty of other gotchas.
    The 18F series is nicer, but used to be much more expensive – one can have a PIC32MM for that $$$;
    besides, it is mostly not 5V compatible.

    AVR OTOH seems to be pretty decent for an 8-bitter.

    Don’t know much about TI’s MSP430. Seems nice from afar.

    The 8051 was really bad. One would think Intel could come up with something better,
    WRT the core as well as the periphery. Their definition of a microcontroller was a CPU with one shitty timer, one UART, a couple of I/O pins, and perhaps a bit of PROM.

    1. I don’t disagree with anything you say — your points are valid — but… for reasons we don’t need to go into here, I currently find myself using PICs for all sorts of things, and I’m finding them to be incredibly useful for what I need to use them for.

  2. I’ve got a 9-part early history of microcontrollers coming up later this month in EEJournal. Watch for Part 1 scheduled for November 14. It might answer some of the questions my colleague Max raised in this article, although as with any history, there’s usually plenty of room for argument.

