
A Brief History of the Single-Chip DSP, Part I

The Birth of Single-Chip DSPs Required a Three-Decade Gestation Period

DSP dates back to the very beginnings of the digital age, perhaps even a little bit before. If the construction of the first digital computer, ENIAC, in 1946 marks the beginning of the digital age, then DSP popped up a scant two years later. The IEEE published a monograph in 1998 titled “Fifty Years of Signal Processing: The IEEE Signal Processing Society and its Technologies 1948-1998,” which marks the start of the DSP age in 1948 by calling that year the DSP annus mirabilis. That’s the year that Claude Shannon at Bell Telephone Laboratories published his landmark paper “A Mathematical Theory of Communication,” which carved in stone the relationship between achievable bit rate, channel bandwidth, and signal-to-noise ratio.
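
For the record, that relationship, now known as the Shannon-Hartley theorem, says that a channel of bandwidth B (in Hz) with signal-to-noise power ratio S/N can carry at most

C = B log2(1 + S/N)

bits per second, no matter how clever the modulation scheme. As a quick worked example, a 3.1-kHz telephone channel with a 30-dB signal-to-noise ratio tops out at roughly 3,100 × log2(1001), or about 31 kbps.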

It’s also the year that Shannon, Bernard M. Oliver, and John R. Pierce – all at Bell Labs – published “The Philosophy of PCM,” which put the stamp of practicality on pulse code modulation, first envisioned by Alec Reeves in 1937. (Bernard Oliver is perhaps better known in wider circles as Barney Oliver, the brilliant man who founded HP Labs in 1966, but that’s a different story entirely.) Shannon, Oliver, and Pierce were documenting some of the PCM concepts used to build the top-secret SIGSALY secure speech system, a room-sized, 50-ton behemoth that encoded and encrypted the most important speech communications for the Allied forces during World War II.

Coincidentally, Bell Labs announced the development of the transistor on June 30, 1948, the same year it published the two landmark papers that sparked the DSP revolution. (The actual development of the transistor occurred the year before.) The transistor and solid-state electronics would be needed to transform the concepts in the papers published by Shannon, Oliver, and Pierce into practical technologies inexpensive enough to change the world of electronics, so 1948 was truly DSP’s annus mirabilis.

After 1948, not much happened with DSP technology for a very long time. Digital electronics was too nascent a field for DSP to become practical, at least not for real-time signal processing. During that period, a lot of DSP involved manual entry of numbers into Friden and Marchant mechanical calculators, which was wildly impractical for audio or video communications. The budding world of DSP awaited a critical development. Actually, several critical developments.

This is the story of how DSP and single-chip DSPs managed to take over the entire world of signal processing. It parallels the history of digital electronics itself, spanning the development of integrated circuits (ICs), microprocessors, DSPs, and FPGAs. Spoiler alert: FPGAs win big in the end.

A Few Shaky Steps

The first critical development required to make DSP practical was the invention of the IC. Nearly simultaneously, Jack Kilby at Texas Instruments (TI) and Robert Noyce at Fairchild Semiconductor envisioned two wildly different ways to build the first integrated circuits. Kilby at TI filed for a patent first, in February 1959. Kilby had envisioned building multiple electronic components on one bar of semiconductor material and then using small gold bond wires to hook them together. He actually built such a circuit in 1958, on a germanium bar, before filing for the patent. However, Kilby’s intricate, largely manual assembly process was impractical and unlikely to scale up for commercial volume production.

Noyce’s idea, developed early in 1959, was to use photolithography, which Fairchild was already using to make silicon transistors, to image multiple electronic components on one die and then interconnect them with a metal layer patterned using the same photolithographic techniques. Noyce’s approach built on the planar process developed by his Fairchild colleague Jean Hoerni, which has been used to make ICs ever since. Noyce and Fairchild filed for patents on these ideas later than Kilby, but still in 1959.

A practical manufacturing method for making ICs was only the first of many critical developments needed. Early digital ICs were far too primitive and incorporated far too few transistors to seriously consider using them for practical DSP. That’s because DSP involves an extremely esoteric concept called math. In particular, you need two critical mathematical operators – multiplication and addition – and you need to use lots and lots of these operations to perform DSP. Some of us became digital engineers so we could forget all about math. Not so with DSP engineering. When working with DSP, there’s no escaping the math.
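
To make that math concrete, here’s a minimal sketch of the multiply-accumulate loop at the heart of an FIR filter, the bread-and-butter DSP operation. It’s written in present-day C, which is purely illustrative; nothing like it existed in the early IC era:

/* Minimal FIR filter sketch: each output sample costs num_taps
 * multiplications and num_taps additions (multiply-accumulates).
 * An N-tap filter at sample rate fs needs N*fs MACs per second. */
float fir_sample(const float coeff[], const float history[], int num_taps)
{
    float acc = 0.0f;                  /* the accumulator */
    for (int k = 0; k < num_taps; k++)
        acc += coeff[k] * history[k];  /* one multiply, one add */
    return acc;
}

Even a modest 64-tap filter at a telephone-grade 8-kHz sample rate needs more than half a million multiply-accumulates per second, far beyond what early digital ICs could deliver.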

While the electronics world was awaiting sufficient semiconductor technology advancement to make DSP a practical technology, the rest of the world couldn’t wait. The Bell System needed to develop methods to cram more voice capacity through its immense installed base of wires, and PCM was clearly the first step. In addition, the military’s use of radar and sonar blossomed after World War II, and DSP was clearly the path to refining and improving the capabilities of those systems. Communications satellites, first envisioned in a paper written by Arthur C. Clarke in 1945, were going to need digital communications to punch through some horrendous signal-to-noise problems involved in sending signals to and receiving signals from earth orbit.

The World Was Ready, but the ICs Were Not

While the DSP world waited for semiconductor technology to catch up, the signal-processing theoreticians did not. Bishnu Atal and Manfred Schroeder at Bell Labs developed Adaptive Predictive Coding (APC) in 1967, making it possible to get moderately decent audio from a 4.8kbps bit stream.

Then Atal developed Linear Predictive Coding (LPC) for speech compression. Nearly simultaneously, Fumitada Itakura of Nagoya University and Shuzo Saito of NTT developed partial correlation (PARCOR) coding, a very similar algorithm. These new speech-processing algorithms naturally needed more computation – more multiplications and additions – making it increasingly apparent that specialized ICs would be needed to make DSP practical and cost-effective.
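
The core idea behind LPC is simple to state: predict each speech sample as a weighted sum of the previous p samples, then transmit the slowly changing weights (plus a little residual information) instead of the samples themselves. In rough notation:

x[n] ≈ a1·x[n−1] + a2·x[n−2] + … + ap·x[n−p]

with p typically around 10 for telephone-bandwidth speech. Solving for the coefficients that minimize the prediction error, and then running the resulting filters, is exactly the kind of multiply-add workload that kept outpacing the available hardware.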

But speech traveling through bandwidth-limited telephone channels was not the only signal crying out for DSP. Radar and sonar signal-processing algorithms needed it too. Television signals, which are real bandwidth hogs, needed it. Every signal being generated and received could benefit from DSP, if only the technology were practical. If only it didn’t require racks and racks of circuit boards stuffed with the medium-scale ICs that TI and a host of other vendors were selling in the 1960s.

Intel’s introduction of the first commercial microprocessor, the 4004, in 1971, was the first hint of what was to come. The Intel 4004 microprocessor could certainly multiply and add, but it could add only four bits at a time, and multiplication was a multi-step instruction sequence. The silicon was willing, but the ALU and bit width were weak.
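
To see why multiplication was such a burden, consider the classic shift-and-add algorithm that any processor without a hardware multiplier must fall back on, sketched here in C rather than in 4004 assembly:

/* Shift-and-add multiplication: the fallback on a processor with
 * an adder but no hardware multiplier. A 16x16-bit multiply loops
 * 16 times -- and a 4-bit ALU like the 4004's must also stitch
 * every 16-bit add and shift together out of 4-bit pieces. */
unsigned mul_shift_add(unsigned a, unsigned b)
{
    unsigned product = 0;
    while (b != 0) {
        if (b & 1)           /* low multiplier bit set? */
            product += a;    /* accumulate the shifted multiplicand */
        a <<= 1;             /* shift multiplicand left */
        b >>= 1;             /* shift multiplier right */
    }
    return product;
}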

The First DSP Chips Didn’t Quite Cut It

TRW managed to create and market a 16×16-bit, single-chip digital multiplier – the MPY016H – in 1976, manufactured with a 1-micron bipolar process technology. The TRW MPY016H could multiply two 16-bit numbers to produce a 32-bit result in 45nsec (40nsec for the dash-1 part), but it couldn’t add. You needed extra ICs to attach an accumulator to the multiplier. Nor could you extract the 32-bit result in one operation; you got it in two chunks through the IC’s 16-bit output port. So this product really wasn’t a DSP. It was just part of a DSP. Moreover, with two 16-bit input ports and one 16-bit output port, the TRW MPY016H had to be packaged in a wide, 64-pin DIP. It ran on 5V but needed nearly an amp to power up. At 5 watts, it needed a bit of cooling as well.

AMI introduced the S2811 Signal Processing Peripheral in 1978. It was a DSP with a 12-bit hardware multiplier, a 16-bit ALU, and a 16-bit output, but it was not designed as a single-chip DSP. AMI designed the S2811 as a memory-mapped peripheral device for the 8-bit 6800 microprocessor, which AMI also manufactured as an alternate source to the microprocessor’s originator, Motorola Semiconductor. AMI’s version of the 6800 microprocessor was called the S6800.

The 6800 microprocessor configured and accessed the AMI S2811 through one small and three larger on-chip, multiport RAMs. Although announced in 1978, the AMI S2811 was based on a difficult-to-manufacture VMOS process technology that delayed its arrival by several years. By then, several single-chip DSPs had been announced; the 16-bit microprocessor generation had arrived with the introduction of the Intel 8088, the Zilog Z8000, and the Motorola 68000; and the market for 6800 microprocessor peripherals was shrinking rapidly. Consequently, the S2811 was obsolete on arrival and never achieved commercial success.

The same year that AMI introduced the S2811 Signal Processing Peripheral, TI introduced consumers to a toy based on DSP, the battery-powered “Speak & Spell,” which used LPC as its core speech-encoding technology. The Speak & Spell toy incorporated a TI TMC0280 speech synthesizer chip, which implemented Bishnu Atal’s LPC algorithm in hardware. The TI TMC0280 was essentially a dedicated DSP.

Although the semiconductor technology of the day limited the TI Speak & Spell’s vocabulary to 165 words, the toy’s sparse vocabulary was a giant technological leap for a child’s toy, even at the steep (for the time) $50 retail price. Although the TI TMC0280 was a specialized, dedicated speech DSP, its low cost and its ability to run for quite a while on a battery pointed the way to DSP ICs soon to come.

In February 1979, Intel attempted to say “Yeah, we can do that” by announcing the Intel 2920 “Analog Signal Processor.” This oddball integrated DSP had a 9-bit ADC (8 bits plus sign) and a four-input analog multiplexer on the front end; a 9-bit DAC with an 8-channel analog sample-and-hold circuit and analog multiplexer on the back end; and a digital ALU in the middle capable of performing addition, subtraction, and absolute-value operations to produce 25-bit results. A lack of multiplication and division instructions forced the use of multiple-instruction sequences to perform these required DSP math operations: on the order of 12 instructions per multiplication and 14 instructions per division. Each Intel 2920 instruction needed about half a microsecond to execute, so a single multiplication or division took several microseconds.
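
A little arithmetic shows how crippling that was: at roughly 12 instructions per multiplication and about half a microsecond per instruction, a single multiply cost about 6 microseconds, for fewer than 170,000 multiplications per second. An N-tap filter needs N multiply-accumulates per output sample, so at a telephone-grade 8-kHz sample rate the 2920 had the budget for only about 20 filter taps, and that was before spending a single instruction on anything else.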

The Intel 2920 was intended for signal-filtering applications, but its slow execution speed, limited data path, unique instruction set, lack of a hardware multiplier, limited analog input and output voltage range, and other severe limitations doomed the IC to commercial failure. Consequently, few people remember the Intel 2920, but it too was a harbinger of DSPs to come.

As the 1970s ended, the world was clearly ready, hungry even, for real single-chip DSPs. Thanks to the theoreticians, the algorithms were developed and ready. Many signal-processing applications were begging for capable DSP silicon. All that remained was to develop the chip designs and the process technologies that could support the requirements. AMI, AT&T, Intel, Matsushita, Motorola, NEC, TI, Analog Devices, and others were all working feverishly on the problem. An explosion of DSP chips was imminent.

To be continued…

4 thoughts on “A Brief History of the Single-Chip DSP, Part I”

  1. Thanks for this fascinating and well-written 1st installment; I’m looking forward to the rest! I had no idea that Shannon, in addition to his many better-known accomplishments (I happen to be doing an information entropy calculation of my own right now), was also involved in formulating PCM. Just one of several fun discoveries for me here.

    1. Very glad to be of service, TomLoredo. I learn a lot myself in researching these articles. The Internet is such a great knowledge source, you can discover just about anything including that extra bit of info on Shannon. –Steve

  2. I am happy to add to this great article the well-documented DSP application of the MP944, produced by AMI for Garrett AiResearch for the F-14 Tomcat. First chips produced and working in March 1970. This was a 20-bit parallel processor, with a 20-bit Multiply co-processor and a 20-bit Divide co-processor. Exact same technology as the 4004 two years later but upgraded by design for military application. FirstMicroprocessor.com details this excellent application. Adding to its success was that it was also dual-redundant and self-tested in real time. Certainly a leader for microchip DSPs.
    Adding to your AMI history: in 1974, AMI fired its entire microprocessor design group because AMI marketing decided “there was no future in microprocessors.” It’s no wonder its later efforts were filled with failure. They had the design and the technology but no future thinking.

    1. Hi zajacik7 and thanks for the comment. I’m aware of the AiResearch processor for the F-14. That’s why I usually call the 4004 the first “commercial” microprocessor as it was generally available on the market as opposed to the AiResearch 6-chip microprocessor. It almost wasn’t true of the 4004 either, because it was a proprietary chip sold only to Busicom, until Robert Noyce renegotiated the deal. AMI played another huge role in making the processor chip set (along with Mostek) for HP’s Model 35, the world’s first handheld scientific calculator, introduced in 1972. Again, not a commercial processor, but a hugely important one. I experienced huge equipment lust when the HP-35 was introduced in my Junior year at college, and bought one the next year when HP dropped the price by $100. Sad to learn of AMI’s short-sighted view of the microprocessor. They could have been a contender. I’ll be writing a lot more about the 4004 in November. There’s a 50th anniversary coming up. –Steve

