feature article

SoC Everywhere

Is the Future All One Thing?

Sometimes, while wrapped up in the day-to-day minutiae of technology trends, we can lose sight of the big, slow movements. Underneath the fast-paced, frenetic world of next-node Moore’s Law chaos are some giant trendline tectonic plates – slowly sliding, shifting along fault lines that are barely visible in our normal tech lives.

Let’s fire up our future-facing seismometers and see what electronic bastions are poised to slide off into the ocean when the next “big one” hits. 

For the past thirty years, there has been extreme diversity in chips and in chip makers. We have processor companies making processors, of course; memory companies, microcontroller companies, FPGA companies, analog companies, RF companies, interface companies – every specialized type of chip hosts a mini-market of semiconductor specialists, competing for points of market share in their own little tightly-walled technology arena. 

Intel squares off against the likes of AMD. Xilinx and Altera feud like the Hatfields and McCoys. Linear Technology, Texas Instruments, and ADI spar with each other in the analog world. Samsung, Micron, and others dominate the memory game. Each of these contests exists almost in a bubble, oblivious to the gyrations in the adjoining market spaces. 

Deep down below the surface, however, Moore’s Law pushes the forces of integration and consolidation forward, gradually stressing the landscape above. The poster child for this movement is the never-more-appropriately-named “System on Chip” (SoC). We started calling chips SoCs before we had any right to. As soon as we could put a processor and – just about anything else – on a single chip, we called it a “System.”

Now, however, SoCs are starting to actually deserve the label. SoCs from Xilinx and Altera can have numerous 64-bit applications processors, graphics processors, MPUs, memory, FPGA fabric, non-volatile storage, high-speed IO, and even a little analog – all on one device. 3D packaging techniques promise to make this type of integration go even farther – with the potential to put silicon from completely different processes on the same interposer. It’s not inconceivable that we’d have RF, analog, heterogeneous processors, digital logic, memory, non-volatile storage, high-speed IO, and even MEMS – all in one package or on one silicon interposer.
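
To make that picture a little more concrete, here is a rough sketch (in Python, with invented die names and process nodes, purely for illustration) of what such a heterogeneous package looks like when you write it down as data: one interposer, several chiplets, each fabricated on whatever process suits it best.

```python
# A heterogeneous 2.5D package sketched as data: each die can come from a
# different process yet share one interposer. Die names and process nodes
# are invented for illustration.
from dataclasses import dataclass

@dataclass
class Die:
    name: str
    process: str      # each chiplet keeps the process best suited to it
    function: str

interposer = [
    Die("apps-cluster", "7 nm FinFET",  "quad 64-bit applications processors"),
    Die("fpga-fabric",  "7 nm FinFET",  "programmable logic"),
    Die("hbm-stack",    "DRAM process", "high-bandwidth memory"),
    Die("rf-frontend",  "RF-SOI",       "radio and analog front end"),
    Die("mems-imu",     "MEMS process", "inertial sensing"),
]

for die in interposer:
    print(f"{die.name:13s} [{die.process:12s}] {die.function}")
```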

If you look at the various specialized silicon vendors, you’ll start to notice that – at some level – almost all of them are now producing SoCs. We may be heading for a time when there is only an “SoC” market, rather than a fragmented collection of specialized silicon vendors. What we might see, then, is a re-partitioning of the market – into the application areas served, rather than the type of chip architecture being delivered. Automotive systems engineers would get their SoCs from one vendor, telecommunications and networking teams from another, and cloud computing architects from yet another. While the silicon will still be differentiated, the primary way that companies will need to compete will be on the things surrounding the silicon – support, reference designs, development boards, hardware and software development tools, IP, and so forth. 

If we look at the trajectories of the various elements, we can divine even more. For example, random logic is becoming essentially free. You can put multiple applications processors, MCUs, graphics engines, DSP processors, and other types of specialized processing elements on your chip for virtually no incremental cost in silicon area. So – why wouldn’t everybody throw them in? Even FPGA fabric is headed down the curve toward zero cost in silicon area. You can put more FPGA fabric than most applications could use on a chip for very little incremental cost. 
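
To put some (admittedly made-up) numbers behind “essentially free,” here is a back-of-envelope sketch. The wafer cost and block areas below are illustrative assumptions, not vendor figures, but the conclusion holds across a wide range of plausible values: the raw silicon for another processor block costs somewhere between pennies and a dollar.

```python
# Back-of-envelope: incremental silicon cost of "throwing in" more logic.
# Every number here is an illustrative assumption, not vendor data.
import math

WAFER_COST_USD = 10_000                          # assumed processed-wafer cost
WAFER_AREA_MM2 = math.pi * (300 / 2) ** 2        # 300 mm wafer
COST_PER_MM2 = WAFER_COST_USD / WAFER_AREA_MM2   # ignores yield, test, packaging

# Assumed block areas at an advanced node, in mm^2
blocks = {
    "small MCU core":    0.1,
    "DSP engine":        0.5,
    "64-bit apps core":  2.0,
    "GPU cluster":       4.0,
}

for name, area in blocks.items():
    print(f"{name:17s} ~{area:3.1f} mm^2 -> ~${area * COST_PER_MM2:5.2f} of raw silicon")
```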

Memory is a different story. Today’s applications have a voracious appetite for memory, and it looks like it will be a long time (or never) before “more memory than you will probably use” will be small and cheap enough to throw onto every chip. As Moore’s Law progresses, and we put as much of the non-memory gunk on our chip as anyone could want, we’ll probably use the rest of the space for memory.
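
A similar back-of-envelope calculation shows why memory is different. Assuming an SRAM bitcell in the neighborhood of 0.05 square microns (an order-of-magnitude guess for an advanced node) plus some array overhead, a gigabyte of on-chip SRAM would consume several hundred square millimeters, comparable to the largest dies in production:

```python
# Why on-chip memory never becomes "free": a rough SRAM area estimate.
# The bitcell size and overhead factor are assumed, order-of-magnitude values.

SRAM_BITCELL_UM2 = 0.05    # assumed area per 6T bitcell at an advanced node
ARRAY_OVERHEAD = 1.5       # assumed factor for sense amps, decoders, routing

def sram_area_mm2(megabytes: float) -> float:
    bits = megabytes * 8 * 1024 * 1024
    return bits * SRAM_BITCELL_UM2 * ARRAY_OVERHEAD / 1e6   # um^2 -> mm^2

for mb in (1, 16, 256, 1024):
    print(f"{mb:5d} MB of on-chip SRAM -> ~{sram_area_mm2(mb):7.1f} mm^2")
```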

By the same token, IO does not follow the curve of Moore’s Law. The amount we spend to route a single signal from our chip to a board hasn’t dropped all that much, and our appetite for more IO pins has grown rapidly, although not nearly at a Moore’s Law pace (thank goodness, or we’d have devices with a billion pins by now). The pace of increased integration helps ease the pin crunch, however: the more signals we can route between connected blocks inside our chip or module, the fewer we have to bring out to the real world on our board. 
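
Rent’s rule gives a rough way to quantify that trade. Using the classic textbook parameters for microprocessor-style logic (assumed here purely for illustration), four separate chips need roughly twice as many board-level signal connections as one integrated SoC containing the same logic:

```python
# Rent's rule (T = t * G**p) relates a block's gate count G to the number of
# external terminals T it needs. The parameters below are the classic
# textbook values for microprocessor-style logic, used here only as an
# illustrative assumption.

T_PER_GATE = 0.82   # assumed Rent coefficient
RENT_EXP = 0.45     # assumed Rent exponent

def external_signals(gates: float) -> float:
    return T_PER_GATE * gates ** RENT_EXP

# Four separate 10M-gate chips each expose their own signals to the board...
separate = 4 * external_signals(10e6)
# ...while one 40M-gate SoC keeps the block-to-block wiring on-die.
integrated = external_signals(40e6)

print(f"four separate chips: ~{separate:,.0f} board-level signals")
print(f"one integrated SoC:  ~{integrated:,.0f} board-level signals")
```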

Following these three trends, we can visualize ending up in a world where processing and logic are basically free and ubiquitous, analog is present in sufficient quantities on most devices, memory is a commodity, and IO pins are at a premium. Even using the rules of that world, we could end up with far fewer different chips than we have today. A small number of different SoCs could be adapted to solve a huge variety of problems in a wide range of applications. The people buying those SoCs would be much less interested in the hardware and its capabilities than in the software, support, and application-readiness of the ecosystem surrounding the SoCs.

While the world of chips may become less diverse and specialized, the world of applications continues to expand. That means that opportunities for “silicon” companies could be much more related to their ability to understand and serve particular applications than to their ability to build differentiated chips. Look for silicon vendors to spend more time understanding your actual challenges and less time bragging about how their max DSP performance is umpty-gazillion teraFLOPs and their power consumption is “3x lower” than their nearest competitor’s.

In this world, software IP will also take on a very important role. If the chips are not particularly differentiated, and the development tools are fairly similar from different vendors, the way to your heart will be with software stacks and reference designs that do a lot of your grunt work for you. In that world, you may just have to grab the dev kit and start work right away on your magic secret software sauce. The rest may already be done.

Top image: kristineoplado

9 thoughts on “SoC Everywhere”

  1. Actually it’s all software (or will be). Do people design much with transistors these days? – no, it’s RTL or above (for the digital at least), and that’s really just fine-grained parallel software (although it’s a bit like working in PHP). Once all the algorithms for the analog pieces get abstracted out it’ll be SDX (software defined everything).

    Would hate to be working where they don’t allow software patents…

