Differentiation versus Diversity

Tilera’s Acquisition Means One Less CPU Company

There once was a time when every company had its own unique CPU architecture. Then there was a time when pretty much everyone used the same CPU architecture. Guess which era we’re living in now.

Actually, we’ve experienced both of those extremes multiple times. We have the makings of an industry cycle here. Really early computer companies (Burroughs, National Cash Register, Amdahl, International Business Machines, Data General, Digital Equipment Corporation, etc.) each invented and supported its own proprietary computer architecture. Each processor was implemented in discrete logic and occupied an entire printed-circuit board. Probably several boards, in fact. Software had no commonality at all. IBM machines couldn’t run any DEC software, which didn’t understand NCR code, which was incompatible with DG equipment, and so on.

Much later, we had homogeneous machines based on de facto standards: Think IBM PC and the x86 processor family. PCs were – and pretty much still are – interchangeable. Every PC runs the same software as every other PC.

We had almost the same thing with engineering workstations in the 1980s. Sun Microsystems made a point of using standard, nonproprietary, commercial devices. Where Daisy, Xerox, Mentor and others used proprietary hardware and software, Sun built boxes around Motorola’s 68K microprocessors and standard Ethernet interfaces. It wasn’t quite a monoculture, but it was close.  

Then we went through the RISC boom, with lots of choices. That was followed by the inevitable bust: fewer choices. Graphics processors (GPUs) went nuts in the 1990s. Now we have just Nvidia, ATI (now part of AMD), and some Intel.

That wave was followed by a raft of gonzo network processors, most of which are no longer with us. Qualcomm, Broadcom, Marvell, and a few others rose to prominence; most of the others slipped under the surface. Processor innovation comes in waves, and waves have a habit of scrubbing the beaches clean.

The receding tide of processor diversity recently swept out Tilera, one of the salty barnacles clinging tightly to the networking pier. (End of tortured metaphor.) Tilera was interesting in part because of its massive CPU core count. The company’s TILE-Gx chips currently boast up to 72 identical processor cores, with more promised. Each core is a full 64-bit processor, capable of even running Linux, and groups of neighboring cores can be ganged together under a multicore operating system. Of course, the core-to-core interconnect fabric and the shared caching structure are just as important, and just as complex. In all, Tilera pulled off an impressive engineering feat.
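To make the many-core idea concrete, here is a minimal sketch: ordinary Linux pthreads in C, nothing Tilera-specific, with a made-up worker count and array size chosen purely for illustration. It shows the symmetric, shared-memory style of parallel code a chip like this is built to run.

/* A minimal, hypothetical sketch: ordinary Linux pthreads in C, nothing
   Tilera-specific. The worker count and array size are illustrative only. */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define NUM_WORKERS 72          /* e.g., one worker per core on a 72-core part */
#define CHUNK       (1 << 16)   /* elements handled by each worker             */

static double data[(size_t)NUM_WORKERS * CHUNK];  /* shared, cache-coherent memory */

/* Each worker scales its own slice of the shared array. */
static void *worker(void *arg)
{
    size_t id    = (size_t)arg;
    size_t start = id * CHUNK;

    for (size_t i = start; i < start + CHUNK; i++)
        data[i] = data[i] * 2.0 + 1.0;

    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_WORKERS];

    for (size_t id = 0; id < NUM_WORKERS; id++) {
        if (pthread_create(&threads[id], NULL, worker, (void *)id) != 0) {
            perror("pthread_create");
            return EXIT_FAILURE;
        }
    }

    for (size_t id = 0; id < NUM_WORKERS; id++)
        pthread_join(threads[id], NULL);

    printf("processed %zu elements across %d workers\n",
           (size_t)NUM_WORKERS * CHUNK, NUM_WORKERS);
    return 0;
}

Spreading a toy loop across 72 threads is easy; as the rest of this story suggests, spreading a real workload across them efficiently is where the pain starts.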

But the company is due to be acquired by EZchip, where its CPU architecture will be absorbed into future EZchip parts. And although the acquisition price is somewhere in the high eight to low nine figures, it’s not clear that it’s a win. That amount barely covers the startup cash that the company raised during its growth phase. In other words, the investors will get their money back (maybe), but no more. In finance-speak, there’s no multiple. The company is worth only what was put into it, ten years of effort notwithstanding.

What was Tilera’s problem – if indeed there was a problem at all? After all, getting acquired by a major player is generally considered a pretty good exit strategy, and it’s hard to look askance at a check with that many zeros on it. But it feels a bit hollow to me, as if someone had bought the furniture and fixtures but left the computers behind.

Tilera’s engineering was remarkable, and its performance looked impressive, too. It was one of only a handful of massively parallel processors that actually made it into the market, with real customers using it in real products. So we have an existence proof of the concept. But as with so many innovative processors, it was too ambitious. It was too difficult to program, too difficult to model, and too different from what developers were used to. Yeah, you could get the chip to perform miracles, but you really had to want it.

That’s not a comfortable position for most programmers, nor for their bosses. It’s generally safer to use a “normal” chip based on ARM or MIPS or Power and tweak your software to provide some differentiation from all the other ARM-, MIPS-, and Power-based products. Those sorts of projects are well understood and (comparatively) easily managed. Launching a product based on an entirely new and massively parallel CPU architecture? That has “high risk” written all over it.

Moving forward, my suspicion is that EZchip will encapsulate Tilera’s technology in such a way that the scariness disappears. The on-chip mesh network is easily concealed; the processors, less so. They’re more likely to become anonymous “accelerators” that aren’t directly visible to the programmer or developer. EZchip will likely develop its own in-house firmware layer to screen the CPUs from curious eyes, downplaying their provenance and architecture. A firmware interposer also allows the company to tinker with the CPU architecture without changing the interface that programmers see. Freescale has done a similar thing with its Power-to-ARM transition, adding a level of indirection that abstracts the processor.
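To illustrate what such a firmware interposer might look like to a developer, here is a purely hypothetical sketch in C. Every name in it is invented for illustration (it is not an EZchip or Tilera interface), but it shows the idea: the caller submits opaque jobs, and whatever sits behind the boundary decides which cores, and which instruction set, do the work.

/* Purely hypothetical sketch of a firmware-interposer style "accelerator" API.
   Every name here is invented for illustration; it is not an EZchip or Tilera
   interface. The caller never sees the CPU cores behind the firmware, only an
   opaque job-submission boundary. */
#include <stdint.h>
#include <stdio.h>
#include <stddef.h>

typedef struct accel_job {
    uint32_t    opcode;     /* what the "firmware" should do (e.g., classify) */
    const void *input;      /* buffer handed across the boundary              */
    size_t      input_len;
    void       *output;     /* result buffer filled in behind the boundary    */
    size_t      output_len;
} accel_job;

/* Stand-in for the firmware side, so the sketch is self-contained. In a real
   part, everything below this comment could run on any number of hidden cores,
   with any instruction set, and the caller would never know or care. */
static int accel_submit_and_wait(const accel_job *job)
{
    if (job->opcode == 0x01 && job->output_len >= 1) {
        /* Pretend classification: short packets are "control", others "data". */
        *(uint8_t *)job->output = (job->input_len < 64) ? 1 : 0;
        return 0;
    }
    return -1;  /* unknown operation or bad buffer */
}

int main(void)
{
    uint8_t pkt[128] = {0};  /* a made-up packet */
    uint8_t verdict  = 0;

    accel_job job = {
        .opcode     = 0x01,            /* hypothetical "classify" operation */
        .input      = pkt,
        .input_len  = sizeof pkt,
        .output     = &verdict,
        .output_len = sizeof verdict,
    };

    if (accel_submit_and_wait(&job) == 0)
        printf("verdict: %s\n", verdict ? "control" : "data");
    return 0;
}

The design point is the boundary itself: because callers never name the cores or the ISA, the vendor is free to change either one behind the interface, which is exactly the kind of indirection the Freescale example relies on.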

So although Tilera’s parallel processor architecture will live on, it will operate behind a mask, like a Japanese Noh actor. The industry will have gained some differentiation, but lost some diversity.

One thought on “Differentiation versus Diversity”

  1. Hi Jim, nice to see your very positive comments about Tilera’s architecture and performance achievements. And we’ve got over 100 designs at companies like Cisco, Brocade, ZTE, and Checkpoint who agree with that. But it’s worth correcting a couple of statements:
    First, the Tilera processors are not at all hard to program… in fact, that is one of their strongest selling points. The programming model is completely aligned with programming any multi-threaded, multicore processor with coherent memory and running Linux. Consider that an Intel Ivy Bridge can have up to 15 cores and 30 threads with perhaps 60 threads in a dual-socket system, so the modern programmer already has to master programming for parallel execution. And the Tile programming tools are completely mainstream: C/C++, Java, gcc, Eclipse, gdb, etc. One of our Cisco customers stated that Tilera had the best multicore programming SW tools he had ever used.

    And as for the future, the synergy between EZchip and Tilera is tremendous and Tilera’s architecture is not going away at all. The current TILE-Gx family continues to attract new design wins, and the new processors on our roadmap will be leveraging the best of the technology that each company brought to the transaction. Rather than the ‘least common denominator’, I think you’ll see that our future processors are superior to what either company would have produced independently. Stay tuned… our customers are very excited about the direction we’re going.
