
Nuvia: Designed for the One Percenters

Secretive CPU Startup Aims to Power Massive Datacenters

“I’d love to be incredibly wealthy for no reason at all.” – Johnny Rotten

Among sports car aficionados, a “Super Seven” is a 1960s-era Lotus: light, fast, nimble, and characteristically fragile. Marvel superhero Wolverine drives one; the unnamed protagonist in The Prisoner famously had one, too. 

This is not one of those. 

To computer heavyweights, the “Super Seven” means Alibaba, Amazon, Baidu, Facebook, Google, Microsoft, and Tencent. They’re the biggest of the big datacenter users – companies so large that they design their own computers, right down to the custom microprocessors. They’re the equivalent of a computing black hole. They have their own gravity and suck in everything around them. The datacenter universe revolves around them.

So, it’s no surprise that processor vendors (Intel, AMD, et al.) focus their high-end development efforts on the needs of the Super Seven. Do you need AI acceleration? Right away, sir, we’ll get our best engineers right on that. Better multithreading performance? As you wish. 

But it’s tough to compete with your own customers, especially when those customers are bigger, wealthier, and more attuned to their performance needs than you are. Google’s in-house Tensor Processing Units, for example, are exactly what Google wants them to be. Plus, Google doesn’t necessarily need those chips to be profitable. The company makes its money elsewhere, not by selling silicon. That’s frustrating for commercial chipmakers, who yearn for the profit margins and prestige of supplying a top-tier datacenter, yet also need to make a steady profit on chips that fit with the rest of their product lines.

In between these two colliding superpowers we have an itty-bitty little startup called Nuvia. The Silicon Valley company numbers just 60 people (for now) and hopes to position itself smack in the middle of this fray by creating ultra-high-end processors aimed directly at the Super Seven.

The first Nuvia chips are still years away from shipping, but the company isn’t hurting for audacious goals or brand-name talent. The CEO and heads of engineering are all ex-Apple and Google chip designers, the marketing VP comes from Intel, and Nuvia just scooped up a large number of CPU designers in Austin following Samsung’s recent layoffs there. The company reportedly has $53 million in venture backing, including an unspecified investment from Dell Technologies Capital. Yes, the venture arm of the company that makes a whole lot of x86-based servers.

(Nuvia is also under a bit of a legal cloud, as Apple has sued one of the founders for taking a little too much of his previous employer’s IP with him on the way out the door.) 

Will Nuvia’s processor be an x86 clone? Probably not, although the company isn’t saying. Marketing VP Jon Carvill describes their CPU as a “clean sheet design, unencumbered by legacy infrastructure. It’s a custom in-house CPU design and we’re not using a licensed CPU core.” That suggests Nuvia is creating its own custom design, not just tweaking an existing ARM, MIPS, x86, RISC-V, or other familiar architecture.

Won’t that present compatibility problems? After all, many a clever CPU design has faltered because there was simply no software for it, while lesser CPUs have succeeded through an abundance of legacy code. “These customers have enormous software development teams,” says Carvill. Amazon, Google, and the rest already write their own code, and they have no problem rewriting it again if it means gaining a significant price/performance advantage. Selling to the Super Seven isn’t like selling in the merchant market. Third-party ecosystems don’t apply. 

But Carvill’s statement about not using a licensed CPU core doesn’t necessarily mean the company isn’t using a licensed CPU architecture. Apple, Google, and Samsung all designed custom ARM implementations under license from ARM. Those companies weren’t using ARM’s hardware designs – the cores – but they were using the company’s ISA. It’s possible that Nuvia’s processor will be compatible with an existing CPU family, even if the hardware implementation is unique. 

Carvill goes on to say, “There will be ISA compatibility for backward compatibility.” Such compatibility might be more for convenience than for performance, however. In other words, the Nuvia processor might be able to run ARM (or whatever) code in addition to its “real” instruction set, which is likely to be tuned for hyper-scale datacenter applications. The “backward compatibility” he describes would be helpful for bringing up legacy code while the programmers work on a native Nuvia implementation. 

This wouldn’t be the first time that a familiar CPU architecture rode sidecar on a custom CPU. Esperanto, for example, uses the RISC-V architecture as a sort of underlying chassis supporting the chip’s “real” instruction set, which is focused on AI. Similarly, Wave Computing builds its processor on top of MIPS (which it now owns), even though MIPS compatibility isn’t the real goal. 

The messaging from Nuvia is that the company is going after ultimate performance – no half measures. “We’re not scaling up from a mobile world,” says Carvill, an apparent swipe at ARM’s humble beginnings. Nuvia’s processor will be a beast, and any software compatibility with x86, ARM, PowerPC, or whatever will likely be incidental. It must be liberating to design an all-new CPU with ultimate performance in mind, while concerns about power consumption, cost, and software compatibility take a back seat. 

The top 1% live differently. Oprah Winfrey flies in English muffins from Napa Valley. Jeff Bezos probably has a Zen garden decorated with moon rocks. The datacenter Super Seven do things their own way because they can, including writing their own code and developing their own hardware. Nuvia is going after that market. Small, but huge. And extremely rich. Good luck to ’em.
