
Nuvia: Designed for the One Percenters

Secretive CPU Startup Aims to Power Massive Datacenters

“I’d love to be incredibly wealthy for no reason at all.” – Johnny Rotten

Among sports car aficionados, a “Super Seven” is a 1960s-era Lotus: light, fast, nimble, and characteristically fragile. Marvel superhero Wolverine drives one; the unnamed protagonist in The Prisoner famously had one, too. 

This is not one of those. 

To computer heavyweights, the “Super Seven” means Alibaba, Amazon, Baidu, Facebook, Google, Microsoft, and Tencent. They’re the biggest of the big datacenter users – companies so large that they design their own computers, right down to the custom microprocessors. They’re the computing equivalent of black holes: they have their own gravity and suck in everything around them. The datacenter universe revolves around them.

So, it’s no surprise that processor vendors (Intel, AMD, et al.) focus their high-end development efforts on the needs of the Super Seven. Do you need AI acceleration? Right away, sir, we’ll get our best engineers right on that. Better multithreading performance? As you wish. 

But it’s tough to compete with your own customers, especially when those customers are bigger, wealthier, and more attuned to their performance needs than you are. Google’s in-house Tensor Processing Units, for example, are exactly what Google wants them to be. Plus, Google doesn’t necessarily need those chips to be profitable. The company makes its money elsewhere, not by selling silicon. That’s frustrating for commercial chipmakers, who yearn for the profit margins and prestige of supplying a top-tier datacenter, yet also need to make a steady profit on chips that fit with the rest of their product lines.

In between these two colliding superpowers we have an itty-bitty little startup called Nuvia. The Silicon Valley company numbers just 60 people (for now) and hopes to position itself smack in the middle of this fray by creating ultra-high-end processors aimed directly at the Super Seven.

The first Nuvia chips are still years away from shipping, but the company isn’t hurting for audacious goals or brand-name talent. The CEO and heads of engineering are all ex-Apple and Google chip designers, the marketing VP comes from Intel, and Nuvia just scooped up a large number of CPU designers in Austin following Samsung’s recent layoff there. The company reportedly has $53 million in venture backing, including an unspecified investment from Dell Technologies Capital. Yes, the financial arm of the company that makes a whole lot of x86-based servers. 

(Nuvia is also under a bit of a legal cloud, as Apple has sued one of the founders for taking a little too much of his previous employer’s IP with him on the way out the door.) 

Will Nuvia’s processor be an x86 clone? Probably not, although the company isn’t saying. Marketing VP Jon Carvill describes its CPU as a “clean sheet design, unencumbered by legacy infrastructure. It’s a custom in-house CPU design and we’re not using a licensed CPU core.” That suggests that Nuvia is creating its own custom design, not just tweaking ARM, MIPS, x86, RISC-V, or another familiar architecture.

Won’t that present compatibility problems? After all, many a clever CPU design has faltered because there was simply no software for it, while lesser CPUs have succeeded through an abundance of legacy code. “These customers have enormous software development teams,” says Carvill. Amazon, Google, and the rest already write their own code, and they have no problem rewriting it again if it means gaining a significant price/performance advantage. Selling to the Super Seven isn’t like selling in the merchant market. Third-party ecosystems don’t apply. 

But Carvill’s statement about not using a licensed CPU core doesn’t necessarily mean the company isn’t using a licensed CPU architecture. Apple, Google, and Samsung all designed custom ARM implementations under license from ARM. Those companies weren’t using ARM’s hardware designs – the cores – but they were using the company’s ISA. It’s possible that Nuvia’s processor will be compatible with an existing CPU family, even if the hardware implementation is unique. 

Carvill goes on to say, “There will be ISA compatibility for backward compatibility.” Such compatibility might be more for convenience than for performance, however. In other words, the Nuvia processor might be able to run ARM (or whatever) code in addition to its “real” instruction set, which is likely to be tuned for hyper-scale datacenter applications. The “backward compatibility” he describes would be helpful for bringing up legacy code while the programmers work on a native Nuvia implementation. 

This wouldn’t be the first time that a familiar CPU architecture rode sidecar on a custom CPU. Esperanto, for example, uses the RISC-V architecture as a sort of underlying chassis supporting the chip’s “real” instruction set, which is focused on AI. Similarly, Wave Computing builds its processor on top of MIPS (which it now owns), even though MIPS compatibility isn’t the real goal. 

The messaging from Nuvia is that the company is going after ultimate performance – no half measures. “We’re not scaling up from a mobile world,” says Carvill, an apparent swipe at ARM’s humble beginnings. Nuvia’s processor will be a beast, and any software compatibility with x86, ARM, PowerPC, or whatever will likely be incidental. It must be liberating to design an all-new CPU with ultimate performance in mind, while concerns about power consumption, cost, and software compatibility take a back seat. 

The top 1% live differently. Oprah Winfrey flies in English muffins from Napa Valley. Jeff Bezos probably has a Zen garden decorated with moon rocks. The datacenter Super Seven do things their own way because they can, including writing their own code and developing their own hardware. Nuvia is going after that market. Small, but huge. And extremely rich. Good luck to ’em.

