
Nuvia: Designed for the One Percenters

Secretive CPU Startup Aims to Power Massive Datacenters

“I’d love to be incredibly wealthy for no reason at all.” – Johnny Rotten

Among sports car aficionados, a “Super Seven” is a 1960s-era Lotus: light, fast, nimble, and characteristically fragile. Marvel superhero Wolverine drives one; the unnamed protagonist in The Prisoner famously had one, too. 

This is not one of those. 

To computer heavyweights, the “Super Seven” means Alibaba, Amazon, Baidu, Facebook, Google, Microsoft, and Tencent. They’re the biggest of the big datacenter users – companies so large that they design their own computers, right down to the custom microprocessors. They’re the equivalent of a computing black hole. They have their own gravity and suck in everything around them. The datacenter universe revolves around them. 

So, it’s no surprise that processor vendors (Intel, AMD, et al.) focus their high-end development efforts on the needs of the Super Seven. Do you need AI acceleration? Right away, sir, we’ll get our best engineers right on that. Better multithreading performance? As you wish. 

But it’s tough to compete with your own customers, especially when those customers are bigger, wealthier, and more attuned to their performance needs than you are. Google’s in-house Tensor Processing Units, for example, are exactly what Google wants them to be. Plus, Google doesn’t necessarily need those chips to be profitable. The company makes its money elsewhere, not by selling silicon. That’s frustrating for commercial chipmakers, who yearn for the profit margins and prestige of supplying a top-tier datacenter, yet also need to make a steady profit on chips that fit with the rest of their product lines. 

In between these two colliding superpowers we have an itty-bitty little startup called Nuvia. The Silicon Valley company numbers just 60 people (for now) and hopes to position itself smack in the middle of this fray by creating ultra-high-end processors aimed directly at the Super Seven. 

The first Nuvia chips are still years away from shipping, but the company isn’t hurting for audacious goals or brand-name talent. The CEO and heads of engineering are all ex-Apple and Google chip designers, the marketing VP comes from Intel, and Nuvia just scooped up a large number of CPU designers in Austin following Samsung’s recent layoff there. The company reportedly has $53 million in venture backing, including an unspecified investment from Dell Technologies Capital. Yes, the financial arm of the company that makes a whole lot of x86-based servers. 

(Nuvia is also under a bit of a legal cloud, as Apple has sued one of the founders for taking a little too much of his previous employer’s IP with him on the way out the door.) 

Will Nuvia’s processor be an x86 clone? Probably not, although the company isn’t saying. Marketing VP Jon Carvill describes their CPU as a “clean sheet design, unencumbered by legacy infrastructure. It’s a custom in-house CPU design and we’re not using a licensed CPU core.” That suggests that Nuvia is creating its own custom design, not just tweaking ARM, MIPS, x86, RISC-V, or another familiar architecture. 

Won’t that present compatibility problems? After all, many a clever CPU design has faltered because there was simply no software for it, while lesser CPUs have succeeded through an abundance of legacy code. “These customers have enormous software development teams,” says Carvill. Amazon, Google, and the rest already write their own code, and they have no problem rewriting it again if it means gaining a significant price/performance advantage. Selling to the Super Seven isn’t like selling in the merchant market. Third-party ecosystems don’t apply. 

But Carvill’s statement about not using a licensed CPU core doesn’t necessarily mean the company isn’t using a licensed CPU architecture. Apple, Google, and Samsung all designed custom ARM implementations under license from ARM. Those companies weren’t using ARM’s hardware designs – the cores – but they were using the company’s ISA. It’s possible that Nuvia’s processor will be compatible with an existing CPU family, even if the hardware implementation is unique. 

Carvill goes on to say, “There will be ISA compatibility for backward compatibility.” Such compatibility might be more for convenience than for performance, however. In other words, the Nuvia processor might be able to run ARM (or whatever) code in addition to its “real” instruction set, which is likely to be tuned for hyper-scale datacenter applications. The “backward compatibility” he describes would be helpful for bringing up legacy code while the programmers work on a native Nuvia implementation. 

This wouldn’t be the first time that a familiar CPU architecture rode sidecar on a custom CPU. Esperanto, for example, uses the RISC-V architecture as a sort of underlying chassis supporting the chip’s “real” instruction set, which is focused on AI. Similarly, Wave Computing builds its processor on top of MIPS (which it now owns), even though MIPS compatibility isn’t the real goal. 

The messaging from Nuvia is that the company is going after ultimate performance – no half measures. “We’re not scaling up from a mobile world,” says Carvill, an apparent swipe at ARM’s humble beginnings. Nuvia’s processor will be a beast, and any software compatibility with x86, ARM, PowerPC, or whatever will likely be incidental. It must be liberating to design an all-new CPU with ultimate performance in mind, while concerns about power consumption, cost, and software compatibility take a back seat. 

The top 1% live differently. Oprah Winfrey flies in English muffins from Napa Valley. Jeff Bezos probably has a Zen garden decorated with moon rocks. The datacenter Super Seven do things their own way because they can, including writing their own code and developing their own hardware. Nuvia is going after that market. Small, but huge. And extremely rich. Good luck to ’em.
