
Nuvia: Designed for the One Percenters

Secretive CPU Startup Aims to Power Massive Datacenters

“I’d love to be incredibly wealthy for no reason at all.” – Johnny Rotten

Among sports car aficionados, a “Super Seven” is a 1960s-era Lotus: light, fast, nimble, and characteristically fragile. Marvel superhero Wolverine drives one; the unnamed protagonist in The Prisoner famously had one, too. 

This is not one of those. 

To computer heavyweights, the “Super Seven” means Alibaba, Amazon, Baidu, Facebook, Google, Microsoft, and Tencent. They’re the biggest of the big datacenter users – companies so large that they design their own computers, right down to the custom microprocessors. They’re the equivalent of a computing black hole. They have their own gravity and suck in everything around them. The datacenter universe revolves around them.

So, it’s no surprise that processor vendors (Intel, AMD, et al.) focus their high-end development efforts on the needs of the Super Seven. Do you need AI acceleration? Right away, sir, we’ll get our best engineers right on that. Better multithreading performance? As you wish. 

But it’s tough to compete with your own customers, especially when those customers are bigger, wealthier, and more attuned to their performance needs than you are. Google’s in-house Tensor Processing Units, for example, are exactly what Google wants them to be. Plus, Google doesn’t necessarily need those chips to be profitable. The company makes its money elsewhere, not by selling silicon. That’s frustrating for commercial chipmakers, who yearn for the profit margins and prestige of supplying a top-tier datacenter, yet also need to make a steady profit on chips that fit with the rest of their product lines.

In between these two colliding superpowers we have an itty-bitty little startup called Nuvia. The Silicon Valley company numbers just 60 people (for now) and hopes to position itself smack in the middle of this fray by creating ultra-high-end processors aimed directly at the Super Seven.

The first Nuvia chips are still years away from shipping, but the company isn’t hurting for audacious goals or brand-name talent. The CEO and heads of engineering are all ex-Apple and Google chip designers, the marketing VP comes from Intel, and Nuvia just scooped up a large number of CPU designers in Austin following Samsung’s recent layoffs there. The company reportedly has $53 million in venture backing, including an unspecified investment from Dell Technologies Capital. Yes, the venture arm of the company that makes a whole lot of x86-based servers.

(Nuvia is also under a bit of a legal cloud, as Apple has sued one of the founders for taking a little too much of his previous employer’s IP with him on the way out the door.) 

Will Nuvia’s processor be an x86 clone? Probably not, although the company isn’t saying. Marketing VP Jon Carvill describes the CPU as a “clean sheet design, unencumbered by legacy infrastructure. It’s a custom in-house CPU design and we’re not using a licensed CPU core.” That suggests that Nuvia is creating its own design from scratch, not just tweaking ARM, MIPS, x86, RISC-V, or another familiar architecture.

Won’t that present compatibility problems? After all, many a clever CPU design has faltered because there was simply no software for it, while lesser CPUs have succeeded through an abundance of legacy code. “These customers have enormous software development teams,” says Carvill. Amazon, Google, and the rest already write their own code, and they have no problem rewriting it again if it means gaining a significant price/performance advantage. Selling to the Super Seven isn’t like selling in the merchant market. Third-party ecosystems don’t apply. 

But Carvill’s statement about not using a licensed CPU core doesn’t necessarily mean the company isn’t using a licensed CPU architecture. Apple, Google, and Samsung all designed custom ARM implementations under license from ARM. Those companies weren’t using ARM’s hardware designs – the cores – but they were using the company’s ISA. It’s possible that Nuvia’s processor will be compatible with an existing CPU family, even if the hardware implementation is unique. 

Carvill goes on to say, “There will be ISA compatibility for backward compatibility.” Such compatibility might be more for convenience than for performance, however. In other words, the Nuvia processor might be able to run ARM (or whatever) code in addition to its “real” instruction set, which is likely to be tuned for hyper-scale datacenter applications. The “backward compatibility” he describes would be helpful for bringing up legacy code while the programmers work on a native Nuvia implementation. 

This wouldn’t be the first time that a familiar CPU architecture rode sidecar on a custom CPU. Esperanto, for example, uses the RISC-V architecture as a sort of underlying chassis supporting the chip’s “real” instruction set, which is focused on AI. Similarly, Wave Computing builds its processor on top of MIPS (which it now owns), even though MIPS compatibility isn’t the real goal. 

The messaging from Nuvia is that the company is going after ultimate performance – no half measures. “We’re not scaling up from a mobile world,” says Carvill, an apparent swipe at ARM’s humble beginnings. Nuvia’s processor will be a beast, and any software compatibility with x86, ARM, PowerPC, or whatever will likely be incidental. It must be liberating to design an all-new CPU with ultimate performance in mind, while concerns about power consumption, cost, and software compatibility take a back seat. 

The top 1% live differently. Oprah Winfrey flies in English muffins from Napa Valley. Jeff Bezos probably has a Zen garden decorated with moon rocks. The datacenter Super Seven do things their own way because they can, including writing their own code and developing their own hardware. Nuvia is going after that market. Small, but huge. And extremely rich. Good luck to ’em.
