
Second Wind: The New Wind River

Software Company Looks Fighting Fit Following Its Intel Spinoff

“Meet the new boss; same as the old boss.” – Roger Daltrey

There’s a humongous redwood tree (Sequoiadendron giganteum) in Yosemite National Park called the Grizzly Giant. It’s over 200 feet tall and almost 100 feet in circumference. It’s astonishingly massive and probably two thousand years old. But the real point, says Wind River’s CEO Jim Douglas, is that it’s only the 25th largest tree in the world.

Like the tree, it’s a wonder that Wind River is still standing. The company has gone from big-league RTOS vendor, to Intel acquisition, and back to big-league independent software company. Given Intel’s spectacularly poor track record with acquisitions – some call it the Software Undertaker – it’s remarkable that there even was a Wind River left for Intel to sell. Like Jonah in the whale, the company got spat out and somehow survived.

But that’s all in the past. What’s ahead is a bold new vision of… wait for it… the Internet of Things.

Yes, Wind River is singing the IoT theme song, but, as usual, the company has its own spin on things. In a wide-ranging discussion that involved microprocessors, redwood trees, and electric motors, Jim Douglas and I talked about the company’s plans as a newly independent software provider.

First off, Wind River has almost 1200 employees, so they’re big. Not Microsoft or Oracle big, but big as embedded software companies go. And they’re still based on the island of Alameda, just a ferry ride from San Francisco and a stone’s throw from Berkeley.

Intel acquired the company in 2009 for $884 million, a big deal even by Intel’s standards. Nine years later, the two parted company on amicable terms. The tie-up never generated any particular synergy, nor did it spew out Intel-style levels of profit for the chipmaker. “Intel is probably the only hardware company where buying a software company is dilutive,” says CEO Douglas. “Most times, a software acquisition is a step up.”

The nine-year marriage wasn’t terrible (and obviously not fatal), but it was a bit… constraining. Wind River’s software – RTOS, tools, middleware, cloud services, etc. – had always been processor-neutral, running on PowerPC, MIPS, x86, ARM, and all the usual suspects. But customers couldn’t help feeling that x86 was always the preferred solution, and other CPU vendors felt nervous sharing their plans with an Intel subsidiary. “We still had access to other CPU vendors’ roadmaps, but we lost the right to have strategic CPU relationships,” said Douglas. Now that they’re independent again, CPU vendors are once again more open about their hardware plans and software desires.

Wind River built its reputation as an embedded player, albeit a premium one that avoided the bottom end of the MCU market. That’s still the strategy going forward, though now it means upping the game several notches with much faster hardware and more complicated software.

“We want to bring the principles of enterprise computing down to the edge,” says Douglas. That means lots of virtualization running on comparatively well-provisioned systems. Wind River is less interested in low-cost MCUs than it is in bigger systems that have been compartmentalized into real-time and non-real-time application areas. Douglas cites VMware as a role model. “VMware took an old idea and made it into a good commercial product to consolidate compute onto a single platform, anywhere. It was a breakthrough for the economics of the datacenter.”
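
To make the consolidation idea a little more concrete, here’s a minimal, purely hypothetical Python sketch of a single edge box whose CPU cores are partitioned between a real-time guest and a general-purpose guest. The class and field names are illustrative assumptions, not a Wind River product or API.

    # Hypothetical sketch: one consolidated edge node, its cores split between
    # a deterministic real-time partition and a general-purpose partition.
    # Requires Python 3.9+ for the built-in generic annotations.
    from dataclasses import dataclass, field

    @dataclass
    class Partition:
        name: str
        cores: list[int]          # CPU cores pinned to this partition
        realtime: bool = False    # True = deterministic, real-time guest

    @dataclass
    class EdgeNode:
        partitions: list = field(default_factory=list)

        def add(self, p: Partition) -> None:
            # Refuse overlapping core assignments: isolation is the whole point.
            used = {c for q in self.partitions for c in q.cores}
            overlap = used & set(p.cores)
            if overlap:
                raise ValueError(f"cores already allocated: {overlap}")
            self.partitions.append(p)

    node = EdgeNode()
    node.add(Partition("motor-control", cores=[0, 1], realtime=True))  # RTOS-style workload
    node.add(Partition("analytics", cores=[2, 3, 4, 5]))               # general-purpose workload
    print([p.name for p in node.partitions])   # ['motor-control', 'analytics']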

His reasoning is economic – the customer’s, not Wind River’s. A good 15% to 50% of its customers’ bill of materials goes to the processor, says Douglas. Yet that equipment often has to last 15–20 years in the field. Running critical industrial, automotive, or medical applications on 20-year-old embedded systems seems ludicrous, yet updating that hardware every few years isn’t economically viable, either. You can’t keep upgrading a million distributed microcontrollers or microprocessors to take advantage of the latest technology; that’s too disruptive (and expensive). Instead, Wind River sees industrial enterprises deploying bigger, more centrally located machines and virtualizing their software workload – just like datacenters do now.

Bigger, faster computers are also better equipped to handle machine learning (ML), which Douglas sees as a critical technology going forward. You can’t do ML on a cheap microcontroller or scattered across a thousand distributed nodes. Big iron is the solution, with big software running on it. His vision of “fluid” or “elastic” computing would allow applications to move up or down the layers of the topology, moving up for more performance or down to be closer to incoming data. Containers (Docker and its kin) and/or virtualization will smooth the transitions.
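
As a purely illustrative sketch of what “moving up or down layers of topology” could mean, here is a toy placement rule in Python that trades data proximity (latency) against compute capacity. The tiers, numbers, and selection rule are assumptions for illustration, not anything Wind River has published.

    # Hypothetical elastic-placement sketch: pick the lowest tier that meets a
    # workload's latency budget and still has enough compute headroom.
    TIERS = [
        # (name,        round-trip ms to the data,  relative compute)
        ("device",       1,    1),
        ("edge-server",  10,   20),
        ("cloud",        80,   500),
    ]

    def place(latency_budget_ms: float, compute_needed: float) -> str:
        """Return the closest tier that fits both constraints, falling back to
        the largest tier that still meets the latency budget."""
        candidates = [t for t in TIERS if t[1] <= latency_budget_ms]
        if not candidates:
            raise ValueError("no tier can meet the latency budget")
        for name, _, compute in candidates:
            if compute >= compute_needed:
                return name
        return candidates[-1][0]   # best effort within the budget

    print(place(5, compute_needed=1))      # -> "device"
    print(place(100, compute_needed=100))  # -> "cloud"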

We’ve all heard the stories about early electric motors in the 1900s, and how they changed the factory floor. But Douglas thinks there’s a different lesson to be learned there. At first, electric motors simply replaced steam-powered (or human- or horse-powered) machinery, one for one. The overall structure of the industrial plant stayed the same. It was just cleaner and less noisy.

Over time, however, the whole “paradigm” changed. Electric motors’ small size allowed them to be embedded in all kinds of equipment, even into hand-held saws, drills, and punches. Douglas sees parallels, not with these MCU “motors” distributed throughout the IoT, but with data connectivity. He makes a distinction between automation and autonomy. We already have automation, thanks to those motors and their modern semiconductor equivalents. What we need is autonomy, and that requires machine learning with the type of large-scale connectivity, performance, and software transportability that Wind River is developing.

It’s not a big pivot for the company, conceptually, but it is a big project nonetheless. (More likely, a series of projects.) In accordance with Newton’s Third Law of Software Development, something has to give. What is Wind River abandoning to free up resources for this new vision?

Helix Device Cloud, for one. The company’s IoT cloud service (formerly called Wind River Edge Management System) is getting the boot four years after it was announced. In its place, Wind River has decided to embrace the gaggle of existing cloud providers and become officially cloud-agnostic. Some Intel-specific projects are also on the chopping block, which isn’t surprising.

And how do Wind River’s customers feel about all of this? Are they buying into Douglas’s vision for the IoT future? “Software is eating the world,” he says, channeling Marc Andreessen. “Value is accruing at the higher levels of the software stack.” In other words, there’s real value in providing the complicated, high-level stuff and not competing for the (comparatively) generic RTOS, tool, cloud, or service business.

Large-scale industrial customers (think Procter & Gamble, ExxonMobil, etc.) “are clamoring for this,” he says. “But their supply chain doesn’t understand how to attribute value to the software and doesn’t have a business model to monetize it.” So, Wind River is doing a bit of evangelizing, bringing its hardware partners around to the new way of doing things. The end users want it; Wind River wants it. They just need to convince the guys in the middle. “There’s good energy there, but not big momentum yet.”
