
Interview: RISC-V CTO Mark Himelstein

What’s it Like to Lead an Open-Source Hardware Group?

“One man’s constant is another man’s variable.” – programmers’ wisdom

Most engineering firms have a CTO. The Chief Technical Officer sets the technical direction, guides the research, gives the engineers their marching orders, and in between lectures strokes his beard and thinks deeply technical thoughts. 

But what does the CTO of an open-source consortium do? How do you manage a staff or set technical direction when there’s no staff to manage, no product deadlines, and no quarterly profit goals? 

RISC-V International’s brand-new CTO Mark Himelstein has the beard to stroke, and he seems to have a good idea of the group’s technical direction. He’s also got some interesting ideas about what makes for a good processor and how to get there. We talked last week about his role, the status of RISC-V, and where he wants to take it.

First, what does the CTO actually do? “I want to get as many RISC-V cores and chips deployed around the world as possible to a broad set of industries. I want to bring to closure the things we’ve started. I want to solve the gaps in some application areas. And I bring a holistic view to all of our projects.” 

Nothing in that description touched on instruction sets, hardware pipelines, superscalar dispatch, branch delays, cache management, or any other species of architectural arcana. Instead, he’s focused on “the big picture” and what makes RISC-V unique: its community. 

In most companies, the CTO would point toward future architectural decisions and maybe lead the design team. Not so at RISC-V. “We already have a strong team of architects,” he demurs. That’s not because he can’t. Himelstein was employee #45 at MIPS, he ran Solaris development at Sun, he founded Graphite Systems, and he was CTO at Quantum Corp. The man knows both hardware and software development. And being a switch-hitter is likely what got him the job. 

“ISA is important, but it’s just the tip of the iceberg,” says Himelstein. He doesn’t dictate the evolution of the RISC-V instruction set architecture. In fact, nobody does. It’s a community effort. RISC-V International’s members are mostly volunteers with day jobs elsewhere. “Members have customers, and customers have requirements.” If there’s enough demand for an ISA extension, improvement, or addition, then someone will propose it. But any such proposal has to include at least two members to champion it or it’s got no future. The idea is to raise a modest barrier just high enough that ISA extensions don’t get proposed frivolously. Someone has to really want it and be willing to lobby for it. Once that hurdle has been cleared, it goes on the waitlist of new changes for the larger group to consider. 

“The ISA is important, but it’s the smaller part [of the proverbial iceberg],” he says. The larger part? The ecosystem. “The ISA is useless without the ecosystem.” 

True dat. Plenty of engineers have designed a clever CPU architecture – doesn’t everyone? – but few of those designs ever see the light of day, because there’s no momentum behind them. Building a successful CPU isn’t about clever pipelines; it’s about industry support. Processor families are like social clubs: you’ve got to want to join.

RISC-V seems to have already passed that invisible line where third-party support starts to snowball. It’s edging toward the big time. But Himelstein sees much work to be done. In the commercial world, CPU vendors will often “encourage” software developers with large sacks of cash. Write a compiler for our new Processor X and we’ll subsidize the development for the first three years. Lather, rinse, repeat until the software base becomes self-sustaining – or not. 

RISC-V doesn’t have the necessary sacks of cash to spread around, however. The brute force approach is closed to Himelstein and his colleagues, so how does he motivate developers who might be on the fence? “I’m working on it,” he admits. “Look at Linux, at Hadoop, at Eclipse, at Apache… They grew up around the contributor model. Contributors to Hadoop are rock stars. It’s exciting. There’s cachet. It’s like being in an exclusive club. It’s hard to say how that happened. It just evolved.” 

He contrasts that process with seemingly similar open-source processors like OpenSPARC or OpenPOWER. Those examples are ex post facto open source, he says. They started out as proprietary commercial products (at Sun and IBM, respectively) and then backed into the open-source world after the fact. “They just hopped on the open-source train.” Nobody in those groups seems to have the same level of enthusiastic self-motivation that you see in, say, Hadoop or Linux circles, he says. “We want to be more like Linux or Hadoop.”

Doesn’t crowd-sourcing the CPU’s hardware evolution and its software support lead to fragmentation? How do you balance customization versus compatibility? “We have the benefit of history,” Himelstein points out. “We can define the base stuff to make Linux work, but also support advances like vector extensions. We’ve watched other architectures offer too many variations to the Linux guys.” 

Instead, RISC-V defines profiles, which are more than just a subset of the instruction set. Profiles define a target application and include memory ordering, device tree, ISA, and other details. “Think of it in a C++ or Java kind of way. There’s a base profile to provide application compatibility, and custom profiles can override the base profile.” 
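Himelstein’s class analogy maps loosely onto familiar object-oriented code. Below is a minimal C++ sketch of the idea – purely illustrative, since real RISC-V profiles are specification documents rather than classes, and the BaseProfile/VectorProfile names and default values here are invented for this example:

    #include <iostream>
    #include <string>

    // Illustrative sketch only: real RISC-V profiles are spec documents,
    // not C++ classes. Names and strings here are invented examples.
    struct BaseProfile {
        // Defaults chosen so mainstream application software keeps working.
        virtual std::string isa() const { return "rv64gc"; }            // base ISA string
        virtual std::string memoryOrdering() const { return "RVWMO"; }  // RISC-V weak memory ordering
        virtual ~BaseProfile() = default;
    };

    // A custom profile inherits the compatible defaults and overrides
    // only what its target application needs.
    struct VectorProfile : BaseProfile {
        std::string isa() const override { return "rv64gcv"; }  // adds the "V" vector extension
    };

    int main() {
        VectorProfile p;
        std::cout << p.isa() << ", " << p.memoryOrdering() << "\n";  // rv64gcv, RVWMO
    }

The point of the analogy: software written against the base profile keeps running, while a custom profile layers on capability without forking compatibility.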

Many CPU families seem to take a three-pronged approach, with low-, medium-, and high-performance variations of the base architecture (think Cortex-M, Cortex-R, and Cortex-A). Will there be three RISC-V profiles?

“There’s already more than three,” he laughs. “We want RISC-V to cover everything from IoT to HPC [high-performance computing]. We’ve already got databases running on RISC-V servers. The runway [for new developments] determines when things get done. IoT is easier to do, so it gets done sooner. Customers’ hardware-design cycles and product lifecycles are all different. RISC-V International can’t control that.” 

“It took ARM 30 years to get into laptops and supercomputers. We have none of the royalty issues that ARM had to grow through.” 

What’s next for RISC-V? “RISC-Six,” jokes Himelstein. 

Okay, maybe not. Prof. David Patterson was the force behind RISC-V, so whatever he decides to create next might logically become RISC-VI, but that’s not on anybody’s agenda. 

Mark Himelstein, like RISC-V International, takes a community-minded approach to RISC-V’s evolution. It will become whatever its members and users want it to become, growing and evolving over time based on continual feedback, not by enforcing one person’s vision. That seems to suit everyone involved. Anyone can contribute – or try to, anyway – and anyone can deconstruct it, modify it, or do unspeakable things to it with no permission required from anyone. The only thing you can’t do is call it RISC-V, unless it’s been approved by Himelstein’s group. “We do have some standards.” 

[updated Aug 6 to clarify the nonexistence of RISC-Six]

2 thoughts on “Interview: RISC-V CTO Mark Himelstein”

  1. I’m happy to inform you that I’m perfectly happy with RISC-V, and I am not working on RISC-VI.

    I’m currently working on a video to celebrate the 10th birthday of RISC-V, which started May 18, 2010. We’re interviewing people involved over its first decade and asking them to talk about the impact of RISC-V today and in the future.

    A common theme is that RISC-V is already a major force in the industry despite only starting to be shipped in products recently, and that it has a chance to become the dominant instruction set architecture in the future.

    The birthday of the IBM System/360 was April 7, 1964, and its descendants still drive the $10B per year mainframe market that shows no signs of ending.

    I predict RISC-V will be vital for as long as the IBM System/360 has been, and will become even more dominant as it matures.

  2. Is this just doing the same thing over again and expecting different results? Cache, branch prediction, out-of-order execution … ho-hum, yawn …

    “The ISA is important, but it’s the smaller part [of the proverbial iceberg],” he says. The larger part? The ecosystem. “The ISA is useless without the ecosystem.” Not only that, but justifying the development of a new chip is a big deal. The point is that every time the ISA is tweaked, the compiler, the debugger, and the assembler (if anyone is crazy enough to believe that anything will ever be written in assembly language) all have to be changed. Then, as always, verification.

    The load, add, store, branch (ISA) CPU has reached its limit. Heterogeneous computing is the future. And the best part is that an FPGA already has the necessary blocks available, so a new chip does not have to be developed with all the associated time and cost.

    And there is a compiler API available that does most of the heavy lifting, and, by the way, there is a debugger behind that compiler.

    Source code can be compiled and executed just as it would run on the FPGA (debugging included).
