
Is MIPS Poised to Take the RISC-V World by Storm?

Sometimes the world can be a funny old place. Take computer companies, for example. Some (like IBM) seem to have been around forever, and they also seem destined to stay around forever. Others flicker in and out of existence so quickly that most folks are never even aware they existed in the first place. Still others pop up and down as if they were engaged in a deranged game of corporate Whac-A-Mole.

Take MIPS, for example. If I were to attend a technical conference and proclaim, “MIPS is back!” half the people would say, “I didn’t even know they’d gone away,” while the other half would respond, “MIPS who?”

MIPS was one of the pioneers of Reduced Instruction Set Computer (RISC) architectures. Prior to RISC, computers were based on Complex Instruction Set Computer (CISC) architectures. The easiest way to think about this is that a single instruction can execute a series of low-level operations in a CISC machine. Some advantages promoted by proponents of CISC are programs with higher code density and smaller memory footprints that make fewer accesses to the computer’s main memory, all of which were particularly important in the days of yore when main memory was physically large, extremely expensive, and horrendously (by today’s standards) slow.

By comparison, RISC machines employ simple instructions, each of which performs a single low-level operation. As compared to CISC, this means programs have larger memory footprints and make more accesses to main memory (frowny face). Offsetting this, today’s memory is (relatively) cheap, and contemporary memory controllers use a multiplicity of sophisticated strategies to satisfy the data bandwidth requirements of present-day processors (happy face). But the real benefits of RISC are to increase the speed of each instruction, to provide predictable execution times, and to facilitate highly efficient multi-stage instruction pipelines (happy, happy face).
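The difference is easiest to see at the instruction level. As a simplified, illustrative sketch (not taken from any particular compiler’s output), incrementing a counter held in memory might be a single read-modify-write instruction on a CISC machine, but three single-purpose instructions on a RISC machine such as RISC-V:

```
; CISC (x86-style): one instruction reads, adds, and writes memory
add dword [counter], 1

# RISC (RISC-V): the same work split into three simple instructions,
# each doing one thing, each easy to pipeline
lw   t0, 0(a0)      # load the counter from memory (address in a0)
addi t0, t0, 1      # add 1 in a register
sw   t0, 0(a0)      # store the result back to memory
```

This is exactly the trade-off described above: the RISC version occupies more code bytes and touches memory more often, but each instruction completes in a predictable way that keeps a multi-stage pipeline full.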

It’s fair to say that there are myriad interpretations of the origins and evolution of RISC. It’s also fair to say that the term CISC was coined retroactively in contrast to RISC, one result of which is that any architecture that’s not RISC tends to have a CISC label slapped on it. Be this as it may, two seminal RISC projects were Stanford MIPS (where MIPS stands for Microprocessor without Interlocked Pipeline Stages) and Berkeley RISC. These were commercialized in the 1980s as the MIPS and SPARC systems.

MIPS Computer Systems (later MIPS Technologies and now simply MIPS) was founded in 1984 to commercialize the work being carried out at Stanford University on the MIPS architecture. I remember the excitement when the MIPS architecture was formally adopted by Silicon Graphics (SGI) for use in its 3D graphics workstations. As a company, MIPS has bounced back and forth between being privately and publicly owned so many times that it makes my head hurt. A couple of occasions that stand out in my mind are when MIPS was acquired by Silicon Graphics in 1992, when it was spun out in 2000, and when it was acquired by Imagination Technologies in 2012, after which things became… interesting.

The key point to note about all of this is that MIPS is known for designing awesome processors with sophisticated pipelining, multithreading, and hardware virtualization. As part of this, they have tremendous expertise in creating cache-coherent processor clusters, cache-coherent clusters of clusters, and (in order to avoid any chances of a cluster f…umble), their own cache-coherent Network-on-Chip (NoC).

Although people are perhaps not as familiar with the name MIPS as they are with the Arm, AMD, and Intel monikers, MIPS is still a major player, with billions of MIPS-based chips sold (quantities that today’s up-and-coming RISC-V companies can only dream of). The original architecture is still widely used in embedded systems and certain high-performance computing applications. For example, as much as Arm is known in automotive (they have awesome marketing), about 60% to 70% of the world’s Advanced Driver Assistance Systems (ADAS) run on MIPS processor cores.

In 2021, MIPS announced that it was transitioning to designing processor intellectual property (IP) cores based on the RISC-V instruction-set architecture (ISA). The great thing about an ISA is that it tells you what to do but not how to do it, thereby allowing companies to create processors that execute the same code but are differentiated in terms of things like power and performance.

When you think about it, MIPS + RISC-V is a marriage made in heaven, as it were. First, you can’t get more RISC than RISC-V. Second, RISC-V is gaining massive amounts of traction in the market, all supported by an exponentially exploding ecosystem. And third, you’d have to go a long way to find a team that knows more about RISC architectures than the team at MIPS.

The reason I’m waffling on about all this here is that—as I pen these words—MIPS is unveiling a new corporate brand at CES 2024. As part of the unveiling, the guys and gals at MIPS will have live demos showcasing real-time system deployments in their private hospitality suite at the Venetian hotel (you can request a private meeting by completing this form).

Meet the new MIPS at CES 2024 (Source: MIPS)

I was just chatting with Sameer Wasson, who is the CEO at MIPS. Prior to donning the undergarments of authority and striding the corridors of power at MIPS, Sameer spent 18 years at Texas Instruments (TI), most recently as Vice President, Business Unit (BU) Manager, Processors.

Sameer told me that MIPS is focusing on interesting problems that require high performance coupled with high data movement, such as artificial intelligence (AI), machine learning (ML), and high-performance computing (HPC). All their RISC-V cores are 64-bit. They’ve already announced a superscalar out-of-order core, they have an in-order core in development, and they are starting to work on plans for a high-performance microcontroller. They have 4- and 8-core clusters, clusters of clusters, and—as we previously mentioned—their own cache-coherent NoC. In the case of user-created IPs and third-party IPs, the MIPS RISC-V IP supports both Arm AXI (non-coherent) and Arm CHI (coherent) interfaces.

They’ve also paid particular attention to easing the task for existing customers based on the original MIPS architecture to transition to the new MIPS RISC-V architecture (a re-compile of the code is typically all that’s required). And they are leveraging their strengths with respect to pipelining, multithreading, and hardware virtualization.

As Sameer says, “Compute in an AI-centric world needs to evolve to efficiently handle data movement, and that’s where the MIPS RISC-V based cores excel. Our focus is helping our customers innovate and create systems that deliver the most efficient data movement, deterministic low latency, and real-time processing.”

Sameer closed by telling me about some of the exciting things that are going on at MIPS, including boosting the executive team (Drew Barbier and Brad Burgess—previously at SiFive—have just joined as VP of Products and Chief Architect, respectively), opening new offices in new cities, and building up the engineering and chip architecture teams. To be honest, if I were a younger man, I’d be tempted to apply for a position at MIPS myself. How about you? Do you have any thoughts you’d care to share on anything you’ve read here?

6 thoughts on “Is MIPS Poised to Take the RISC-V World by Storm?”

  1. The two Jim Turley articles pretty much cover things.

    RISC was a bad idea in the first place. Worse, it was/is so poorly defined it can be anything you want it to be except CISC (which is also poorly defined).

    As I remember, SiFive was the first to jump on the RISC-V bandwagon. Now it looks like a wheel or two fell off. It was such a big deal that it was open source, so users would not have to pay license fees à la Arm.

    Yeah, they could design their own, but there are design/debug costs as well as fab costs, and the ecosystem has to evolve.

    And Patterson got his Turing Award then vanished.

    Looks like the SiFive boat may be sinking.

    Meanwhile IBM is still selling processors that run Cobol.

    Another meanwhile there is an open source CSharp compiler API.

    So What? RISC/CISC are moot points made obsolete by an Abstract Syntax Tree and a Syntax Walker.

  2. Well, Karl, I can only find most of your comments uninformed.

    At the time, circa 1984 and 1985, MIPS (developing the R2000) had no access to fancy fabs like those owned and completely controlled by Intel and Motorola. They had to broker fab use with processes two steps behind those high-end fabs. This meant competing on performance using 100 times fewer transistor equivalents (transmission gates and inverters, basically). They did a yeoman’s job of it, I think.

    I personally visited MIPS and met with Dr. Hennessy for a couple of afternoons (personal 1:1 discussions) in early 1986. I was quite impressed. I was also developing accelerator boards for the PC. There is no question, with 20/20 hindsight, that what we achieved together at that time could NOT have been achieved any other way.

    It was in no way a “bad idea.” Quite to the opposite.

    I also worked on the BX chipset at Intel, in the late 1990s, by the way. My opinion isn’t an ignorant one.

    The problem that MIPS faced at the time was that Intel would incorporate all these RISC ideas into their Pentium Pro (which I worked on). The ROB (re-order buffer) inside the PPro and later P II, etc. is… RISC!

    1. You have your opinions and I have mine. One of mine is that “RISC” really means that compilers could not figure out how to compile anything more than Load/Add/Store/Branch at that time.

      If you, or anyone, can provide a definition of RISC, maybe we can continue. Out-of-order execution dates back to the IBM System/360 Model 91. That was before cache was invented and was based on interleaved memory, which disappeared when cache was invented.

      Now there is instruction cache and cache coherence, which makes no sense because programs are compiled and loaded at run time, never to be changed. There is address translation so instructions can be anywhere the memory allocator and operating system determine. There can be no self-modifying code as there was in the assembler days. And no, I don’t know where, if, or how much JIT is used.
