
Are RTOSes Dead?

The world no longer needs RTOSes. Linux can do it all.

Or so it was suggested at the recent RTECC conference, where none other than renowned embedded Linux booster Jim Ready of MontaVista (now Cavium) gave a presentation suggesting that, at this point, there is really no need for anything but Linux.

While that seemed more extreme than I had been thinking, this coincided with some mulling I had been doing as to whether the role of RTOSes was changing. It provided a perfect opportunity to check in with a couple of RTOS guys to see if they agreed that their days were numbered. You might think you know their answers – yet you might be surprised.

Before we go there, however, let’s review the landscape to understand the context of this discussion. Multicore is becoming more prevalent in embedded applications, but, as in the case of multicore applications processors in smartphones, the cores are often managed as a group with a single OS (often Linux) operating in the symmetric multiprocessing (SMP) model.

SMP means that, to the OS, all cores look alike and have access to the same resources. SMP OSes have scheduling capabilities that, to first order, allow any task to be scheduled on any core at the whim of the OS. There are some knobs that can be turned to impact this, but, for the most part, you end up treating the scheduler like some mysterious man behind the curtain who giveth tasks and taketh them away again using occult divinations to determine what goes where when. In other words, it’s opaque.

Lots of applications just can’t work that way, and there are a couple of reasons why. One is the classic requirement for hard real-time response, where you have to be able to prove deterministically that certain events can occur with hard, guaranteed deadlines. An inscrutable scheduler does not fit that description.

The other reason is one of efficiency and performance: when you’re trying to squeeze every last cycle out of a system, you want to take control of the timing yourself and have the scheduler back off. This is the case with packet-processing systems, where the hardcore “fast path” that has to handle the majority of “typical” packets can’t mess about with shifting duties from core to core; the modus operandi is, “Shut up and work.” Having a meddling OS that keeps interrupting to see if anything needs scheduling – when you’ve set things up specifically not to move about – or performing any number of other “services” that you’d rather do without simply wastes precious performance.
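
On Linux, getting the scheduler to "back off" from a fast-path core is typically done at boot time. A sketch of the usual kernel command-line parameters (the core numbers here are illustrative):

```
# Fence cores 2 and 3 off for fast-path threads:
#   isolcpus  - keep the scheduler from placing ordinary tasks there
#   nohz_full - stop the periodic scheduler tick on those cores
#   rcu_nocbs - move RCU callback processing off those cores
isolcpus=2,3 nohz_full=2,3 rcu_nocbs=2,3
```

The fast-path threads are then explicitly pinned onto the fenced-off cores, which otherwise sit idle.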

The solution to these situations is to operate in an asymmetric multiprocessing (AMP) model. Here, different cores can run different OSes. You might have a couple of cores operating together as SMP under one OS and then other cores running a smaller RTOS or even no OS at all (so-called bare-metal operation, often assisted by a slim executive for bare-minimum services).

This is the status quo. The new proposal is that Linux can now cover all of the use cases together. Rather than having more than one OS, you can have a single installation of Linux and then tune how each core operates. This is based on two principles:

  • Linux can now perform as a real-time OS. Jim is quite passionate about this, displaying obvious frustration that this has been the case for 10 years, and yet still people dismiss the notion of real-time Linux.
  • MontaVista’s relatively new Bare-Metal Engine (BME), of which little information is available online, can now provide that thin layer of minimal services needed to support bare-metal applications.

This means that, for new applications, there’s pretty much no reason to go anywhere besides Linux.

I wanted to test this hypothesis with two fellows who have a stake in the outcome: John Carbone, CEO of Express Logic, and David Kleidermacher, CTO of Green Hills. Both companies provide RTOSes, among other things.

So let’s get the biggest surprise out of the way right away. When I posed the above proposition, I was expecting a hearty and outraged refutation. In fact, both agree that Linux has improved mightily and may, in fact, be able to operate in real-time applications that wouldn’t have been possible in the past.

So do they think that their own products are going away? Of course not. There’s nuance here that we need to tease out.

First of all, there’s real time, and then there’s hard real time. And then there’s “prove it” real time. There appears to be little doubt that Linux can handle some real-time applications. But it depends on the deadline and how hard it is. David referred to LTE operations in wireless base stations: they have to be able to handle a specified number of packets in a fixed time to avoid dropping calls. This can be hard to achieve with Linux.

But here’s the real clincher: for the most demanding applications, it’s not enough to prove empirically that you can meet the deadlines. You can run for weeks without failing, but if you can’t show a calculation that proves deterministically that you can always meet the deadline, it’s no good. And that’s the trouble with Linux: it’s so complex that it would be virtually impossible to do such a calculation. While both David and John believed that, anecdotally, Linux might work in some apps, they had never seen a calculation settling the question once and for all.

And even if you did settle the question, it would all change with the next update.

With respect to overhead, John maintains that, especially with small, cheap processors, you really need an RTOS that’s been purpose-built with a small footprint and low overhead. While the high-end Cavium chips can certainly support Linux, he suggests that the lowest-end devices would have a harder time of it. He personally hasn’t seen much uptake of the BME option.

David takes things further, painting a picture of a changing environment that looks like neither the old model nor Jim’s vision. With increasing demands for security and stability, characteristics de-prioritized in many embedded systems to date, he sees virtualization playing an increasing role. For instance, you might have a ten-line hypervisor thread that does nothing but watch Linux and restart it if it crashes.

This also fits the model that companies like ARM are touting with technologies like TrustZone, where you have a secure minimal trusted compartment with its own OS surrounded by fortress walls and with a separate OS for the plebeians where all manner of shenanigans might take place that can’t disturb the sacred relics protected behind ten feet of stone.

David could see, for example, a simple 100-line security agent that doesn’t operate under Linux and therefore lies beyond its grasp. Virtualization underlies this, compartmentalizing the zones and keeping them from mucking each other up.

So, while everyone gives due credit to the improvements made to Linux, there is no unanimity in hailing a new Linux-only embedded paradigm.

And that probably comes as no surprise.

15 thoughts on “Are RTOSes Dead?”

  1. Bryon –

    Very nice article. I was also surprised by the reaction of Express Logic and Green Hills. However, there is a long-term market for the real-time operating systems offered by these companies. You mention “prove-it” hard real-time requirements; another place where these systems have a long-term future is in medical devices and other safety-critical systems, where “prove-it” extends beyond real-time performance to the overall reliability of the system.


  2. The argument for using Linux rather than an RTOS is much more one of convenience than of technical merit. Although even the convenience aspect can be debated: how much effort is needed to get Linux alive on a new target?
    The main reason why RTOSes will not go away is that, in the end, hard, predictable, safe, and secure real-time behavior is not within Linux’s reach.
    1. Safety.
    DO-178C now requires that there be not a single line of dead or deactivated code and that every line of code be traceable back to a requirement. Only a small RTOS, or code generation that removes unused code, can provide that. We are talking kilobytes, not megabytes, here. The smaller the code, the less there is to verify and certify.
    2. Security.
    Even embedded systems now have this issue. Remember the Stuxnet worm: it was really a hack into the WinCC front end of the PLC controllers. Linux has the same issue. Penetrating a statically linked RTOS, on the other hand, is a lot harder, if not impossible.
    3. Efficiency.
    What’s the interrupt response time? On a 50 MHz ARM, for example, it can be hundreds of microseconds with Linux. Use a small RTOS and it can be sub-microsecond.
    How much cache does such an embedded processor have? Kilobytes. If you have cache misses, the penalty can be hundreds of wait states. Small code size pays off. With an RTOS, a 20 MHz chip with 64 KB of RAM might do the job. With Linux, think 500 MHz and 1 GB of RAM.
    So, if you need low energy consumption, go for the RTOS.
    4. Hard real-time.
    While Linux can be used for soft real-time (response times follow a Poisson distribution), this is not acceptable for hard real-time. The latency curve must have a strict vertical edge, whatever the application does.

    Many more arguments can be made. But how can the convenience of Linux (or WinCE) still be exploited? Dedicate a node to it and have it transparently communicate with the real-time nodes, where a real RTOS is used. This gives the best of both worlds.
    A message like “Now you can use Linux for everything” is actually a big disservice to the professionals in the embedded market. Engineers (and managers!) who follow this marketing slogan rather than the facts are bound to hit the wall when they least expect it.

    Eric Verhulst

  3. Real-time has many different definitions, and Linux certainly satisfies “soft real-time” if you apply one of the real-time patches (MontaVista’s, the PREEMPT_RT patch set, etc.).

    You can run Linux as a layer on top of a real-time OS as well (as is done with RTAI, for example) but there the Linux software is NOT generally run in real-time mode, only the software inside the RTOS is.

    Now, we get into the nitty-gritty of how we define harder levels of real-time.

    Generally, “hard real-time” provides guarantees of X amount of time in a given time window. After that, you can either be more deterministic on the start/end times or reduce the window size in which the guarantee will still hold true. These are not the same thing.

    As noted in TFA, it is exceptionally hard to actually provide rigid guarantees with an OS as complex as Linux. And what is meant by a guarantee, anyway?

    Remember, Linux – and many other OSes – support nanosecond-resolution clocks and can therefore measure endpoints and runtimes in nanoseconds. You can, obviously, define windows in nanoseconds too.

    Can Linux absolutely guarantee that those windows and runtimes will be honoured at the nanosecond level over a long timeframe? Probably not. I would argue that you’d need something designed specifically to achieve that goal in order to get that kind of precision.

    That does not mean that you couldn’t have a general-purpose OS with that kind of precision, merely that it would have to be designed in, not bolted on.

    Could you bolt on a hard-enough real-time for most purposes? Perhaps. I haven’t heard much from the Carrier Grade Linux group for a while, and you will definitely need that work to achieve long-term stability and predictability across the different kernel modules.

    I have also seen very little work on the PPS (pulse-per-second) module. The original code went stagnant some years back, and I’ve seen very little alternative work since then. PPS is a synchronization mechanism that prevents clock drift – important because you can’t make guarantees over a long time if drift would violate those guarantees within the timeframe you are looking at.

    If the PPS and Carrier Grade issues are sorted out, then mid-grade hard real-time is certainly achievable with Linux, but high-grade remains solely the province of specialized OSes. For now.

  4. If it weren’t for the bloated code footprint caused by the excessively partitioned software architecture, the hard real-time requirements, *AND* the GPL requirements mandated by Linux … sure. Realistically, not a chance.

    I have a number of small RTOS projects on things like smaller AVR32 chips that will NEVER be Linux-friendly because of the CPU performance, the lack of a full MMU, and the limited RAM/flash in the design. And I’m certain our clients do not want to hand full sources over to customers (and competitors) after bearing the R&D costs.

  5. The biggest force working against Linux in embedded systems is the GPL. Where one could achieve adequate real-time behavior in a device driver, this option is often blocked by the GPL, forcing proprietary IP (the bread and butter of many companies) into user land.

    On the other hand, embedded designers are often able to push real-time sub-systems into VHDL or to a smaller slave processor which most likely runs a small RTOS.

    Add to that the fact that small, free, or low-cost RTOSes like FreeRTOS are perfectly fine for many systems.

    This leads me to conclude that 1) small RTOSes will be around for a long time, and 2) commercial RTOS vendors are feeling the squeeze.

    Look for a shakeout in the market and try to pick an RTOS that will be around 10 years from now. Good luck!

  6. I’m going to take a different direction:

    I think that traditional small RTOSes will lose (some) ground to deterministic hardware that supports multitasking. It saves the headache of using an RTOS, and possibly the license costs too.

    A prime example is the XMOS chips (which Eric Verhulst of the comment above is likely familiar with), which support eight concurrent tasks per core. They are a sort of barrel processor, and each task gets a fair share of the available processing power, 100% guaranteed in hardware.
    Together with an event-driven architecture, these chips are able, for example, to implement an Ethernet MAC fully in software while still running less real-time tasks, such as a small web server, on the remaining hardware threads.

