
From Servers to Smartphones

Virtualization Invades Embedded

The rack of heavy-iron blade servers whirrs away, dishing up data and mashing myriad algorithms for hundreds of users throughout the corporate campus.  Most of them have no awareness whatsoever of the physical location of the computing resources doing their dirty work.  Day by day and week by week the blades and racks are removed, replaced, upgraded, and re-tasked.  One capacity increase is accomplished by stacking in additional blades, the next by replacing an outdated rack with a new, smaller one with fewer blades and vastly more processing power.

Those of us who have become accustomed to the abstraction of centralized computing resources barely give a conscious thought to the role of virtualization.  With roots going back to the mainframe monsters of the 1960s, virtualization is a well-entrenched technology separating the physical computer doing the work from the virtual machine we access for our computing needs.  We accept that our data is safe on a machine somewhere, and we go about our business, visualizing a single repository for our data or a single processor doing our bidding.  We are vaguely, if at all, aware that virtualization software – perhaps something from VMware – is masking the mechanics behind moving all that responsibility around among various pieces of physical equipment.

Whip your advanced smartphone out of your pocket, however, and the idea of server virtualization seems far away.  The tiny ARMs  and microscopic memory resources on the system-on-chip that patches us through to Aunt Julie while pausing our Sudoku or Scrabble game and serving up a rich message snapshot from Sophie on her Hawaii vacation don’t bear much resemblance to the power-sucking, fan-cooled, heat-sink-havin’, Giga-flopping server stacks in the computing lab. 

Our little world of embedded computing is not so separate from the server room after all, however, if you look at the recent trend toward virtualization in embedded systems.  The idea of abstracting the specifics of physical hardware away from applications and operating systems is finding widespread use in the embedded space.  The reasons for adopting virtualization technology, though, are as diverse as the companies, tools, and software that serve the embedded market.  To get a grip on embedded virtualization and the ways it is being applied, it helps to define a few basic concepts.

Most server installations depend on the concept of virtualization.  The original idea behind this was to allow multiple copies of a single operating system to run on top of the virtualization layer, where each instance of the OS behaved as if it were installed on a stand-alone machine.  This allowed multiple users to share a single piece of physical hardware with the appearance that each user had his or her own dedicated machine.  This same concept is used in embedded systems to create logical partitions that separate instances of the OS performing different tasks.  This separation simplifies the coding of individual applications.
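
To make the partition idea concrete, here is a minimal sketch, in C, of the kind of static configuration table a small embedded hypervisor might keep for its guests.  The structure, field names, and addresses are invented for illustration and don't correspond to any particular product.

```c
/* Hypothetical sketch: a static partition table such as a small embedded
 * hypervisor might use to describe its guests.  All names, addresses, and
 * fields here are illustrative assumptions, not a real product's format. */
#include <stdint.h>

typedef struct {
    const char *name;        /* human-readable label for the guest           */
    uint32_t    mem_base;    /* start of the guest's physical memory window  */
    uint32_t    mem_size;    /* size of that window in bytes                 */
    uint32_t    entry_point; /* address where the guest OS begins execution  */
    uint8_t     priority;    /* scheduling priority of the partition         */
} partition_desc_t;

/* Two isolated guests sharing one physical processor. */
static const partition_desc_t partitions[] = {
    { "rtos_baseband",  0x40000000u, 0x00800000u, 0x40000000u, 1 }, /* protected */
    { "linux_userland", 0x48000000u, 0x08000000u, 0x48008000u, 5 }, /* user apps */
};
```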

In embedded systems, this concept is extended to many-to-one and one-to-many mappings of physical computing resources such as processors, memory, or storage to virtual resources.  In mobile computing, for example, security is a priority.  A typical attack installs an application on a device such as a smartphone that can reach resources on the baseband side, gaining unauthorized access to the network.  To prevent such attacks, virtualization lets us separate instances of user-level applications from protected partitions running critical system functions.
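
As a rough illustration of how that separation might be enforced, here is a hedged sketch of the access check a hypervisor's fault handler could apply when a guest touches a physical address.  The region table, partition IDs, and addresses are all hypothetical.

```c
/* Hypothetical sketch: rejecting a user-partition access to baseband
 * hardware.  The region table and partition IDs are invented for
 * illustration only. */
#include <stdbool.h>
#include <stdint.h>

enum { PART_APPS = 0, PART_BASEBAND = 1 };

typedef struct {
    uint32_t base, size;   /* physical address window        */
    uint8_t  owner;        /* partition allowed to access it */
} region_t;

static const region_t regions[] = {
    { 0x50000000u, 0x1000u, PART_BASEBAND }, /* modem control registers */
    { 0x60000000u, 0x1000u, PART_APPS     }, /* display controller      */
};

/* Called from the fault path: only the owning partition may touch a
 * region; every other access is refused. */
static bool access_allowed(uint8_t partition, uint32_t addr)
{
    for (unsigned i = 0; i < sizeof regions / sizeof regions[0]; i++) {
        if (addr >= regions[i].base && addr < regions[i].base + regions[i].size)
            return regions[i].owner == partition;
    }
    return false; /* unmapped addresses are always denied */
}
```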

Beyond security, reliability also motivates virtualization.  Again, in devices such as smartphones, the critical operating functions of the system need to be protected from potentially buggy and unreliable applications from unknown sources installed by the user.  Additionally, some applications may need to run on an RTOS under hard real-time constraints, while others run on an application operating system with more flexible timing requirements.  Some virtualization layers, such as VirtualLogix (formerly Jaluna) VLX, support this mix.
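
One simple way a hypervisor could honor that split is to always dispatch a runnable real-time guest ahead of the application OS.  The sketch below assumes a made-up vcpu_t structure and scheduling policy; real products use their own, more sophisticated schedulers.

```c
/* Hypothetical sketch: keeping an RTOS guest responsive next to a
 * general-purpose OS on the same core by dispatching the real-time
 * partition first whenever it has work pending.  The data structures
 * and policy are assumptions for illustration. */
#include <stdbool.h>
#include <stddef.h>

typedef struct {
    const char *name;
    bool        runnable;     /* set when an interrupt/event targets this guest */
    bool        is_realtime;  /* RTOS partitions preempt everything else        */
} vcpu_t;

static vcpu_t guests[] = {
    { "rtos_modem", false, true  },
    { "linux_apps", true,  false },
};

/* Pick the next guest to run: real-time guests win whenever they are ready. */
static vcpu_t *schedule_next(void)
{
    vcpu_t *fallback = NULL;
    for (unsigned i = 0; i < sizeof guests / sizeof guests[0]; i++) {
        if (!guests[i].runnable)
            continue;
        if (guests[i].is_realtime)
            return &guests[i];       /* RTOS always preempts */
        if (!fallback)
            fallback = &guests[i];
    }
    return fallback;                 /* may be NULL if nothing is ready */
}
```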

Most embedded virtualization approaches rely on a thin abstraction layer – usually called a “hypervisor.”  In some instances, the hypervisor mimics the exact interface of the underlying hardware, and therefore client operating systems and applications can run transparently on the virtualization layer as if it were a native core.  In other cases, the hypervisor has a different external interface from the real machine, and therefore the operating system must be ported to run on the hypervisor.  In still other instances, the hypervisor mimics the external behavior of a different underlying hardware platform, providing emulation or cross-platform execution of client operating systems.
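
The "ported OS" case is the one that changes guest code the most: instead of poking hardware registers directly, the guest calls into the hypervisor.  The following sketch shows the flavor of such a hypercall interface; the call numbers, names, and stubbed trap are assumptions, not any vendor's actual ABI.

```c
/* Hypothetical sketch of a ported (paravirtualized) guest: rather than
 * writing a UART register, it asks the hypervisor to do the work through
 * a "hypercall".  The numbering and trap mechanism are invented; a real
 * hypervisor defines its own ABI (for example, via a trap instruction). */
#include <stdint.h>

enum hypercall_no {
    HC_CONSOLE_PUTC = 1,   /* write one character to the virtual console */
    HC_YIELD        = 2,   /* give up the rest of this time slice        */
};

/* In a real system this would be an inline trap instruction; here it is a
 * stub so the sketch stays portable and compilable. */
static long hypercall(enum hypercall_no nr, long arg)
{
    (void)nr; (void)arg;
    return 0;
}

/* The guest's console driver after porting: no hardware access, just a
 * request to the hypervisor, which multiplexes the physical UART. */
static void guest_console_putc(char c)
{
    hypercall(HC_CONSOLE_PUTC, (long)c);
}
```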

Of course, adding a layer between the physical hardware and the operating system inevitably sounds our performance alarms.  Intuitively, we’d expect some overhead from the hypervisor that might slow down our system and sacrifice power or performance on the altar of virtualization.  Fortunately, this is not usually the case.  Some vendors (such as Open Kernel Labs) tout performance of Linux on their hypervisor that rivals (or in some cases surpasses) the performance on native hardware. 

Virtualization does have its issues, however.  In many approaches, for example, memory is hard-partitioned between virtual domains.  This can bloat the total memory requirement in your system, as each partition must be provisioned with the maximum memory it could require, since sharing across partitions is not possible.  Also, since the goal of virtual partitions is to prevent unintended (and/or hostile) communication between tasks, getting the communication you actually do want between partitions can be tricky.  You end up having to subvert your own security and stability measures to get your job done.  Finally, multi-tasking a processor or multi-processing a task is a complex scheduling problem.  Trusting that job to a thin virtualization layer can give you sub-optimal results if you are doing performance-critical programming.
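
For the communication problem, one common pattern is a small shared-memory mailbox that the hypervisor deliberately maps into exactly two partitions.  This is a minimal sketch under that assumption; the layout, flag protocol, and the omitted barrier/notify details are illustrative only.

```c
/* Hypothetical sketch: a one-way message channel between two partitions,
 * built on a shared-memory window the hypervisor maps into both.  The
 * layout and protocol are invented for illustration. */
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

#define SLOT_SIZE 64u

typedef struct {
    volatile uint32_t full;            /* 0 = empty, 1 = message waiting  */
    volatile uint32_t len;             /* number of valid bytes in data[] */
    uint8_t           data[SLOT_SIZE]; /* payload copied by the sender    */
} shared_mailbox_t;

/* Producer side (e.g., the application partition). */
static bool mailbox_send(shared_mailbox_t *mb, const void *msg, uint32_t len)
{
    if (len > SLOT_SIZE || mb->full)
        return false;                  /* receiver hasn't drained the slot */
    memcpy(mb->data, msg, len);
    mb->len  = len;
    mb->full = 1;                      /* publish; a real design also needs a
                                          memory barrier and a hypervisor
                                          "notify" call at this point */
    return true;
}

/* Consumer side (e.g., the protected partition). */
static uint32_t mailbox_recv(shared_mailbox_t *mb, void *out, uint32_t max)
{
    uint32_t n;
    if (!mb->full)
        return 0;                      /* nothing waiting */
    n = mb->len < max ? mb->len : max;
    memcpy(out, mb->data, n);
    mb->full = 0;                      /* hand the slot back to the sender */
    return n;
}
```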

A final advantage of virtualization is the protection of your code from obsolescence by the ever-changing landscape of processor and system architectures.  If your code is running successfully on a particular hypervisor and that hypervisor is ported to, for example, a new multi-core processor, you essentially get ported to the new platform for free (barring any real-time performance issues, of course).  In systems where software is shared across a variety of products with different underlying compute architectures, virtualization can dramatically reduce platform-specific porting headaches.

Companies like VirtualLogix, Open Kernel Labs, and Tenasys (which makes hypervisors for real-time Windows variants) are betting a lot on the adoption of virtualization by the embedded community.  Already, devices like mobile phones (which are typically far down the adoption curve with long development cycles) are showing up with virtualization as a key enabling technology.  It probably won’t be long before virtualization is an integral part of most embedded system designs.
