
From Servers to Smartphones

Virtualization Invades Embedded

The rack of heavy-iron blade servers whirrs away, dishing up data and mashing myriad algorithms for hundreds of users throughout the corporate campus.  Most of them have no awareness whatsoever of the physical location of the computing resources doing their dirty work.  Day by day and week by week the blades and racks are removed, replaced, upgraded, and re-tasked.  One capacity increase is accomplished by stacking in additional blades, the next by replacing an outdated rack with a new, smaller one with fewer blades and vastly more processing power.

Those of us who have become accustomed to the abstraction of centralized computing resources barely give a conscious thought to the role of virtualization.  With roots going back to the mainframe monsters of the 1960s, virtualization is a well-entrenched technology separating the physical computer doing the work from the virtual machine we access for our computing needs.  We accept that our data is safe on a machine somewhere, and we go about our business, visualizing a single repository for our data or a single processor doing our bidding.  We are vaguely, if at all, aware that virtualization software – perhaps something from VMware – is masking the mechanics behind moving all that responsibility around among various pieces of physical equipment.

Whip your advanced smartphone out of your pocket, however, and the idea of server virtualization seems far away.  The tiny ARMs and microscopic memory resources on the system-on-chip that patches us through to Aunt Julie while pausing our Sudoku or Scrabble game and serving up a rich message snapshot from Sophie on her Hawaii vacation don’t bear much resemblance to the power-sucking, fan-cooled, heat-sink-havin’, Giga-flopping server stacks in the computing lab.

Our little world of embedded computing is not so separate from servers after all, however, if you look at the recent trend toward virtualization in embedded systems.  The idea of abstracting away the specifics of physical hardware from applications and operating systems is finding widespread use in the embedded space.  The reasons for adopting virtualization technology are as diverse as the companies, tools, and software that serve that space.  To get a grip on embedded virtualization and the ways it is being applied, it might help to define a few basic concepts.

Most server installations depend on the concept of virtualization.  The original idea behind this was to allow multiple copies of a single operating system to run on top of the virtualization layer, where each instance of the OS behaved as if it were installed on a stand-alone machine.  This allowed multiple users to share a single piece of physical hardware with the appearance that each user had his or her own dedicated machine.  This same concept is used in embedded systems to create logical partitions that separate instances of the OS performing different tasks.  This separation simplifies the coding of individual applications.
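To make the partitioning idea concrete, here is a minimal sketch in C of the kind of static partition table a hypervisor might be configured with.  The structure, field names, and the two example partitions are hypothetical and not drawn from any particular product; a real hypervisor would back such a table with MMU programming and a guest scheduler.

/* Hypothetical static partition table -- a sketch of how a hypervisor
 * might describe the logical partitions it hosts.  The layout and field
 * names are illustrative only. */
#include <stdint.h>
#include <stdio.h>

typedef struct {
    const char *name;      /* guest OS instance hosted in this partition */
    uint32_t    mem_base;  /* start of the partition's physical memory   */
    uint32_t    mem_size;  /* size of that memory window, in bytes       */
    uint8_t     cpu_share; /* percentage of processor time guaranteed    */
} partition_t;

static const partition_t partitions[] = {
    { "rtos_baseband", 0x80000000u, 16u * 1024 * 1024, 60 },
    { "linux_apps",    0x81000000u, 48u * 1024 * 1024, 40 },
};

int main(void)
{
    /* From each guest's point of view, its entry looks like a complete,
     * stand-alone machine with its own memory and share of the CPU. */
    for (size_t i = 0; i < sizeof partitions / sizeof partitions[0]; i++) {
        printf("%-14s base=0x%08x size=%u MB cpu=%u%%\n",
               partitions[i].name,
               (unsigned)partitions[i].mem_base,
               (unsigned)(partitions[i].mem_size / (1024 * 1024)),
               (unsigned)partitions[i].cpu_share);
    }
    return 0;
}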

In embedded systems, the partitioning concept is extended to many-to-one and one-to-many mappings of physical computing resources such as processors, memory, or storage to virtual resources.  In mobile computing, for example, security is a priority.  A typical attack is to install an application on a device such as a smartphone that can reach resources on the baseband side, gaining unauthorized access to the network.  To prevent such attacks, virtualization allows us to separate instances of user-level applications from protected partitions running critical system functions.
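As a rough illustration of how that separation might be enforced, the sketch below shows a hypothetical address-range check a hypervisor could perform before granting a guest a mapping.  The region layout and function names are invented for this example; in a real device the enforcement sits behind the MMU or MPU under hypervisor control.

/* Sketch of the kind of access check a hypervisor might perform before
 * mapping memory for a guest.  Structures and names are hypothetical. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef struct {
    uint32_t base;  /* start of the region owned by a partition */
    uint32_t size;  /* length of the region, in bytes           */
} region_t;

/* One protected region per partition: user applications vs. baseband. */
static const region_t app_region      = { 0x81000000u, 48u * 1024 * 1024 };
static const region_t baseband_region = { 0x80000000u, 16u * 1024 * 1024 };

static bool region_allows(const region_t *r, uint32_t addr, uint32_t len)
{
    return addr >= r->base && (addr + len) <= (r->base + r->size);
}

int main(void)
{
    /* A user-level application asking for an address inside the baseband
     * partition is simply refused the mapping. */
    uint32_t request = 0x80000400u;  /* lands in baseband space */
    if (!region_allows(&app_region, request, 256)) {
        printf("mapping denied: 0x%08x is outside the app partition\n",
               (unsigned)request);
    }
    (void)baseband_region;  /* the baseband side gets the mirror check */
    return 0;
}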

Beyond security, reliability also motivates virtualization.  Again, in devices such as smartphones, the critical operating functions of the system need to be protected from potentially buggy and unreliable applications from unknown sources installed by the user.  Additionally, some applications running on a machine may need to operate on an RTOS in real time, while others may run on application operating systems with flexible time constraints.  Some virtualization layers, such as VirtualLogix (formerly Jaluna) VLX, support this mixed-OS capability.
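One simple way to picture mixing a real-time guest with an application OS guest is a strict-priority rule: the RTOS side runs whenever it has work pending, and the application OS gets whatever processor time is left over.  The sketch below states that rule in C purely for illustration; it is an assumption of this article's example, not the scheduling policy of VLX or any other product.

/* Illustrative strict-priority arbitration between a real-time guest
 * and an application OS guest.  Names and policy are hypothetical. */
#include <stdbool.h>
#include <stdio.h>

typedef enum { GUEST_RTOS, GUEST_APP_OS } guest_t;

/* The real-time guest always preempts the application OS; the
 * application OS only runs when the RTOS has nothing pending. */
static guest_t pick_next_guest(bool rtos_work_pending)
{
    return rtos_work_pending ? GUEST_RTOS : GUEST_APP_OS;
}

int main(void)
{
    printf("RTOS busy -> guest %d runs\n", (int)pick_next_guest(true));
    printf("RTOS idle -> guest %d runs\n", (int)pick_next_guest(false));
    return 0;
}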

Most embedded virtualization approaches rely on a thin abstraction layer – usually called a “hypervisor.”  In some instances, the hypervisor mimics the exact interface of the underlying hardware, so client operating systems and applications can run transparently on the virtualization layer as if it were a native core.  In other cases, the hypervisor presents a different external interface from the real machine, and the operating system must be ported to run on it (an approach commonly known as paravirtualization).  In still other instances, the hypervisor mimics the external behavior of a different underlying hardware platform, providing emulation or cross-platform execution of client operating systems.
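In the ported case, the guest OS typically replaces its privileged operations with explicit calls into the hypervisor.  The fragment below sketches what such a hypercall wrapper might look like; the hypercall numbers and the stubbed hypercall() function are hypothetical stand-ins for a real trap into the virtualization layer.

/* Sketch of a paravirtualized interface: instead of executing a
 * privileged instruction directly, the ported guest calls the
 * hypervisor.  Numbers and names here are invented for illustration. */
#include <stdint.h>
#include <stdio.h>

enum { HCALL_DISABLE_IRQ = 1, HCALL_ENABLE_IRQ = 2 };

/* In a real system this would trap into the hypervisor (for example via
 * a software interrupt); here it is stubbed out so the sketch runs. */
static int hypercall(uint32_t number)
{
    printf("hypercall %u handled by the hypervisor\n", (unsigned)number);
    return 0;
}

/* The ported guest swaps its native "disable interrupts" primitive for a
 * call through the hypervisor's interface. */
static void guest_disable_interrupts(void)
{
    hypercall(HCALL_DISABLE_IRQ);
}

int main(void)
{
    guest_disable_interrupts();
    return 0;
}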

Of course, adding a layer between the physical hardware and the operating system inevitably sounds our performance alarms.  Intuitively, we’d expect some overhead from the hypervisor that might slow down our system and sacrifice power or performance on the altar of virtualization.  Fortunately, this is not usually the case.  Some vendors (such as Open Kernel Labs) tout performance of Linux on their hypervisor that rivals (or in some cases surpasses) the performance on native hardware. 

Virtualization does have its issues, however.  In many approaches, for example, memory is hard-partitioned between virtual domains.  This can bloat the total memory requirement in your system, as each partition must be provisioned with the maximum memory it might require, since sharing across partitions is not possible.  Also, since the goal of virtual partitions is to prevent unintended (and/or hostile) communication between tasks, getting the communication you actually want between tasks can be tricky.  You end up having to subvert your own security and stability measures to get your job done.  Finally, multi-tasking a processor or multi-processing a task is a complex scheduling problem.  Trusting that job to a thin virtualization layer can give you sub-optimal results if you are trying to do performance-critical programming.
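When communication between partitions is genuinely needed, the usual answer is an explicitly sanctioned channel rather than a hole punched in the wall, often a small region of shared memory set up by the hypervisor.  The sketch below shows a toy single-slot mailbox of that flavor; the names and layout are invented, and a plain static object stands in for the hypervisor-mapped shared region.

/* Toy single-slot mailbox illustrating sanctioned communication between
 * two otherwise isolated partitions.  Names are hypothetical, and a
 * static object stands in for memory the hypervisor would map into both
 * partitions. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

typedef struct {
    volatile uint32_t full;     /* 0 = empty, 1 = message waiting     */
    char              data[64]; /* payload area visible to both sides */
} mailbox_t;

static mailbox_t shared_mailbox;

static int mailbox_send(mailbox_t *mb, const char *msg)
{
    if (mb->full)
        return -1;                          /* receiver hasn't drained it */
    strncpy(mb->data, msg, sizeof mb->data - 1);
    mb->data[sizeof mb->data - 1] = '\0';
    mb->full = 1;                           /* publish after the payload  */
    return 0;
}

static int mailbox_receive(mailbox_t *mb, char *out, size_t len)
{
    if (!mb->full)
        return -1;                          /* nothing to pick up         */
    strncpy(out, mb->data, len - 1);
    out[len - 1] = '\0';
    mb->full = 0;
    return 0;
}

int main(void)
{
    char buf[64];
    mailbox_send(&shared_mailbox, "hello from the RTOS partition");
    if (mailbox_receive(&shared_mailbox, buf, sizeof buf) == 0)
        printf("app partition received: %s\n", buf);
    return 0;
}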

A final advantage of virtualization is the protection of your code from obsolescence by the ever-changing landscape of processor and system architectures.  If your code is running successfully on a particular hypervisor and that hypervisor is ported to, for example, a new multi-core processor, you essentially get ported to the new platform for free (barring any real-time performance issues, of course).  In systems where software is shared across a variety of products with different underlying compute architectures, virtualization can dramatically reduce platform-specific porting headaches.

Companies like VirtualLogix, Open Kernel Labs, and TenAsys (which makes hypervisors for real-time Windows variants) are betting a lot on the adoption of virtualization by the embedded community.  Already, devices like mobile phones (which are typically far down the adoption curve with long development cycles) are showing up with virtualization as a key enabling technology.  It probably won’t be long before virtualization is an integral part of most embedded system designs.
