
From Servers to Smartphones

Virtualization Invades Embedded

The rack of heavy-iron blade servers whirrs away, dishing up data and mashing myriad algorithms for hundreds of users throughout the corporate campus.  Most of them have no awareness whatsoever of the physical location of the computing resources doing their dirty work.  Day by day and week by week the blades and racks are removed, replaced, upgraded, and re-tasked.  One capacity increase is accomplished by stacking in additional blades, the next by replacing an outdated rack with a new, smaller one with fewer blades and vastly more processing power.

Those of us who have become accustomed to the abstraction of centralized computing resources barely give a conscious thought to the role of virtualization.  With roots going back to the mainframe monsters of the 1960s, virtualization is a well-entrenched technology separating the physical computer doing the work from the virtual machine we access for our computing needs.  We accept that our data is safe on a machine somewhere, and we go about our business, visualizing a single repository for our data or a single processor doing our bidding.  We are vaguely aware, if at all, that virtualization software – perhaps something from VMware – is masking the mechanics of moving all that responsibility around among various pieces of physical equipment.

Whip your advanced smartphone out of your pocket, however, and the idea of server virtualization seems far away.  The tiny ARMs and microscopic memory resources on the system-on-chip that patches us through to Aunt Julie while pausing our Sudoku or Scrabble game and serving up a rich message snapshot from Sophie on her Hawaii vacation don’t bear much resemblance to the power-sucking, fan-cooled, heat-sink-havin’, Giga-flopping server stacks in the computing lab.

Look at the recent trend toward virtualization, however, and our little world of embedded computing is not so separate from servers after all.  The idea of abstracting away the specifics of the physical hardware from applications and operating systems is finding widespread use in the embedded space.  The reasons for adopting virtualization technology, though, are as diverse as the companies, tools, and software that serve the market.  To get a grip on embedded virtualization and the ways it is being applied, it helps to define a few basic concepts.

Most server installations depend on the concept of virtualization.  The original idea behind this was to allow multiple copies of a single operating system to run on top of the virtualization layer, where each instance of the OS behaved as if it were installed on a stand-alone machine.  This allowed multiple users to share a single piece of physical hardware with the appearance that each user had his or her own dedicated machine.  This same concept is used in embedded systems to create logical partitions that separate instances of the OS performing different tasks.  This separation simplifies the coding of individual applications.
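To make the partitioning idea concrete, here is a minimal sketch, in C, of the kind of static partition table a small embedded hypervisor might keep.  Everything in it – the structure, the names, the addresses, and the sizes – is invented for illustration, not taken from any particular product.

```c
/*
 * A minimal, purely illustrative partition table for a hypothetical
 * embedded hypervisor.  Each guest OS instance is confined to its own
 * RAM window and set of virtual CPUs; the names, addresses, and sizes
 * here are invented for the sketch.
 */
#include <stdint.h>
#include <stddef.h>

struct partition {
    const char *name;      /* human-readable label                    */
    uintptr_t   ram_base;  /* start of the guest's private RAM window */
    size_t      ram_size;  /* size of that window in bytes            */
    unsigned    num_vcpus; /* virtual CPUs granted to this guest      */
    unsigned    priority;  /* scheduling priority (higher runs first) */
};

static const struct partition partition_table[] = {
    { "rtos_baseband", 0x80000000u, 16u * 1024u * 1024u, 1u, 10u },
    { "linux_apps",    0x81000000u, 96u * 1024u * 1024u, 2u,  5u },
};
```

The point of the table is simply that each guest sees only its own slice of the machine; nothing in one partition's window overlaps another's.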

In embedded systems, this concept is extended to many-to-one and one-to-many mappings between physical computing resources (processors, memory, or storage) and virtual resources.  In mobile computing, for example, security is a priority.  A typical attack is to install an application on a device such as a smartphone that can reach resources on the baseband side, gaining unauthorized access to the network.  To prevent such attacks, virtualization allows us to separate user-level applications from protected partitions running critical system functions.

Beyond security, reliability also motivates virtualization.  Again, in devices such as smartphones, the critical operating functions of the system need to be protected from potentially buggy and unreliable applications installed by the user from unknown sources.  Additionally, some applications on a machine may need to run on an RTOS with hard real-time constraints, while others run on an application operating system with flexible timing requirements.  Some virtualization layers, such as VirtualLogix (formerly Jaluna) VLX, support running both side by side.
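One rough way to picture how a hypervisor can honor both kinds of guests is a fixed-priority scheduler: the real-time partition runs whenever it has work, and the general-purpose OS soaks up whatever cycles remain.  The sketch below shows that idea only; the guest identifiers and helper functions are assumptions for the example, not any vendor's actual API.

```c
/*
 * Illustrative fixed-priority scheduling of two guests sharing one core:
 * the RTOS partition runs whenever it has work, and the general-purpose
 * OS partition fills in the remaining time.  The guest identifiers and
 * helper functions are assumptions for this sketch, not a real
 * hypervisor's API.
 */
enum guest_id { GUEST_RTOS, GUEST_APPS };

extern int  guest_runnable(enum guest_id g);   /* does the guest have pending work? */
extern void guest_run_slice(enum guest_id g);  /* run the guest until yield or tick */
extern void cpu_idle_wait(void);               /* sleep until the next interrupt    */

void schedule_guests_forever(void)
{
    for (;;) {
        if (guest_runnable(GUEST_RTOS))
            guest_run_slice(GUEST_RTOS);       /* real-time guest always wins */
        else if (guest_runnable(GUEST_APPS))
            guest_run_slice(GUEST_APPS);       /* best-effort guest           */
        else
            cpu_idle_wait();                   /* nothing to do: idle         */
    }
}
```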

Most embedded virtualization approaches rely on a thin abstraction layer – usually called a “hypervisor.”  In some instances, the hypervisor mimics the exact interface of the underlying hardware, and therefore client operating systems and applications can run transparently on the virtualization layer as if it were a native core.  In other cases, the hypervisor has a different external interface from the real machine, and therefore the operating system must be ported to run on the hypervisor.  In still other instances, the hypervisor mimics the external behavior of a different underlying hardware platform, providing emulation or cross-platform execution of client operating systems.
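The "ported OS" case is usually called paravirtualization: the guest replaces privileged operations with explicit calls into the hypervisor.  The fragment below sketches what such a hypercall wrapper might look like on an ARM core with virtualization extensions; the call number, the wrapper, and the interrupt-masking example are hypothetical, not any product's real ABI.

```c
/*
 * Sketch of a paravirtualized hypercall wrapper for a guest OS that has
 * been ported to run on a hypervisor (ARMv7 with virtualization
 * extensions assumed).  Instead of executing a privileged instruction
 * directly, the guest traps into the hypervisor with "hvc".  The call
 * number and wrapper are hypothetical, not any vendor's real ABI.
 */
#include <stdint.h>

#define HCALL_VIRQ_DISABLE 0x01u   /* hypothetical call: mask virtual IRQs */

static inline long hypercall1(uint32_t nr, uint32_t arg)
{
    register uint32_t r0 __asm__("r0") = nr;   /* call number in r0    */
    register uint32_t r1 __asm__("r1") = arg;  /* first argument in r1 */

    __asm__ volatile("hvc #0"                  /* trap to the hypervisor */
                     : "+r"(r0)
                     : "r"(r1)
                     : "memory");
    return (long)r0;                           /* result returned in r0 */
}

/* The ported guest calls this instead of a privileged interrupt-mask op. */
static inline void guest_irq_disable(void)
{
    (void)hypercall1(HCALL_VIRQ_DISABLE, 0u);
}
```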

Of course, adding a layer between the physical hardware and the operating system inevitably sounds our performance alarms.  Intuitively, we’d expect some overhead from the hypervisor that might slow down our system and sacrifice power or performance on the altar of virtualization.  Fortunately, this is not usually the case.  Some vendors (such as Open Kernel Labs) tout performance of Linux on their hypervisor that rivals (or in some cases surpasses) the performance on native hardware. 

Virtualization does have its issues, however.  In many approaches, for example, memory is hard-partitioned between virtual domains.  This can bloat the total memory requirement of your system, as each partition must be provisioned with the maximum memory it might need, since sharing across partitions is not possible.  Also, since the goal of virtual partitions is to prevent unintended (and/or hostile) communication between tasks, getting the communication you actually do want between tasks can be tricky.  You can end up having to subvert your own security and stability measures to get your job done.  Finally, multi-tasking a processor or multi-processing a task is a complex scheduling problem.  Trusting it to a thin virtualization layer can give you sub-optimal results if you are doing performance-critical programming.
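Deliberate communication across that isolation usually goes through a channel the hypervisor itself sets up, for example a shared-memory ring plus a "doorbell" notification.  The sketch below assumes such a hypervisor-mapped region (ipc_shared) and a hypothetical notify call (hyp_notify_peer); real products expose their own channel APIs.

```c
/*
 * Sketch of explicit inter-partition messaging over a shared-memory ring
 * that the hypervisor maps into both partitions.  The region layout and
 * the "doorbell" notification call are hypothetical; real products expose
 * their own channel APIs.
 */
#include <stdint.h>
#include <string.h>

#define RING_SLOTS 16u
#define SLOT_BYTES 64u

struct ipc_ring {
    volatile uint32_t head;                 /* next slot the producer fills */
    volatile uint32_t tail;                 /* next slot the consumer reads */
    uint8_t slot[RING_SLOTS][SLOT_BYTES];   /* fixed-size message payloads  */
};

extern struct ipc_ring *ipc_shared;         /* mapped by the hypervisor     */
extern void hyp_notify_peer(void);          /* hypothetical doorbell call   */

/* Queue one message for the other partition; 0 on success, -1 if full. */
int ipc_send(const void *msg, size_t len)
{
    uint32_t head = ipc_shared->head;

    if (len > SLOT_BYTES || ((head + 1u) % RING_SLOTS) == ipc_shared->tail)
        return -1;                          /* oversized message or ring full */

    memcpy(ipc_shared->slot[head], msg, len);
    ipc_shared->head = (head + 1u) % RING_SLOTS;  /* publish the message   */
    hyp_notify_peer();                            /* wake the other side   */
    return 0;
}
```

A real implementation would also need memory barriers before publishing the head index, but the sketch captures the key point: the only doorway between partitions is one the hypervisor deliberately opened.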

Back on the plus side, a final advantage of virtualization is the protection of your code from obsolescence in the ever-changing landscape of processor and system architectures.  If your code runs successfully on a particular hypervisor and that hypervisor is ported to, for example, a new multi-core processor, you essentially get ported to the new platform for free (barring any real-time performance issues, of course).  In systems where software is shared across a variety of products with different underlying compute architectures, virtualization can dramatically reduce platform-specific porting headaches.

Companies like VirtualLogix, Open Kernel Labs, and TenAsys (which makes hypervisors for real-time Windows variants) are betting a lot on the adoption of virtualization by the embedded community.  Already, devices like mobile phones (typically far down the adoption curve, with long development cycles) are showing up with virtualization as a key enabling technology.  It probably won’t be long before virtualization is an integral part of most embedded system designs.
