
ESC Revisited

Connecting Dots in the Chaos

As promised, we want to bring you the best, most important messages from the show distilled down into punchy, trip-report-worthy text bites, suitable not only for framing, but also for cutting, pasting, and turning in to your boss as plagiarized proof positive that you were out there gathering the key information that will propel your company’s embedded systems development for the next year and, just maybe, justifying that $200 bottle of wine carefully camouflaged on your trip report. Remember, if you can’t spell, edit in a few typos – for realism – with your own typical mistakes. Our copy editor is really good.

—————————— cut here for boss-consumable section ——————————

This year, a number of technology trends were in evidence at the Embedded Systems Conference. Starting on the software development side, the growth of Eclipse-based embedded development continued at an astronomical rate. There are now two major (generalized) camps in embedded software development – Eclipse and Microsoft. Eclipse seems to want to be the embedded environment for the rest of us – the people’s programming tools. Based on open-source development but powered by major engineering and IP investments from large (in embedded industry terms) embedded software companies, Eclipse promises to level the competitive landscape, reducing the distance between giant, well-funded suppliers like Microsoft and the many mid- and smaller-sized (in Wall Street terms) companies engaged in marketing embedded development tools.

The theory on the Eclipse side is that the “cost of entry” engineering work can be done once and shared as open source, allowing these smaller companies to get straight to the “value added” bits so they can make maximum profit on a minimum engineering investment. One issue, initially at least, is that Eclipse has subsumed a number of capabilities that were profitable businesses for many of these suppliers, forcing them to drop cash-cow products and instead support Eclipse-based tools offering the same capabilities. Overall, though, most of the players seem happy with the tradeoff, with vendors like Wind River investing both marketing and engineering capital in the effort at a convincing rate. Almost every supplier of embedded software tools now boasts at least some connection to Eclipse.

On the Microsoft side, the proposition is complementary. While the embedded software market may seem large to a lot of us, it’s peanuts compared with the current size of the desktop and enterprise markets dominated by the world’s largest software company. Their software development tools and methodologies are de facto standards across most of the software-creating world, and embedded applications are just one small subset of the space spanned by their technology. If you’re starting a project and want to hire software developers, more of the world will be trained on Microsoft’s methodology than on any other. One big disadvantage (when compared to Eclipse) is that Microsoft doesn’t have legions of open-source developers around the globe selflessly contributing their creations to the platform. On the other hand, one significant advantage Microsoft enjoys is that they don’t have legions of open-source developers around the globe selflessly contributing their creations to the platform. Go figure.

One notch down from software development tools is the highly competitive RTOS arena. Wind River, Microsoft, Green Hills, Mentor Graphics (repeat it three times; their embedded division no longer goes by “ATI”), Express Logic, LynuxWorks, and many others wrestle for seats and sockets with a variety of specialties and business models. To sort these quickly, there are two basic technical classes of embedded OS – hard real-time and regular, not-so-hard real-time. There are also at least three basic business models – open source, proprietary royalty-free, and proprietary royalty-based (so far, nobody has cooked up much of a royalty-based open-source scheme). Beyond that, there are a number of footprints – small, medium, and large – and a number of specialty OS implementations that have qualified under various industry and government certification standards.

By the time you split up the available offerings into bins based on these criteria, you’ll see that there really isn’t so much competition after all. Most applications are clearly driven by requirements on each of these axes, and most of these points are served by one or two OS options at most. Now, if somebody would just convince the OS vendors of that, they could stop all the competitive positioning and focus their attention on bug fixing, performance improvement, and new features (in that order). That would be nice!
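To give the “hard real-time” end of that split a concrete shape, here is a minimal sketch of what such code tends to look like: a periodic task that wakes on an absolute schedule and treats a missed deadline as a failure rather than a statistic. It uses plain POSIX timing calls instead of any particular vendor’s API, and the do_control_step placeholder is purely hypothetical.

/* Minimal sketch of a hard real-time periodic task: wake on an absolute
 * schedule, flag any missed deadline. POSIX calls stand in for whatever
 * your RTOS actually provides. */
#define _POSIX_C_SOURCE 200809L
#include <stdio.h>
#include <time.h>

#define PERIOD_NS 10000000L   /* 10 ms control-loop period (illustrative) */

static void timespec_add_ns(struct timespec *t, long ns)
{
    t->tv_nsec += ns;
    while (t->tv_nsec >= 1000000000L) {
        t->tv_nsec -= 1000000000L;
        t->tv_sec += 1;
    }
}

int main(void)
{
    struct timespec next, now;
    clock_gettime(CLOCK_MONOTONIC, &next);

    for (int i = 0; i < 100; i++) {
        /* do_control_step();  -- hypothetical placeholder for one cycle's work */

        timespec_add_ns(&next, PERIOD_NS);
        clock_gettime(CLOCK_MONOTONIC, &now);
        if (now.tv_sec > next.tv_sec ||
            (now.tv_sec == next.tv_sec && now.tv_nsec > next.tv_nsec)) {
            /* In a hard real-time system this is a failure, not a hiccup. */
            fprintf(stderr, "deadline miss at cycle %d\n", i);
        }
        clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
    }
    return 0;
}

The structure is the same on any of the RTOSes above; what differs is how strongly the scheduler guarantees that the deadline check never fires.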

While we’re developing all that software on all those operating systems, we’re likely to create more than a few bugs. (It’s OK to let your boss read this part — he knows about the bugs already, and it adds to the realism.) A number of the vendors exhibiting at ESC are working to try to help us locate and prevent those bugs. The big problem with bug hunting in embedded systems is the diversity (and often unavailability) of the computing environment. With desktop-based software development, you drive down a wide, well-paved road where literally millions of programmers have gone before. The hardware configuration, OS support, development tools, and programming languages and IP have all seen countless projects like yours, long before you started. Everything is waiting for you in well-proven, orderly packages.

In embedded development, however, most of your work is in unfamiliar and unproven territory. You’re probably the very first to use the exact computing hardware you’re programming for, and a working model may not even exist yet. You may be doing your development and debugging on prototype hardware, in an emulation environment, or using a system simulator. You just know that those pansy, pampered desktop developers have no idea how easy they have it.
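One common coping strategy is sketched below, under the assumption of a hypothetical UART register map: hide device access behind a thin hardware-abstraction layer so the same application code can run against real silicon, an emulation environment, or a plain host-side stub while the board is still a schematic.

/* Minimal HAL sketch: the same application code runs on the (eventual)
 * target or on a host-side stub. UART_BASE and its layout are hypothetical. */
#include <stdint.h>
#include <stdio.h>

#ifdef TARGET_HW
/* On the target: a memory-mapped UART transmit register. */
#define UART_BASE 0x40001000u                     /* hypothetical address */
#define UART_TX   (*(volatile uint32_t *)(UART_BASE + 0x00))

static void hal_putc(char c) { UART_TX = (uint32_t)c; }
#else
/* On the host, long before working hardware exists: just use stdio. */
static void hal_putc(char c) { putchar(c); }
#endif

/* Application code is identical in both worlds. */
static void hal_puts(const char *s)
{
    while (*s)
        hal_putc(*s++);
}

int main(void)
{
    hal_puts("board bring-up checkpoint 1\r\n");
    return 0;
}

Swap the stub for an instruction-set simulator or an emulator connection and the application code never notices.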

Jumping to the hardware side, AMD, ARM, Freescale, Intel, MIPS, TI, Altera, Xilinx, and a number of other vendors all have exactly the right processor for your next embedded application. Really. It turns out that the embedded world hit the power/performance wall before the desktop PC market did, so it was driven earlier to the myriad of multis – multi-core, multi-thread, multi-processor, and multi-just-about-everything-else – that have been gaining steam (and making trouble for developers) for quite a while now. The processor vendors can legitimately brag, though their part is comparatively easy. The tough part is the wrath they unleash on the embedded development side, where compilers, operating systems, debuggers, and other mono-processor-minded products suddenly fail in the face of parallelized perplexity, creating engineering hassles and market opportunities aplenty.
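As a tiny illustration of that parallelized perplexity, here is a hedged sketch (using POSIX threads rather than any specific embedded toolchain) of the classic shared-counter problem: two threads bumping one variable. Remove the lock and the unprotected increment becomes a data race that silently loses counts, exactly the kind of bug that mono-processor habits and tools never had to face.

/* Two threads incrementing a shared counter. With the mutex, the total
 * comes out right; without it, the increment is a data race. */
#include <pthread.h>
#include <stdio.h>

#define ITERATIONS 1000000

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    (void)arg;
    for (int i = 0; i < ITERATIONS; i++) {
        pthread_mutex_lock(&lock);    /* remove this pair to see the race */
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, worker, NULL);
    pthread_create(&b, NULL, worker, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);

    /* Expect 2 * ITERATIONS; an unsynchronized version rarely gets there. */
    printf("counter = %ld (expected %d)\n", counter, 2 * ITERATIONS);
    return 0;
}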

The choice of processor is also probably fairly straightforward for many embedded design teams, based on considerations like legacy software and other standards that lock them into one architecture or one vendor’s processor technology. For those companies that do have a choice, the lessons of the early microprocessor market should be heeded. The hardware is less than half the story; the development environment support is what seals the deal. A few more megaflops won’t bail you out if your design isn’t done on time.

Another trend strongly in evidence at ESC this year is the incessant incursion of programmable logic (field-programmable gate arrays, or FPGAs). [For full coverage of that trend, see our FPGA Journal feature article on the topic.] FPGAs have a basic technology proposition that sounds simple but has complex implications – FPGAs make hardware soft. At their easiest-to-comprehend level, FPGAs are like poor-man’s ASICs – custom chips that you can program yourself. That simple mental model, however, belies the true power that programmable logic can bring to a system design – the ability not only to subsume just about the entire system onto a single chip, but also to make that entire system soft, and often field-modifiable. The only true “hardware” left in a totally FPGA-based system is the FPGA’s connection to the board. Everything else, even the things traditionally considered “hardware,” can be configured and reconfigured in the field.

This softening of traditional hardware components makes it feasible to produce very generic development boards that can perform just about any embedded system function under the sun (even, specifically, orbiting the earth). If you load up a board with a high-performance processor, some memory, an FPGA, and connections from the FPGA to everything, including the edges of the board, you have a basic do-anything, go-anywhere embedded system. Need to accommodate a special new I/O standard? Reprogram the FPGA. Need to transfer data between two incompatible devices? Reprogram the FPGA. Need to accelerate a compute-intensive algorithm into hardware to get supercomputer-like performance in your card-sized device? Reprogram the FPGA.
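For a sense of what “reprogram the FPGA” looks like from the processor’s side of such a board, here is a minimal sketch of driving an FPGA-hosted accelerator through memory-mapped control, status, and data registers. Every address, offset, and bit position here is hypothetical; the real map would come out of your own FPGA design.

/* Sketch of a processor-to-FPGA handshake over memory-mapped registers.
 * ACCEL_BASE and the register layout are hypothetical. */
#include <stdint.h>

#define ACCEL_BASE    0x43C00000u                 /* hypothetical base */
#define REG(offset)   (*(volatile uint32_t *)(ACCEL_BASE + (offset)))

#define ACCEL_CTRL    REG(0x00)   /* bit 0: start */
#define ACCEL_STATUS  REG(0x04)   /* bit 0: done  */
#define ACCEL_OPERAND REG(0x08)   /* input value  */
#define ACCEL_RESULT  REG(0x0C)   /* output value */

/* Hand one value to the FPGA block and spin until it reports completion. */
uint32_t accel_compute(uint32_t x)
{
    ACCEL_OPERAND = x;
    ACCEL_CTRL    = 0x1;                  /* kick off the computation */
    while ((ACCEL_STATUS & 0x1) == 0)     /* poll the done bit        */
        ;                                 /* a real driver might sleep */
    return ACCEL_RESULT;
}

In practice the busy-wait poll would usually give way to an interrupt, but the register-level handshake stays the same no matter what function you load into the fabric.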

Choose the right development board and design software, and you can have a highly complex embedded system up and running in record time. Your only remaining task is cost-reducing your already fully-functional prototype. Life is good.

ESC is a veritable orchard of development boards. At any given moment on the show floor, the number of development boards and the number of conference attendees are probably in close competition – with the nod normally going to the boards. For the near future, then, the true contest in embedded systems may be a battle of the boards and of the tool suites that support them.
