Making More of a Contribution

Increasing Embedded’s Influence on Linux

Peer review is a well-established, essential component of the scientific process. For good science, anyway. The system provides a way for new ideas to be vetted and tested rather than being foisted immediately on a public that is all too willing to accept at face value the pronouncements of anyone calling him- or herself a scientist.

The same idea, albeit with somewhat different aims, plays a part in the open source world, where code gets posted and reviewed by peers in order to ensure that the code actually does what is claimed, has no bugs, is well structured, meets the needs of a variety of environments, and is robust and maintainable. Any such process is going to have an aspect of back-and-forth in it, where peers raise issues or questions and the authors respond with changes, explanations, or a defense of the choices they made. Hopefully with a minimum of snarkage and collateral personal destruction.

The Linux world provides formalized ways of doing this, with repositories where code is posted and refined until it's clean, at which point it becomes a candidate for inclusion in a future version of Linux on kernel.org, the “official” Linux distribution vehicle. Anyone can contribute code, but not everyone does. In fact, at MontaVista’s recent VISION conference, CTO Jim Ready pointed out that, of the top companies contributing to Linux, only one from the embedded world (surprise: MontaVista) was in the top 30, at number 9, with 1.2% of the contributions. Everyone else was enterprise- or PC-oriented. It didn’t seem like Jim was using this fact to pat MontaVista’s back, but rather to suggest that the embedded community isn’t doing its part in improving Linux. And this can come at a real cost, since embedded systems have significant requirements that won’t necessarily be considered without staunch embedded advocates.

Because versions of Linux are available both from kernel.org and commercially, a key early project decision for embedded developers is which version of Linux to use and from which source. Kernel.org doesn’t claim to provide a commercial product, but rather the leading edge in features. They leave the tidying up to the commercial providers, who package it up into neat bundles, perform the usual product QA, and, not least, support the code. The question is, which software gets commercialized? Stuff that’s already been vetted thoroughly or the newest stuff?

This may sound like a dumb question, since clearly some designers are going to want access to the newest stuff as soon as possible. To this end, TI’s William Mills proposed a new model for getting new versions into the community more quickly. Historically, they’ve taken their DSP drivers and such and have worked with the commercial providers early, getting through the commercialization process, after which the software goes out to the community via their “davinci tree.” At this point it becomes visible for other engineers to play with and check out; once solid, it can proceed to kernel.org. The problem with this is that the public doesn’t see the code until after productization is complete: by the time Joe Engineer starts asking questions about the code, the poor dude that wrote the code has been working on a new project, having put a satisfying check mark by the old project. Now some self-important poser is asking questions about code that’s been duly repressed in memory, for release only in the comfort of the therapist’s couch. Dredging that up in any other environment is distracting at the very least, and it certainly makes for less speedy (and perhaps less cheerful) resolution of any issues raised.

The alternative proposed by TI, and endorsed by MontaVista, is to get the code out to the community as early as possible so that it can be reviewed more quickly while it's still near top-of-mind for the developers. This can provide speedier closure for those involved in the project. Meanwhile, the commercial entities can also pull from the community chest to start productizing the software. Hence their mantra: “community first, commercial complement.”

There are a couple of practical challenges here. First, once the software goes out to the community, it starts changing as fixes and clean-up proceed. Presumably the commercial companies will pull at some point to start building the product, and they’ll be pulling from a moving target. Commercial availability could be delayed by a decision to hold off until the rate of change slows. While that could push things out a bit, it certainly would result in a more stable first release as compared with the current way of doing things, where the code doesn’t go to the public until the commercial company is done with it. So not really that much of a killer.

The second, tougher challenge, and an acknowledged one, is faced by designers as they decide which version to use for a project. Once you’ve locked onto a version, you really don’t want to change it midstream. So if you pick a nice stable commercial offering, it will lag the latest features since, by design, work can’t start on packaging those features until they’re public. If you want the latest features, you have to pull from the public tree; now you’re working a bit more blind, without the benefit of the commercial infrastructure. A higher skill level is required, and there’s really no support.

What you’d like to be able to do is start, if necessary, with the public version to enable needed new features, and then transition over to the commercialized embodiment of that version once it’s available. Doing that smoothly, without running undue risk of injecting new bugs or suffering the delay of a new verification step, is a problem that remains to be solved.
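Part of that transition can at least be audited mechanically: before jumping trees, you'd want to know which of the changes you carried on top of the public version haven't already landed in the commercial release you're moving to. As a rough illustration only (not something TI or MontaVista has specified), here is a minimal Python sketch that wraps git's cherry command, which compares commits by patch content rather than by commit ID; the branch names are hypothetical placeholders.

```python
import subprocess

def unmerged_commits(upstream: str, head: str = "HEAD") -> list[str]:
    """List SHAs of commits on `head` whose patches have no equivalent
    on `upstream`, using `git cherry` (patch-content comparison)."""
    out = subprocess.run(
        ["git", "cherry", upstream, head],
        check=True, capture_output=True, text=True,
    ).stdout
    # `git cherry` prefixes commits missing from upstream with '+'
    # and commits that already have an equivalent there with '-'.
    return [line[2:] for line in out.splitlines() if line.startswith("+")]

if __name__ == "__main__":
    # Hypothetical refs: the commercial vendor's release branch and your
    # working branch based on the public tree.
    missing = unmerged_commits("vendor/release", "HEAD")
    print(f"{len(missing)} local patches still need forward-porting:")
    for sha in missing:
        print("  " + sha)
```

Anything such a check flags is work that would have to be re-applied and re-verified on the commercial base, which is exactly the cost the unsolved transition problem above is about.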

While this might at first seem to put a company like MontaVista at a disadvantage, since they can’t offer new features as quickly, they are in fact supportive because the model has the potential to put embedded developers back into the process of contributing to Linux. Right now it’s all on the commercial guys because ultimately what ends up public is the commercialized version. By making the new stuff available to the world directly and earlier, the creators of that software – and the early-adopting developers that choose to work with and modify the pre-commercial version – will be able to contribute directly, without the MontaVistas of the world acting as intermediaries. Giving embedded developers more immediacy with respect to Linux can hopefully increase the amount of embedded contribution, adding some balance to what is now a pretty lopsided picture.
