
Penguins, Bees, Cathedrals and Wikis

The Changing Face of Open Source and Collaboration

This was going to be such a simple piece: a quick look at some of the developments in the use of open source in the embedded arena, a quick update on Eclipse, perhaps a comparison of the positive and negative aspects of open source compared with proprietary tools, and a final summary.

Unfortunately, the more I looked, the more aware I became that the open source landscape is changing and that a new paradigm is appearing. The Oxford English Dictionary defines paradigm as “a pattern or model, an exemplar.” I am using it here not in the PR sense of something vaguely new but, as I hope will become clear, in the sense that the changes occurring are already drastically altering the way in which the tools for developing embedded systems are created and may, in time, alter the systems themselves.

Let’s look at some straws in the wind:

Microsoft (one is tempted to say, “of all companies”) has set up an “Open Source Technology Centre” and, amongst a raft of other projects, announced this week that it is donating code to an Apache project.

Wind River’s Linux-related revenues in Q2 of their financial year 2009 were up 60 percent year-over-year, to $10,111,000, which was 11 percent of the total revenues for the quarter.

Start-up Imperas spent three years and an estimated $4 million in developing a virtual platform for software development — and then placed it into the public domain.

Virtually every supplier of tools for embedded system development and debugging now includes an Eclipse interface.

A maker of chips with embedded processors for applications in the consumer market has over 300 people dedicated to maintaining a Linux distribution for users of its devices.

Firefox is continuing to increase market share, while the number of add-ins created and donated for nothing grows rapidly.

Apple’s iPhone, after initially resisting the add-in community, has now embraced it, and the number of downloads is huge.

MATLAB has attracted a huge community creating and sharing tools and routines, and the community pages on the MathWorks web site look more like an open source community than a commercial web site.

The best selling MP3 album on Amazon’s MP3 store is also available for free private download under a Creative Commons licence.

Wikipedia has compared favourably with Encyclopaedia Britannica in an article in the leading scientific publication, Nature.  

MIT is making vast quantities of current degree course materials available on the web, for nothing.

Within the sector serving enterprise computing there has been significant growth of Professional Open Source Software (POSS) companies. One of these, Pentaho, which provides business intelligence systems, was co-founded by James Dixon. It is Dixon who has come up with the Bee Keeper analogy for these companies. Bees gather honey because that’s what bees do. The bee keeper provides the bees with hives (and may also provide support services to the bees – like feeding them in lean times). In return, the bee keeper takes the honeycomb, and sells the wax and the honey. The bee keeper may also add further value to the honey, such as turning it into mead.

Compare this to a POSS company that sells a commercial distribution of Linux. The programmers work on Linux, because that’s what programmers do. The POSS company packages Linux and sells it with support and professional services. Like the bee keeper, it may also create add-ins, such as drivers, and, if it is playing the game, it returns these to the community.

Now we get into some interesting debates. Two of the key terms in this arena are Free Software and Open Source Software. They overlap, but there are important differences.

Free Software, in the sense coined by Richard Stallman and the Free Software Foundation, is free “as in free speech, not as in free beer”. It is freedom software: software you can tinker with and pass on to your friends (and, by extension, software whose source code you get). Free Software is also a crusade against the restrictive “licensing” of software, as exemplified by End User Licence Agreements (EULAs), which severely constrain what you are able to do with the software. For some, the voice of the Free Software community can be a little shrill.

Open source also centres on the availability of source code, although the Open Source Initiative’s definition makes it clear that source code is not the only element of the equation. (And, if the source code is available to a customer through download, that is sufficient.)

The enterprise world seems to be embracing open source in a number of areas, though anecdotal evidence suggests that senior people within companies may not realise, for example, that the MySQL database or the Apache server on which they are building their web sites, and hence their sales strategy, is an open source product.

In both the Free Software and open source worlds, projects are produced by collaborative effort, or by a company placing the source code of a product into the public domain. (Imperas, a company that is doing just this, was covered in IC Design and Verification Journal earlier this year.)

More usually, as with Linux, one person sees a problem and begins work on solving it. He posts his ideas on the web, and others join in. The group input refines the specification and the use case, making the result more generally applicable. As different people develop code, others test it, usually by using it, and the code gradually becomes more and more refined. Dixon cites as an example Coverity’s static code analysis of a number of open source products in early 2006. The bugs were posted, and within days the communities working on the projects had solved many of them. Today, the list of projects at www.scan.coverity.com in which the open source community is fixing a significant number of bugs includes Perl, Tcl and Python, and some projects now have no outstanding defects at all. This is a strong argument both for the quality of open source tools and for the responsiveness of the community in identifying and resolving issues.
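
To make the kind of issue these scans turn up a little more concrete, here is a small C sketch of a classic pattern a static analyser will flag: dereferencing a pointer before checking it for NULL. This is my own generic illustration, not one of the defects actually reported in the 2006 scan, and the function and strings in it are invented purely for the example.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* The flagged pattern, shown only as a comment:
     *
     *     char *buf = malloc(len);
     *     buf[0] = '\0';         // possible NULL dereference if malloc fails
     *     if (buf == NULL)       // the check arrives too late
     *         return NULL;
     */

    /* Corrected form: the allocation is checked before its first use. */
    static char *make_greeting(const char *name)
    {
        size_t len = strlen(name) + sizeof("Hello, ");
        char *buf = malloc(len);

        if (buf == NULL)                        /* check first ... */
            return NULL;

        snprintf(buf, len, "Hello, %s", name);  /* ... then use */
        return buf;
    }

    int main(void)
    {
        char *msg = make_greeting("world");

        if (msg != NULL) {
            puts(msg);
            free(msg);
        }
        return 0;
    }

Defects of this sort are straightforward to fix once they have been pointed out, which is part of why the communities concerned could clear so many of the reported bugs within days.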

The difference between the traditional and collaborative approaches was described by Eric Raymond as the difference between a cathedral, usually built to a master plan for a specific purpose, and a bazaar, which develops organically and serves a multitude of functions.

The collaborative approach is not limited to developing software — Wikipedia is a famous example — but large companies are using this approach to gather input from a wide range of sources to solve significant problems. Don Tapscott and Anthony Williams document several examples, from gold field exploration to pharmaceutical research, in their book, Wikinomics. These are all feasible only because the internet provides an infrastructure.

And the internet is also facilitating other methods of working: for example, “Cloud Computing”, where you can rent flexible computing capacity, both processing power and storage, when you need it. The resource, and the software running on it, can be shared by people in multiple locations. Alongside rack hosting and other techniques, it can mean that you don’t have to invest in massive server farms when attacking large problems.

A similar approach applies to shared access to documents through Google Docs and similar services. (We use Google Docs to share information within Techfocus Media.)

We have already seen that “free” software is not “no cost” software. Neither is open source software. There will normally be costs involved in making open source tools match a specific set of requirements, and companies are emerging specifically to help users understand and optimise these costs. Embecosm, for example, is a UK company committed to helping its customers develop products using open source tools.

So what are all these different developments going to mean in the long run for the average embedded systems engineer? To be honest, I don’t know. At one end of the spectrum, in high-reliability and safety-critical work, it is difficult to see how the entire development stream can be certified, and where responsibility or accountability lies, if there are elements of open source within the tool chain. At the other end, where market pressures, whether real or perceived, are screaming for new versions every few weeks, if not sooner, anything that can accelerate the development cycle is going to be welcome.

Over the next few years, there will inevitably be more and more collaborative efforts, both in tools and in application code. Two groups will benefit from these: the savvy engineer who adopts the results at the point where they are sufficiently stable to be useful and the alert “bee keeper” who commercialises the results of these efforts and provides support services to the users. And if I knew which ones were really going to take off, I would be investing in an apiarist’s suit.

Here are sources where you can follow up some of the things discussed above.

The Cathedral and the Bazaar: www.catb.org

Open Source Initiative: www.opensource.org

Free Software Foundation: www.fsf.org

Wikinomics: www.wikinomics.com

