During the Christmas break, I took time out from roasting an ox on the open fire, distributing presents to the assembled multitude of staff, chasing foxes across the rolling acres of Selwood Towers and feasting, wassailing and carousing to think about the past year and embedded technology stuff. I managed to overcome the urge and went back to roasting an ox etc, but, now the break is over, it seems worth having another think.
As I started working backwards through the articles I have written, I realised that it was just over five years ago (October 2006) that I wrote my first piece for the then-fledgling Embedded Technology Journal – it had just celebrated its first birthday – based on interviews at the then-dying Embedded System Show. So what has changed in the last five years, and what has stayed the same?
I have teased out several threads from the tangled strands of well over a hundred articles. The first is multi-core, parallelism and related stuff. This is, of course, multiple threads, depending on the implementation. (That is a processor joke – I think.) People are still looking for a silver bullet to take legacy code and parallelise it, and nothing I have seen so far shows any sign of getting round Amdahl’s law. (The overall speed-up from parallelising is limited by the fraction of the program that still has to execute sequentially, however many cores you throw at it.) And there is also a regular comment from all sorts of people that making things work in parallel is hard. Come on, the whole world is parallel and we cope. It is only because programmers have, on the whole, been taught to think linearly that they find thinking in parallel hard. (Although some implementations of multi-core devices seem determined to ignore all that has been learned in the last 30 years and are using architectural approaches that actually put obstacles in the path of the implementer.)
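For anyone who wants to see why the sequential fraction is so stubborn, here is a back-of-the-envelope sketch in C – nothing more than the textbook Amdahl formula, and the 90% figure is purely illustrative:

#include <stdio.h>

/* Amdahl's law: if a fraction p of the work can be parallelised across
 * n cores, the overall speed-up is 1 / ((1 - p) + p / n).
 * The sequential fraction (1 - p) sets the ceiling, however large n gets. */
static double amdahl_speedup(double p, unsigned n)
{
    return 1.0 / ((1.0 - p) + p / (double)n);
}

int main(void)
{
    const double p = 0.90;                       /* assume 90% parallelises */
    const unsigned cores[] = { 2, 4, 8, 64, 1024 };

    for (unsigned i = 0; i < sizeof cores / sizeof cores[0]; i++)
        printf("%4u cores -> %.2fx speed-up\n", cores[i],
               amdahl_speedup(p, cores[i]));

    /* Even with 1024 cores the speed-up stays below 10x, because the
     * 10% that must run sequentially is still there. */
    return 0;
}

Run it and the numbers flatten out quickly: the silver bullet, if there is one, has to attack the sequential part, not just add cores.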
A lot of what I have written about has, in fact, been software related. Software has moved from being a minor part of an embedded system to a major part. For example, cars may now have millions of lines of code in them: by one definition a car is now a computer network with an engine, seats and wheels. But while, to a very large extent, hardware development is recognised as an engineering task, supported with an investment in tools and run as well-defined projects, software is seen as a creative art that doesn’t need tools and can be allowed to evolve. OK, that’s a bit of a caricature, but when I talk to the guys who provide tools for managing software projects and ensuring that software is of good quality, they still find that the attitude, from management down to programmers, is that such tools are not necessary. “I am a good programmer; I write good code – what do I need tools for?”
In safety-critical areas this attitude is disappearing, and, in the next few months, you should look out for more announcements on tools that help companies in the long automotive supply chain to demonstrate compliance with ISO 26262. (ISO 26262 was finally released in November 2011.) And as well as 26262, DO-178, the avionics software standard, will reach DO-178C in 2012. The latest subset of C defined by MISRA is due out for public review any day now. If C++ rocks your boat, the C++11 standard was released in August 2011.
Software is a source of endless religious debates, often of the “How many angels can dance on a pin-head” variety. This is in part because the educational system and the needs of embedded developers are not aligned. One religious war is open source versus proprietary for compilers and other tools. In the last few years, positions have become entrenched on this one – it is like the trench warfare of World War 1.
On the open source front, the news broke, just as I was writing this, that the Linux community and the Android community may be getting back together. Android (Robots With a Sweet Tooth) was originally closely aligned with Linux and runs on a Linux kernel. The end of the year also saw the release of Android 4 – Ice Cream Sandwich – which brings tablets and phones running Android under the same interface. (And, since CES in January 2012, TV.)
I have a strong interest in new companies, particularly European-based new companies, and I have written about them in a variety of articles. One company that interests me is XMOS. It is based on the research of Professor David May of Bristol University and his real-world experience of developing parallel processing elements. (Declaration of interest – David and I both worked at INMOS in the early 1980s when he was implementing the transputer – a parallel processing element.) I went to see XMOS recently, to see how far they had moved on since July 2009. I was impressed – they have developed a rather attractive niche in the audio market. They are shipping into the audiophile market – the one where people spend serious money on speaker cables, gold-plated connectors and filtered power supplies before paying huge amounts of money for amplifiers, speakers and CD players. They are also playing in the Ethernet AVB (Audio Video Bridging) space. (See AV Done Right. Finally.) AVB includes timing data so that signals reaching multiple speakers in a large space, for example, are synchronised. This technology clearly has wider applications and is one of the paths out of the niche and into broader markets for XMOS.
Safety was a big issue in 2011. Air France flight AF 447 crashed into the Atlantic in June 2009, and in July 2011 there was a detailed report, including the transcript of the cockpit voice recorder. This was more widely disseminated in an article in Popular Mechanics in December 2011. It appears that while the trigger for the accident was the icing of sensors (a known problem – but it looks as though Air France failed to follow up on a warning), a significant factor in the accident was the pilots not recognising that the instrumentation was out of whack and not being able to cope with a very unfamiliar situation. It can be argued that this is a system failure: the pilots, as part of the system, were accustomed to the aircraft’s complex controls responding in a certain way and were unable to understand that the situation had changed and the expected behaviour wasn’t happening. Fukushima demonstrated that safety has to start at a societal level and can be culturally defined. (I discussed this last September in Lessons from Fukushima.) So while safety standards have a role to play in developing specific objects, the real challenge over the next few years is going to be the interaction between the electronic/software system and the operator/driver/pilot, together with the wider environment.
In July 2011 I looked at white space, the spectrum left vacant as analogue TV signals are turned off. (“What Colour is Your White Space?”) The intention is that this spectrum will be freely available to create wide area networks, with devices linking to a base station, itself hardwired into the Internet.
Since then, in the US, the FCC (Federal Communications Commission) has pushed forward, announcing that, after a public trial, it has:
“… approved Spectrum Bridge Inc.’s television white spaces database system, which may provide service to devices beginning January 26, 2012. [It] has also approved a device by Koos Technical Services, Inc. (KTS) as the first product allowed to operate on an unlicensed basis on unused frequencies in the TV bands. The KTS device will operate in conjunction with the Spectrum Bridge TV band database.”
The database is necessary, as the spectrum freed is not continuous, and the fragments available vary from place to place. A device wishing to use white space has to register with the database and will be granted access to specific frequencies for its location. The Koos device is the company’s Agility Data Radio (ADR), a software-defined radio, which is being used as the base station in a number of trials in the US and elsewhere.
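To make the mechanism concrete, here is a purely hypothetical sketch in C of that handshake. None of these names, structures or numbers come from the FCC rules or from Spectrum Bridge; they simply illustrate the idea that a device reports where it is and gets back a short-lived list of channels it may use there:

#include <stdio.h>

/* Illustrative only: the real exchange is defined by the FCC rules and the
 * database operator, typically as an HTTPS query to a geolocation database. */
struct ws_location { double lat, lon; };

struct ws_grant {
    int      channel;        /* TV channel the device may use            */
    unsigned max_eirp_mw;    /* permitted transmit power                 */
    unsigned valid_hours;    /* grant must be refreshed after this time  */
};

/* Hypothetical query: report the device's location, receive the channels
 * currently free at that location. Here we just pretend two are free. */
static int ws_query_database(struct ws_location loc,
                             struct ws_grant *grants, int max_grants)
{
    (void)loc;               /* a real implementation would send this out */
    if (max_grants < 2)
        return 0;
    grants[0] = (struct ws_grant){ .channel = 21, .max_eirp_mw = 100, .valid_hours = 48 };
    grants[1] = (struct ws_grant){ .channel = 36, .max_eirp_mw = 40,  .valid_hours = 48 };
    return 2;
}

int main(void)
{
    struct ws_grant grants[8];
    struct ws_location here = { .lat = 28.5, .lon = -81.4 };   /* made-up location */
    int n = ws_query_database(here, grants, 8);

    for (int i = 0; i < n; i++)
        printf("may transmit on channel %d at up to %u mW, refresh within %u hours\n",
               grants[i].channel, grants[i].max_eirp_mw, grants[i].valid_hours);
    return 0;
}

The point of the sketch is the shape of the transaction: no fixed allocation, just a location-dependent, time-limited permission that has to be renewed from the database.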
There is interesting new hardware and software around. Xilinx finally gave their Extensible Processing Platform a name: Zynq. (I suspect that this might be a swearword in Basque.) So why am I talking about an FPGA? Because it isn’t an FPGA. It might be seen as an SoC rival, but I think it is probably best seen as a dual-core ARM Cortex-A9 micro-controller with a huge range of peripherals that are instantiated in programmable fabric. You should be able to build, (relatively) painlessly and (relatively) inexpensively, a device that matches precisely your project’s need for different peripherals and memory configurations, rather than compromising on the closest of the hundreds of variants in a semiconductor company’s data book.
2012 will be interesting in many ways. I have argued that safety within systems is influenced by the society in which the systems are developed and deployed. If this is true, then the result of the US Presidential election, which looks like it will be the most heavily polarised election in American history, may provide a measure of how strong this influence actually is.
OK guys (and gals) – what stuff do you think was important in the last five years?