Three thousand dollars is a lot to pay for a radio.
A friend of mine recently bought herself a nice new car. Not Rolls-Royce or Ferrari nice, but more in the Mercedes/Jaguar/Lexus category. And one of the optional upgrades she decided to spring for was a $3000 “Premium Comfort” package. Being both an engineering nerd and a car nut, I was curious about what actually went into this $3K bundle of goodies.
From what I could tell, it was mostly just firmware upgrades for little things like the keyless entry, cruise control, or GPS features. She wasn’t paying for any actual hardware, just for bits. The only tangible item in the whole option package was an upgraded radio, which probably cost the automaker about $75 in extra hardware. So by implication, the firmware upgrades cost my friend about $2925.
As we know, software can sometimes be expensive to produce, but it’s free to reproduce. So once the automaker paid for that firmware, it stood to rake in $2925 in pure profit from then on. Nice business if you can get it. It also helps explain why electronic add-ons are so popular with new cars now. We like ’em because they’re shiny and interesting, but the automakers love ’em because they’re extraordinarily profitable.
They also help to differentiate products that are increasingly hard to tell apart. It’s not your imagination: new cars really do all look the same, because they mostly are. Like PC makers, automakers work from a small pool of shared components. Chrysler, Fiat, Jeep, and Alfa Romeo all share the same mechanical underpinnings. Volkswagens, Audis, Bentleys, and Lamborghinis are all made by the same company (notice the dashboard instruments and switches). The differences lie mostly in branding and paint. And firmware.
What’s the difference between an Apple iPhone, an Android phone from HTC or Samsung, or a Windows phone from Nokia? It’s definitely not the hardware, because they all use pretty much the same processors, memory, and LCD screens. The only significant difference is the firmware inside, and by implication, the third-party software with which it’s compatible.
I remember the original iPod (remember those?), which I didn’t like very much at all. I already owned a competing brand of MP3 player, which I preferred. To me, the iPod was just an overpriced 1.8-inch hard drive with a “Play” button on it. I thought it was hard to use and limited in functionality. Updating the content on an iPod meant going through the ordeal of iTunes, for starters. To this day, I still think iTunes is a miserable, awkward, and ungainly piece of software, and I’m surprised Apple apologists stand for it.
In contrast, my other MP3 player was both simpler and more useful. I could create and update playlists on it directly, something early iPods couldn’t do at all and even modern ones don’t do very well. I didn’t have to store a complete duplicate of my entire music library on my PC, which seemed like a complete waste of hard drive space—and something iTunes still requires. And I could play more audio formats than the iPod supported. The iPod was inferior in every way, although the market clearly disagreed with me. (It now lives in the glove compartment of my car, where it serves as a glorified CD changer.)
In the wide world of computer hardware and semiconductors, the two most profitable types of components are microprocessors and FPGAs. And what do those two chips have in common? They’re programmable. There aren’t many people who design microprocessors or FPGAs (Hello, Xilinx, AMD, Altera, and MIPS readers!), but there are a lot of engineers who program them. And more to come.
Modern EDA tools were supposed to make it easier for us to design custom hardware. Futurists predicted an industry where every product was designed for “a market of one,” customized for each specific user. The EDA part succeeded—it is a lot easier to design custom chips. But EDA is only part of a chain of value that includes semiconductor manufacturing, integration, distribution, marketing, and more. All of those things are still difficult and still expensive. Few companies can afford the time and expense of designing their own chips, no matter how easy it is, which it’s not. Instead, they use standard (processor) or semi-standard (FPGA) chips and customize the software. It’s a whole lot easier to revise code than it is to spin an ASIC. Cheaper, too.
Futurists are also predicting the rise of the “content creators” as the big economic drivers in the coming years. As nearly as I can make out, that means durable manufacturing is on its way out, and content, in the form of movies, books, and art—software, in other words—is trending up and to the right on the economic charts. Makes sense to me.
Hardware engineering is like architecture; software is like poetry. You need expensive tools and an expensive education to be an architect. You can’t simply wing it and hope your buildings stand up. But anyone can try writing poetry; it doesn’t fall down if it’s amateurish (not in the literal sense, at least). In economic terms, poetry and programming both have low barriers to entry. Anyone can try, and those with the talent and inclination can make a career out of it. You can’t really do that with architecture or ASICs. Those need to be right the first time and correct by construction, and that doesn’t lend itself to dilettantes’ idle fiddling.
Short product cycles push us into software customization. Cheaper development costs favor software over hardware. Changing standards reward software solutions. User preferences lean toward software differentiation rather than hardware details. And programmable chips provide the basis for useful customization. Plus, it’s just plain easier to get started on a programming career than one dependent on million-dollar EDA licenses, bench instruments, and months-long turnaround cycles. All of which means that programmers will be creating and defining the products to come. Gentlemen, start your compilers.