
Goodbye to All This

A Valedictory from England*

This is part of my goodbye to an active role in the world of electronics, a world I have been part of, as a PR person and a journalist, for nearly 40 years and, if you add on earlier use of computers, including teletypes on ARPANET, for nearly 50. In a few days, I will close down my commercial activities, and, although I will be looking at writing a book on an aspect of electronics history (inspired by the NaNoWriMo challenge a friend showed me recently), I will no longer be an active player.

I am going to use this as an opportunity to ramble through some of the things I currently find interesting or irritating in the industry and some of the things I will miss, and to let off a little steam.

My current most active irritation-trigger is the obsession with process geometries in certain marketing groups, in various chat rooms, and on some news sites. For me, the topic has about as much relationship to reality as some of the questions on Quora: “Would a regiment of Orcs defeat a platoon of today’s Gurkha soldiers?” The only people who should really be interested in process geometry are the poor souls who have to fight the boundaries of physics to get smaller dimensions for greater density. When they are finally within sight of a working process, the marketing geeks look around the industry, put a finger in the air, and pluck out a geometry number that is better than their competitors’ and, presumably, not so far from reality that their engineers finally revolt. Then many billions of electrons are bounced around as people pontificate on who is winning on a particular product front. An indication of how silly this has all become is the decision a couple of years back by the International Technology Roadmap for Semiconductors to give up roadmapping processes altogether.

While it is fun to watch the latest and greatest, these leading-edge processes make sense only for very high-volume devices, and the attention they get obscures the value of older processes. Mentor, Arm, and Imec have been pushing the idea that, with low-cost tools (Mentor bought Tanner EDA’s low-cost portfolio) and old, established process technologies (180 nm or even larger), you can use Cortex cores and proven IP, including analog and mixed-signal, and build in fabs whose equipment has long been written down and which therefore charge very little, getting relatively small volumes of useful things into production very quickly.

This brings up another problem with chasing down the process nodes: the extreme costs involved. EUV (extreme ultraviolet) lithography systems, essential for continuing to chase process shrinks, are finally entering the market but are priced at above $100 million per station. There are estimates that TSMC’s next plant, rumoured to be built to be 3-nm compatible (whatever that means), will cost $20 billion.

People I have spoken to have expressed concern about TSMC’s dominance of the foundry industry. They are concerned that, with over 50% of the pure-play foundry business, TSMC is in too strong a position. Since its turnover is greater than that of the next ten companies combined, there is little sign of a challenger.

There are aspects of manufacturing that are exciting. I think FD-SOI is a technology that has still to make the impact it deserves. Both Samsung and GlobalFoundries are working on extending the pioneering work of ST and LETI. GlobalFoundries is claiming 36 customers and more than a dozen tape-outs, NXP is shipping processors built on FD-SOI, and several start-ups are looking at it for their products. It is a much simpler manufacturing process than the FinFET approach being driven by Intel and many of the established companies, and it can be valuable in many low-power applications. Recent research by VLSI Research suggests that developers working on RF and mixed-signal projects are expressing interest in FD-SOI, and that process-node junkies are reassured by a roadmap extending the 28 nm and 20 nm available today to 12 nm and beyond, even though there is no real comparison between FD-SOI and FinFET geometries. Earlier this month, Paul McLellan provided an interesting summary of the VLSI Research findings (https://community.cadence.com/cadence_blogs_8/b/breakfast-bytes/posts/fd-soi-and-finfet-dan-hutcheson-re-runs-his-survey).

Also exciting is the RISC-V open-source architecture. It is being adopted by a number of small start-ups as the basis for new products, and larger established companies are investigating it. In the latter case, it might just be to use RISC-V as a bargaining counter in negotiations with Arm, but the smaller companies see the availability of an open-source architecture as a definite opportunity. Arm’s position has been that there is no RISC-V equivalent to its ecosystem, but that is changing with a flow of RISC-V announcements from the established development-tools companies. In addition to tools, silicon is available from a number of vendors, and dev boards, including multi-core implementations running Linux, are opening up potential markets. My favourite at the moment is the work at CEA Tech in France to build a system with 1,000 RISC-V cores.

Open-source software is continuing to gain ground in the embedded business. A straw in the wind is the move by Amazon Web Services (AWS) to support and extend FreeRTOS and to provide a number of other tools, so that developers of IoT devices using AWS as their cloud back end will be encouraged to use quality tools in their development. Who would have thought that an online bookseller would be developing IoT services and designing AI chips? Or that a search engine company would also be a major advertising channel, a supplier of AI chips, the supplier of one of the most widely used mobile operating systems for smartphones, and a mobile network operator?
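
For readers who have not met it, FreeRTOS itself is a small pre-emptive kernel, and the heart of most applications is nothing more than a handful of tasks. The fragment below is my own minimal sketch, not taken from any AWS material: the read_sensor() and publish_reading() helpers are hypothetical stand-ins for a board driver and a cloud publish call, and the stack size and priority are arbitrary illustrative values.

    /* Minimal FreeRTOS sketch: one task that polls a (hypothetical) sensor
       and hands the reading to a (hypothetical) publish routine. */
    #include "FreeRTOS.h"
    #include "task.h"

    extern int read_sensor(void);            /* hypothetical board-specific driver */
    extern void publish_reading(int value);  /* hypothetical cloud publish hook */

    static void vSensorTask(void *pvParameters)
    {
        (void)pvParameters;
        for (;;)
        {
            publish_reading(read_sensor());
            vTaskDelay(pdMS_TO_TICKS(1000)); /* sample roughly once per second */
        }
    }

    int main(void)
    {
        /* 256 words of stack and priority 1 are placeholder values. */
        xTaskCreate(vSensorTask, "sensor", 256, NULL, 1, NULL);
        vTaskStartScheduler();               /* never returns if startup succeeds */
        for (;;) { }
    }

In a production design, the publish step would go through something like the MQTT libraries that AWS now bundles with FreeRTOS, but the task structure stays this simple.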

Looking forward, the biggest problem is going to be software. Much of the exciting stuff – like quantum computing, artificial intelligence, and autonomous vehicles – relies on software. And a lot of today’s software is poor. Software guru Jack Ganssle states that “The average firmware team ships about 10 bugs per thousand lines of code.” At that rate, a modest 100,000-line firmware build goes out with around a thousand latent defects, and that is even after the software has been “debugged”. We accept engineering disciplines in hardware design, but not in software. The recent profusion of over-hyped schemes teaching all kids to code is not going to solve the problem; it merely, if inadvertently, downplays the professionalism of software engineering.

Despite an enormous amount of evidence that investment in tools and good working practices reduces development time by cutting down on the time needed for testing and debugging, coders still show an enormous resistance and instead accept as normal that a significant amount of time will be spent in debugging – that is, removing the errors that the coder introduced. There is even an argument that, instead of working hard to ship bug-free software, you should ship software as soon as possible and plan to ship patches afterwards. Now, where have we seen this model? Oh yes – much commercial software for the early PCs. Is this really an appropriate approach for AI or autonomous vehicles? Chip designers have accepted that the task of laying out the transistors, once the core of their technical skill, is best done using tools. When are coders going to get there? Instead of recognising that tools can provide a route to faster development of error-free software, there is always an enthusiasm for the latest fashionable route to code nirvana.

The Agile feeding frenzy is an example of this. The original Agile manifesto had a lot of shrewd thinking, but its most enthusiastic adopters chose the bits of the approach they found acceptable and ignored the rest. An example is the statement that values “working software over comprehensive documentation”, which is frequently interpreted to mean that there is no need to write documentation at all. I first heard this argument in the early 1970s, from a programmer whose line was “my code is sufficiently clear that it doesn’t need documentation”. His successor didn’t agree.
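
To make the point concrete, here is a small, hypothetical sketch of the kind of style that tools and disciplined practices push you towards: fixed loop bounds, inputs checked explicitly, and assumptions written down as assertions where a static analyser or unit test can catch violations. It is an illustration only, not a fragment from any particular coding standard.

    /* Average up to MAX_SAMPLES readings. Bad input is rejected explicitly
       rather than being allowed to misbehave silently. */
    #include <assert.h>
    #include <stdbool.h>
    #include <stddef.h>

    #define MAX_SAMPLES 64U

    bool average_samples(const int *samples, size_t count, int *out_average)
    {
        assert(out_average != NULL);              /* caller contract, stated where tools can see it */

        if (samples == NULL || count == 0U || count > MAX_SAMPLES) {
            return false;                         /* reject bad input instead of guessing */
        }

        long sum = 0;
        for (size_t i = 0U; i < count; i++) {     /* loop bound is fixed and known */
            sum += samples[i];
        }
        *out_average = (int)(sum / (long)count);
        return true;
    }

None of this is clever; the point is that every assumption lives in the code, where a compiler, an analyser, or a test can check it, rather than in the original coder’s head.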

An example of how strict disciplines can produce amazing engineering is the Mars Rover Opportunity. Built for a 90-day operational life, it has now been on Mars for nearly 15 years. True, the software has been upgraded several times, but that this was even possible is a reflection of how strong the initial design was.

Tied into this is the increasingly important question of ethics. How many coders even know about the Software Engineering Code of Ethics and Professional Practice, published jointly by the IEEE and the ACM? This applies to all software, but AI brings its own challenges. Despite the existence of Asimov’s fictional Laws of Robotics since at least 1942, there is still a lot of debate (there are multiple university centres of cyber-ethics) and not much action around creating an ethical code specifically for AI. It is also interesting that we are only just recognising that a lot of unconscious bias is being built into some AI systems. Much of the AI work is being done by men, and particularly by men working in a mainly US-style environment. (Those of us outside the US are well aware of the unconscious bias on many US websites.) This could well cause issues when an AI system is working in a totally different culture. Just on a superficial level, most US/European-developed systems use the colour red as a sign of problems; in Chinese culture, red is associated with positive things.

Obviously, writers of code for criminal acts are not going to be bound by a code of ethics, but cyber-crime and cyber-warfare are going to be significant elements in the future, and companies and other organisations that are not preparing themselves to cope, by putting in place tools, monitoring, and thorough staff training, are going to face enormous problems. There will also be a constant balancing act between restraining the bad people on the web and maintaining freedom of expression. And AI-based tools are probably going to be the only way to stay ahead of the bad guys, especially if they are state-backed bad guys: there is ample evidence that at least Russia and North Korea are carrying out serious aggressive activities in cyberspace (and I am not including any possible meddling with elections in either the US or the UK). Certainly, the US, the UK, and China have all publicly announced that they are looking at AI for protection and for counter-attacks, and Putin has hinted that Russia is as well. There are concerns that we are entering an arms race in cyberspace.

One thing I am going to miss is meeting, and keeping in touch with, the small companies who are building new things, like the clusters around LETI in Grenoble, France, and Imec in Leuven, Belgium. The ones I know best are in the UK and include XMOS, whose novel processor architecture is now targeting audio applications; UltraSoC, which provides IP for building analytics into chip designs; Graphcore, with an architecture for AI; Ultrahaptics, which uses ultrasound to create tactile sensations in mid-air as an alternative human-machine interface; and Imperas, the creator of virtual platform technology for building software for embedded systems. I am also fascinated by companies in Germany’s Mittelstand. These are small and medium-sized enterprises, often family owned, that are not seeking world domination but are technology leaders in their niches. An example is Lauterbach GmbH: from a small town in Bavaria, it is a world leader in debugging tools for embedded systems.

Also exciting, if slightly peripheral, is the work on graphene going on at the University of Manchester. After a slightly over-hyped period about five years ago, we are now in a phase where solid work is going into finding applications for graphene in a wide range of industries. One set of projects comes under the “Graphene Flagship” umbrella, one of the European Union’s biggest research efforts, which includes projects on using graphene for electronics, photonics, biomedicine, composites, sensors, and energy – including batteries, large solar cells, and supercapacitors.

And that brings us to our last irritation button-push – the UK’s decision to pull out of the EU. This is going to be a serious problem for the small companies that I have mentioned, as well as for the larger electronics companies. To take an example that is a hot spot as I write: the EU is creating Galileo, a satellite-navigation system that is independent of the USA’s GPS and Russia’s GLONASS. Britain has already contributed over £1 billion ($1.2 billion), British companies have been assembling the payloads, and Britain provides the sites for two of the control stations. Britain was also one of the major influencers behind the agreement to build Galileo, including the provision that access to certain aspects of it should be limited to EU members. Now that Britain is leaving the EU, it is to be thrown out of the project and blocked from access to the services for governments, the military, and security services. In return, the British Government is suggesting that it will create its own system. Galileo has so far cost over £8 billion, and any competing system will cost at least that amount.

The UK car industry, a major and fast-growing consumer of electronics, is already pulling in its horns. Jaguar Land Rover is moving production of the Discovery SUV to Slovakia next year and shedding jobs in the UK. Nissan, Toyota, and Honda have established large manufacturing plants in the UK, designed to serve the whole of Europe, and they are expressing concerns about the future.

The only mitigating feature is that British industrial policy, including attitudes towards electronics companies in other countries, is not being shaped by Twitter messages sent between bites of a Big Mac and sips of a Diet Coke.

I have enjoyed writing for Techfocus Media and EEJournal. It has given me the chance to meet interesting people doing exciting things. It has been great working for one of the last bastions of professional journalism, not just in electronics, but across the board. We don’t have a printing press, but we do strive to be objective (even in some of my comments above) and to report fairly. With all my worries about the future, I will miss being part of the electronics industry in its widest sense and EEJournal in particular. Good luck and good wishes to you all.

*Valedictory – unlike the American usage for high school graduations, in English usage, the term valedictory is used mainly when an Ambassador writes a summation at the end of his term in a country. This is my attempt at a valedictory as I leave the strange and fascinating country of electronics.

10 thoughts on “Goodbye to All This”

  1. I still object to your chosen metric as a blanket truth: “The average firmware team ships about 10 bugs per thousand lines of code.” This isn’t anywhere near reality for major code bases, and it doesn’t reflect a proper discussion of errors in high-value, critical-path production code over the life cycle, versus code bodies that may fail for 0.01% of the user population a few times a year, for features that have insignificant product value.

    This is NOT a software engineering problem … this IS a product engineering/marketing problem.

    Steve McConnell’s book, Code Complete, presents a different picture at Microsoft with “about 10–20 defects per 1000 lines of code during in-house testing, and 0.5 defect per KLOC in production.” And he notes NASA has projects with effectively zero errors per KLOC. NASA’s error-free code is several orders of magnitude more costly than MS production code.

    As I’ve noted before, it’s always a management-driven cost-benefit decision when to ship a product … generally driven by critical “time to market” decisions, critical cost-of-support decisions, and critical customer-acceptance decisions. It’s rare for any company to ship products with severity level one errors that render the product nearly unusable, and the few that have done so did so because of some critical QA process or user-expectation error. It’s fairly rare for any company to ship products with severity level two errors that impact the operational use of the product for the majority of the user base. Both of these are typically screened out during in-house testing, and a few are found when the product is released to select alpha/beta external testers.

    As noted in previous posts, every industry has its cost-benefit release thresholds … where errata are defined for functional failures and deviations from the initial specifications. VLSI engineers routinely produce products where early-customer-availability parts have significant errors, and they get better with each new tape-out. And some of the errata never get fixed.

    ALL of these are driven by management cost-benefit decisions combined with customer satisfaction levels.

    Defects in production code, in production electronics, in production mechanical systems, are always a Management decision … including the decision NOT TO TEST IN HOUSE, and just ship AS IS. Management is responsible for funding QA programs. Management is responsible for setting standards for the company. Management is responsible for cost benefit decisions to ship.

  2. I was expecting this from you. It is not my chosen metric. It was, if you go back and read it, a metric from Jack Ganssle, who has seen a great deal of software and measured it. You cite a reference to two organisations, Microsoft and NASA. Both of these should be producing good code: Microsoft because it learned the hard way that producing buggy products is not a good idea, and NASA because, when its code goes wrong, it is very public and very expensive.
    But the vast majority of software is not produced by these two companies. It is written by small groups of determined individuals who would revolt if compelled to follow NASA JPL’s ten rules. It is this code that Jack, at management’s behest, has measured.
    If you regard yourself as a professional, you can’t blame “management”. You have to take responsibility: in fact, that is what the Software Engineering Code of Ethics and Professional Practice says you should do. Oh, I forgot – you are not a software engineer.

  3. Dick — I actually have a B.S. in Computer Science from Cal Poly San Luis Obispo.

    I also have a strong background in statistics, and fully understand false claims about industry practices that some bigots make against the software industry.

    First, NOBODY has a valid statistical claim that “most” or “ALL” (AS SOME HAVE CLAIMED) software in production has 10 errors per KLOC. NOBODY. To do so would require a random selection of a statistically significant number of products, with a carefully standardized classification process for reviewing these bodies of code. Self-selected populations do not allow us to determine what “most” or “ALL” software looks like in terms of errors per KLOC.

    Out of many hundreds of thousands of software products, can we isolate a few with 10 or more errors per KLOC? Probably, almost certainly, because there are without a doubt some very buggy, failed products out there. Is this true of the majority (51% or more) of software product code bases that have been in production for more than a year? Highly unlikely. Are there errata in most software products? Just taking a guess, probably … just like there are errata in nearly every VLSI product shipping. What is acceptable for errata (both software and hardware) is a management decision, not an engineer’s decision. Engineers, both hardware and software, that choose to ignore management and seek to remove all errata are probably free to do so on their own time, and with their own personal budget for releases and tape-outs.

    An obscure bug in a section of code (or hardware logic) that doesn’t affect normal operation doesn’t make the product … either hardware or software … unusable. And it’s this critical differentiation between errors that is ignored by blind errors-per-KLOC numbers.

    Factor the errors per KLOC into three bins … errors that affect normal operation and seriously impact usability, errors that happen infrequently and are an annoyance with little to no usability impact, and errors that have no impact on usability. I’m pretty sure that most products have none of the serious ones, possibly a few of the infrequent ones, and that the no-impact numbers only support stupid straw-man arguments. This is the management goal in setting level 1, 2, and 3 errors, and in not releasing until all level 1 errors are fixed, along with most easy level 2 errors … with most shops fixing level 3 errors only when convenient, due to labor cost (product cost) concerns.

    Now if you want to assert that the majority of software products in daily use have 10 Level 1 errors per KLOC, please stand up and show a statistically valid study. Ditto for Level 2. If you want to wave around a statistically valid study that says nearly all software products have 10 Level 3 errors per KLOC, go ahead … most of us will doubt that and spend a lot of time ROTFL. And of course … these are the same standards for VLSI and most other forms of engineered products … few are perfect in every respect, and few have huge fatal usability flaws.

    So … step up, and quote a statistically valid study that concerns Level 1 and 2 errors.

    Otherwise, let’s apply the SAME standard to ALL VLSI, the majority of which has errata. And while we are at it, to most other engineering projects too.

  4. There are lots of level 2/3 errors in most shipping electronics designs as well, in the form of glitches, unnecessary transitions that consume power, and reset or metastability issues. They probably do not seriously affect operation, apart from reduced battery life or higher heat production, and the occasional random event that requires a power cycle to resume operation.

    The devices are still relatively stable and operational, but they have a bunch of dirty little secrets that mean, with a slight change in temperature and/or voltage, they may (or will) actually fail.

    My favorites of this class are SEU, ESD, surge, and brown-out failures in consumer electronics … where you have to power cycle the device to resume normal operation, either because it completely locked up or because it entered some unstable operating condition.

    I’ve operated a rural wireless ISP since 1999 … and we still have a problem with both routers and radios, where we have to tell the customer to power cycle their equipment to clear a failure. And nearly all the equipment has Cisco and Motorola logos on the front. Clearly those product managers decided to ship, rather than being 100% perfect and reliable … and it’s not a software issue.

    1. And when these Cisco and Motorola devices are operated with an APC UPS, they will run more than 700 days without being restarted … and rarely make it more than 200 days without one, simply because of power-line disruptions that cause the lights to blink too.

  5. And … I have the “Notify me of new posts by email.” box checked, and your reply didn’t generate a notification of your post.

    It seems the management of this publication didn’t think it was necessary to meet your 100% perfect software goal.

    Maybe because it’s poorly implemented on the server, maybe because the developers didn’t test Firefox, maybe for some other reason … any of which is a management issue, as the buck always stops at the top.

    1. Everyone in the management chain makes cost/benefit decisions … engineers are rarely granted the freedom to pursue 100% perfection with an unlimited budget.

  6. And Dick, the anti-programmer and anti-software-engineering bigotry is about as valid as claiming racial bigotry is correct by quoting Charles White’s pseudoscientific justifications as proof.

    Claiming here that your bias is correct, and using Jack Ganssle as your proof, is equally bigoted.

    And yes, we have had this discussion before, as you claimed, using similar proof, that software engineers are NOT REAL Engineers … to which I responded that nearly EVERY TOOL that all of engineering is based on today was built by some software engineer … and those tools are largely 99.999999% correct in their daily operation … many trillions of lines of correct code executed every second, every minute, every hour of every day, supporting life as we know it today on this planet.

  7. And I switched from electrical engineering to computer science in my 3rd year because of some important prof’s similar bigotry against digital solutions, claiming that their tried-and-true analog systems were faster, cheaper, and required fewer transistors than ANY digital system EVER would … that “digital was a flash in the pan passing fad” (exact quote). There was some significant blowback when I replaced my Pickett 101-C with a newly released HP-35; that kinda sealed my fate. Real engineering often requires far more than 2–3 significant digits of accuracy and precision.

    I think history vindicated that decision. Their basic argument was largely correct on transistor counts, but they completely missed the boat on cause and effect trying to predict the future of our industry.

    Yes, we still use the same core analog system-level control models, but that is about where it stops for implementation.

  8. And for the record, Jack’s site has a lot of good info, along with some serious FUD in his self-promotion, which is what Dick chose to assert as a global industry fact. We share a lot of similar perspectives on managing projects and development teams.

    Dick didn’t grow up inside this industry as a developer or engineer, so it’s easy for him to fire cheap shots and put misinformation out about a role he has never worn the hat or shoes for.

    I at least have to give Kevin some respect, as he spent some time as an engineering manager … even though our views are very different. We have both managed development teams up to about 30 people at times.

    Over my 50-year career in the field, I’ve taken several dozen leading-edge hardware/firmware/software development projects from concept to completion, under budget and on schedule, mostly with fixed-price contracts that were low-ball bids with exceptionally short schedules and high risk.

    To put this into perspective, we took pre-release, early-availability Motorola M68020 CPU/FPU chips from concept to delivered product, including design, PCB layout, debug, BIOS firmware, diagnostics, and a full UNIX V7 port, in 3 months. Actually, we did this twice, with a radically different architecture the second time, in the following 3 months, to meet the customer’s engineering department’s expanded design-change requests. We did this months ahead of ANY other M68020 design completion.

    I have a long history of short-schedule, on-time projects, using strict KISS development and fixed-price contracts to stop the customer from changing/expanding specs during development. Want changes? Wait for the phase-two contract specifications.

    I’ve also done the roller coaster from five people developing a new leading edge architecture, to $23M in angel seed, followed by growth to 900 people and a $100M IPO 18 months later.

    http://web.archive.org/web/20020802031703/http://www.dmsd.com:80/jbass.resume.html

    http://web.archive.org/web/20020603185851/http://www.dmsd.com:80/dmsd.history.html

