feature article

Can You Get Sued for Bad Code?

Legal Liability is Still User-Defined in the World of Software

“Every man believes he knows how to do three things: drink liquor, woo a woman, and write.” – anonymous

You’ve just installed a shiny new multimillion-dollar computer system used to dispatch ambulances in a large metropolitan city. You and your colleagues have spent years developing the software, which allows first responders to efficiently locate the nearest ambulance in any part of the city. When an emergency call comes in, your computer immediately pinpoints the nearest ambulance, alerts the dispatcher, and sends help on its way. It’s a triumph.

Except that it doesn’t work. In fact, it’s so bad that experienced dispatchers would rather use pencil and paper than your horribly buggy system. Rather than speeding up response times, the system delays and misdirects ambulances. Some aren’t even dispatched at all, long after the emergency call has come in. These inexplicable delays have probably cost lives. And it’s your fault.

Worse yet, this isn’t a hypothetical scenario. It really happened, in London in 1992 – and again in 2006, and again in 2011, and again in 2012. The London Ambulance Service’s repeated computerization efforts were spectacularly counterproductive, plagued with problems and bugs and crashes, all apparently caused by poor programming.

Whom do you sue over a catastrophe like this?

Are you, the individual programmer, at fault? Is your team of fellow programmers liable? Or does your employer take the heat for your group failure? Is there insurance for this kind of thing? Does somebody lose their job, or drive the company into bankruptcy? Who, exactly, pays for this fiasco?

As with most legal issues, there’s no clear answer. But a lot of smart people are working hard to try to draw the black-and-white lines that define a programmer’s liability. If your code costs lives, should you be allowed to keep coding?

We accept that some software flaws are probably inevitable, just like there are occasional flaws in steel or iron. Once in a while you get a bad Ethernet cable, or a bad shrimp at the restaurant. Mistakes happen. It’s not a perfect world.

But software is always created by someone; it’s not a random natural variation in raw materials. And there are ways to guarantee – or, at least, to improve – the chances of creating reliable software. All too often, though, we ignore the rules, skip the classes, or skirt the procedures that help us generate better code, even though we know we shouldn’t. So who carries the ultimate responsibility when our system crashes – literally – as in the case of a car’s faulty drive-by-wire system?

Set aside for a moment the impenetrable thicket of actual laws in this and other countries; how do you think this should be handled? If you could wave a magic wand and declare the law of the land, how much responsibility would programmers bear for their own creations? Does it rest with the employer or with the individual? And how much buggy code is okay – “it’s just one of those things” – versus actual, legal liability? How bad does it have to get before someone can sue you?

There’s a good article on this topic in The Journal of Defense Software Engineering. In it, author Karen Mercedes Goertzel points out that we don’t even know what laws cover software. Is it a product or a service? The distinction makes a big difference to the applicable laws. Software is obviously intangible (CD-ROMs and USB sticks don’t count), so that makes it seem like a service. But it’s also a thing… sort of. Goertzel points out that software isn’t really sold; it’s licensed, so that makes it a service in the eyes of the law. But at the same time, software (especially commercial software) is delivered from a single vendor to a multitude of unknown customers in an open market, which makes it a product.

It doesn’t get any easier from there. If your code is faulty, you might be guilty of violating your warranty, even if there wasn’t much of a warranty to begin with. We’re all familiar with click-through EULAs (end user license agreements), which appear to absolve the software vendor of any and all liability, up to and including nuclear holocaust, zombie apocalypse, and loss of data. But apart from that, most software (again, especially mass-produced commercial software) comes with an implied warranty of suitability. Word-processing software is expected to, well, process words. A game is expected to be at least somewhat entertaining (No Man’s Sky excepted). There are certain baseline features that your software is expected to perform, even if you never say so anywhere.

That puts you under the jurisdiction of contract law. If your code doesn’t do what it’s supposed to do, you’re in breach of contract. That strategy has been employed successfully by plaintiffs to sue software vendors who delivered buggy products, or products that weren’t exactly buggy but that didn’t deliver the expected feature set.

This all sounds very squishy and unclear, and it is. But some situations are obvious no-nos. You can be sued for fraud if you knowingly misrepresent or withhold information from a potential customer. For example, it’s not okay to hide the fact that you know about a vulnerability in your code, even if you intend to fix it later. This isn’t the same as having a vague suspicion that there might be bugs lurking somewhere; it applies when you already know about a specific vulnerability but choose not to disclose that fact. You’re also on the hook if you misrepresent the extent of your testing. If your code hasn’t yet been through all the tests that you’d normally perform, you need to say so. If that testing later uncovers something that you could have found earlier, you could be liable for not disclosing it sooner.

Contracts, warranties, and misrepresentations typically limit your financial exposure to the price of the product. In other words, a simple refund. That’s good news for you. But lawyers often categorize software malfeasance under product liability, a branch of tort law. Under tort law, you can be sued for a whole lot of money, far beyond the price of the software or the system that runs it.

On the plus side, product liability applies only when there’s actual harm or damage, not just inconvenience or misrepresentation. On the negative side, there’s been actual harm or damage. Your software has caused real injury, so the stakes are high.

One insidious aspect of product liability is that you’re responsible for other people’s malicious actions. If a third-party hacker finds a vulnerability in your code and exploits it to cause damage, you’re answerable for it. In short, security is your responsibility, and lax security puts you on the hook for any fallout from hacking. Good luck with that.

Don’t panic just yet. Throughout all of this, lawyers rely on the concept of “reasonable care.” That is, you’re not expected to be perfect, just good. The prevailing legal view is that it’s impossible (or nearly so) to create perfectly secure and reliable code, so nobody’s handing out death sentences for every little bug. Some kinds of mistakes are okay. But you are expected to exercise “reasonable care” in writing your code, testing it, and ensuring that it does what it’s supposed to do, and doesn’t do what it’s not supposed to. You need to be professional, in other words.

So… what exactly constitutes “reasonable care”? Nobody knows. Not every medical doctor is expected to save every single patient she sees – just the ones that a reasonably proficient and properly trained doctor could aid. Medical failure doesn’t automatically trigger a malpractice suit, and, likewise with programmers, not every bug is grounds for a legal case. But beyond some hard-to-define point, a panel of experts might decide that your code wasn’t properly and professionally developed, and hold you liable.

To help clarify that situation, there are movements afoot to certify programmers, the same way that doctors, lawyers, and accountants are certified. On the one hand, that seems like an onerous and unnecessary intrusion into our business. Who wants to pass a Programmers’ Bar Exam every year? Or prove they’re staying abreast of all the latest developments in coding, debugging, and certification? What a nuisance.

On the other hand, certification confers certain benefits – like legal protection. If you’re officially certified, you’re automatically covered by “reasonable care.” You’re a professional, not a hacker, and your participation in a project means it’s a stand-up job done to professional levels of quality. Nobody can impugn your programming skills when you’ve got your certificate right there on the wall.

Software lawsuits aren’t going away, whether they’re under the doctrine of contract law, implied warranty, product liability, tort, malpractice, or fraud. Nor should they. Programming isn’t a special class of profession immune from liability, and coders aren’t above the law any more than plumbers, doctors, or heavy-equipment operators. Clarity is what we need. Clarity about the limits of liability, the ways to avoid it, and the remedies available when reasonable care is not exercised. Once we all know the rules, we can play the game on a level field.

27 thoughts on “Can You Get Sued for Bad Code?”

  1. Clarity ONLY comes from citable court precedent, so YES, litigate this extensively, and appeal it in all cases, so that the established case law becomes clear.

    And at the same time, we also need to clearly define the user’s responsibility to exercise full due diligence in choosing and testing that software actually fits their needs prior to deploying it.

    In the London case, who was the manager that signed off on pre-deployment testing before taking the product live, and fully accepted the product as contractually conforming? The article doesn’t say if this software was written in house. Or was it custom written in response to a multi-bidder RFP and awarded to the chosen bidder with full specifications for the deliverable? Or was it an established product in use in other areas for this purpose, adapted with custom changes for this locale? Or did the service purchase a generic package on the market and adapt it in-house with in-house software skills?

    I know that the manager who signed off on putting that software in production would like to point fingers outside the organization to skirt liability for the organization … but best practice does state that the customer must decide when the product is ready for live field use, and NOT release the product for live use until it is fully tested and locally approved.

    As I related in another piece, good practice does require deployment of new software systems in parallel with the old system (manual or automated) for testing, leading to full production release. This isn’t new … it’s been best practice for critical systems for over 40 years.

    In every case, it’s never about individual programmer liability … it’s about organizational liability, starting with senior management that allows deficient systems to be deployed … both at the customer, and at the vendor doing the development.

    Just as with hardware, it’s not the hardware design engineer that is responsible for product liability … it’s the engineer’s employer … starting with senior management that failed to establish best practice for design reviews, in-house testing, and manufacturing testing, and for not shipping the product unless it is acceptable.

    EE Journal would like to equate doctors with engineers. In most cases doctors own their business, and have to individually (or as a clinic partnership) purchase their own liability and malpractice insurance. Their salaries/fees/compensation reflect these costs. Same/similar for licensed Structural Engineers who sign off on high-liability projects. In general, product development engineers (electrical, mechanical, software, etc.) are rarely required to carry product liability insurance as a condition of their employment … and their significantly lower salaries reflect that.

    As a contractor, there have been several times the client has requested that I provide product liability insurance for THEIR product … the most notable was a large MRI company, where the product liability insurance costs were staggering. My answer has always been the same … the client must specify both the insurance company and the policy terms, and I will pass the insurance costs directly back to the client as “cost plus 20%”. In every case the client’s legal team decided they didn’t need duplicate insurance and legal costs, and provided a full indemnity for the project.

  2. @Kevin – this is the third article in the last month you have released for publication directly singling out and calling into question the professionalism of software engineers.

    And again, each point that is directed specifically at the profession of software engineering has equally important parallels in each and every other development engineering profession. The assertion has been, directly, that these other “real engineers” do not allow similar mistakes, yet I continue to cite equivalent product failures.

    You continue these attacks against software engineers, COMPLETELY IGNORING that in every cited case, it was a senior management failure that allowed the products in question to be released and deployed.

  3. When a large commercial office building is filthy because the janitorial staff is only given 1 hour a week to do a 20-hour-a-week job, it’s management’s failure to properly fund and staff the janitorial team.

    When a large commercial office building is filthy because the janitorial staff is only working 1 hour a week while being paid for 20 hours a week, it’s management’s failure to properly supervise, reprimand, and replace non-performing janitorial staff.

    When a large commercial office building is filthy because the janitorial staff lacks experience, or does not understand the job expected of them, it’s management’s failure to hire skilled janitorial staff, give them clear guidance on what is expected of them, and properly train, supervise, reprimand, and replace non-performing janitorial staff.

    When a large commercial office building remains filthy, ultimately it’s the management chain all the way up to the CEO of the tenant or building owner that is failing to set and enforce acceptable standards.

    The same principles apply to ALL engineers (electrical, mechanical, industrial, structural, chemical, software, etc) in the work place.

    It’s a management problem when poor work product is accepted. It’s a significant management problem, all the way to the CEO and Board of Directors, when deficient product is sold to the public.

    Anyone attempting to scapegoat the janitors or engineers is either ignorant of the chain of responsibility, or has some other agenda that needs to be brought forward in the discussion.

  4. Engineering done right does post-mortem analysis of failures, and feeds that back into the design process. Failing projects and companies fail to do this. Successful projects are mandated by management to ALWAYS do this.

    Jerry’s bullet list hits it right on the head … something we are taught in engineering from day one … and violated significantly in this project. A clear London Ambulance Service (LAS) management failure. LAS management failed to choose a qualified vendor. LAS failed to manage the vendor’s design process and to set up shared design approval with all internal and external stakeholders. LAS failed to test the system in-house properly before acceptance, in parallel with the existing system. LAS failed to establish a deployment plan, with backup and project-abort guidelines, ultimately placing multiple lives at risk.

    Is this typical of ALL engineering projects? NO for successful ones … yes for nearly every project that fails, in all engineering disciplines.

    From projects I worked on in the same period, best practices that this project grossly violated were already established in the workplace and the industry. There are probably bigots who would say that was just British arrogance … but I’ve seen the same mismanaged project failures across disciplines in the US.

    From Jerry Saltzer’s MIT work doing post-mortems on a dozen failed projects, we teach students and professionals what to expect and look for. I learned this in the 1970’s … Jerry was obviously still teaching it in 1999 … in this presentation.

    Look at the other failures … classic and well-understood gross government bureaucracy failures.

    http://web.mit.edu/Saltzer/www/publications/Saltzerthumbnails.pdf

    London Ambulance Service – ambulance dispatching
    started: 1991; scrapped: 1992
    cost: 20 lives lost in 2 days of operation, $2.5M
    • unrealistic schedule (5 months)
    • overambitious objectives
    • unidentifiable project manager
    • low bidder had no experience
    • backup system not checked out
    • no testing/overlap with old system
    • users not consulted during design

  5. I started my career in the late 1960’s doing cross architecture, cross platform, and cross language porting of applications while in High School and College for various educational institutions and commercial employers/clients. IBM 1401/1410, 360/370 DOS/MFT/MVT, CDC/Varian/DG/GE/Xerox/NCR/Burroughs/DEC, ASM/Cobol/Fortran/Basic/Algol/PL1/Forth/RPG, and a few other hardware/software architectures/languages/platforms.

    By 1973 I was doing fixed-price bids for most of my projects, and delivering most on schedule, or early, for the next 40 years. About 30% of the projects were original/new hardware/software/firmware development projects; the other 70% were short-schedule rescues of failed projects, about half being complete redesigns around a different architecture or design specification that avoided the fatal flaws of the failed project.

    Nearly all of the failed projects we rescued were management failures driving poor design, staffing, process, schedule, and testing aspects of the project — either with in-house staff, or with poorly managed external contractors. Frequently it was poor management of the development team, with flawed and dynamically changing specifications, that led to team failures.

    In many ways, these mirror the LAS management failures to properly specify, design, test, and implement the project with multiple vendors, some poor, some experienced. In several cases, our client’s previous vendor walked away from the project because of the client’s failure to properly manage the project’s hand-off, not that different from LAS’s failures managing their vendors’ hand-offs.

    In most cases, it was the brutal reality of doing cross-architecture/platform/language projects, where tool chain and platform errors turn previously working code/subsystems into many hundreds of subtle implementation errors that are very hard to track down. Not that different from Kevin’s experience with new customers’ designs turning up hundreds of errors in his team’s tool chain.

    Good practice in tool chains is to create an extensive set of very simple regression tests that probe all the edges of the language specification and feature set, to validate up front that the language specification is faithfully implemented. This includes both the formal language and the variations of bit/byte/word packing and endianness that the architecture supports. Failure to do so leaves both the porting teams and customers chasing their tails over low-level tool chain errors.

    Doing cross-architecture and platform porting is full of that — word sizes, big/little endian, byte/word bus order for both native objects and composite objects. Plus subtle bugs in the tool chain, for code and data sequences, in the compiler, assembler, linker, and loaders. Then add to that benign bugs on the original architecture that fail brutally on the new architecture/platform — array bounds, pointer bounds, uninitialized variables, API changes, etc. Bugs in the I/O interfaces, in the memory allocation/mapping/paging, in library porting, etc.
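    For concreteness, here is a minimal sketch of the kind of tool-chain regression probe described above. It is my own illustration, not code from any project mentioned here; it checks byte order, type widths, struct padding, and plain-char signedness, the cross-architecture assumptions most likely to break during a port.

        /* Hypothetical cross-architecture probe: an example of the "very
         * simple regression tests" described above. Each check prints an
         * assumption that frequently differs between platforms. */
        #include <stdio.h>
        #include <stdint.h>
        #include <string.h>

        int main(void) {
            /* Byte order: view a known 32-bit pattern as raw bytes. */
            uint32_t pattern = 0x01020304u;
            uint8_t bytes[4];
            memcpy(bytes, &pattern, sizeof pattern);
            printf("byte order: %s-endian\n", bytes[0] == 0x01 ? "big" : "little");

            /* Type widths: ported code often silently assumes sizeof(long) == 4. */
            printf("sizeof(long) = %zu, sizeof(void *) = %zu\n",
                   sizeof(long), sizeof(void *));

            /* Struct packing: padding rules differ between compilers and ABIs. */
            struct probe { uint8_t tag; uint32_t value; };
            printf("struct probe: %zu bytes (%zu if unpadded)\n",
                   sizeof(struct probe), sizeof(uint8_t) + sizeof(uint32_t));

            /* Char signedness: plain char is signed on some targets, unsigned on others. */
            char c = (char)0xFF;
            printf("plain char is %s\n", (c < 0) ? "signed" : "unsigned");
            return 0;
        }

    A real suite would turn each printf into a pass/fail assertion against the language and ABI specification, and run on every target in the build farm.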

    In hardware designs, most shops have a HARD, FIXED, NEVER TO BE VIOLATED rule of synchronous design, with a single state machine per clock domain, and full synchronization registers at clock domain boundaries. In the software world, that is equivalent to single processor, single task, no interrupts, no data sharing between independent tasks that isn’t mailboxed or messaged.

    Most complex software designs today violate that at every level. Fully re-entrant interrupt/exception threads, multi-threading, concurrent shared memory, fully distributed cooperating processes/threads across multiple hardware/software architectures/platforms — rife with risks of race conditions, semaphore/locking problems, and deadlocks because of these design mandates. Some memory access orderings are not even defined, so update conflicts are last-process/thread-wins.

    The net equivalence in hardware land is fully asynchronous design, with data/registers being shared between multiple state machines concurrently operating inside and outside a shared set of multiple clock domains. Yeah – race conditions and deadlocks will happen without careful attention to timing, interlocks, and sequencing of control signals.

    A hardware designer’s management team protects them from that degree of complexity. A software team’s manager tells the programmers to suck it up and learn the craft, or find another job.
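    To make that “mailboxed or messaged” rule concrete, here is a minimal sketch of a one-slot mailbox between two POSIX threads. It is my own illustration (the names mailbox_post and mailbox_wait are invented), not code from any project discussed here. All shared data passes through the mailbox under a mutex and condition variables, so the raw shared-memory races and deadlocks described above cannot arise.

        /* One-slot mailbox: hypothetical sketch of message passing between
         * tasks instead of unsynchronized shared memory. Build with: cc -pthread */
        #include <pthread.h>
        #include <stdio.h>

        typedef struct {
            pthread_mutex_t lock;
            pthread_cond_t  ready;   /* signaled when a message is waiting   */
            pthread_cond_t  space;   /* signaled when the slot is free again */
            int             value;
            int             full;
        } mailbox_t;

        static mailbox_t mbox = {
            PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER,
            PTHREAD_COND_INITIALIZER, 0, 0
        };

        static void mailbox_post(mailbox_t *m, int v) {
            pthread_mutex_lock(&m->lock);
            while (m->full)                  /* wait until the slot is free */
                pthread_cond_wait(&m->space, &m->lock);
            m->value = v;
            m->full  = 1;
            pthread_cond_signal(&m->ready);
            pthread_mutex_unlock(&m->lock);
        }

        static int mailbox_wait(mailbox_t *m) {
            pthread_mutex_lock(&m->lock);
            while (!m->full)                 /* loop guards against spurious wakeups */
                pthread_cond_wait(&m->ready, &m->lock);
            int v = m->value;
            m->full = 0;
            pthread_cond_signal(&m->space);
            pthread_mutex_unlock(&m->lock);
            return v;
        }

        static void *producer(void *arg) {
            (void)arg;
            for (int i = 1; i <= 3; i++)
                mailbox_post(&mbox, i);      /* all sharing goes through the mailbox */
            return NULL;
        }

        int main(void) {
            pthread_t t;
            pthread_create(&t, NULL, producer, NULL);
            for (int i = 0; i < 3; i++)
                printf("received %d\n", mailbox_wait(&mbox));
            pthread_join(t, NULL);
            return 0;
        }

    The hardware analogy holds: the mutex plays the role of the clock-domain boundary, and the mailbox plays the role of the synchronization register.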

    Frankly, as a hardware designer, I’ve done several asynchronous projects … way fun, but you really have to do your homework and desk checking before committing the design. After working on OS ports, device drivers, memory mapping/paging, SMP processors, multi-core processors, and hybrid shared memory with distributed NUMA memory clusters … it’s actually a piece of cake.

    Senior Hardware designers completely freak out if it’s suggested that they do a fully asynchronous design with proper interlocks. Yet good systems programmers have been doing this at the OS level since the 1960’s, and good applications programmers since the 1970’s.

    We clearly let junior programmers, and even most experienced programmers, get their feet wet SLOWLY with parallel threads and distributed asynchronous systems because the race conditions and deadlocks are brutal to debug.

    So with all the insults from “real engineers” … I’m pretty sure I’m not the only hardware/software engineer that sees the “real engineers” as a bunch of coddled elites, that don’t even have a clue that the software engineers they attack are WAY above their pay grade.

  6. It is of course a bit more complicated than just blaming the software engineer. Engineering is always a team effort. Unreliable software doesn’t exist, but unreliable software engineering processes do. The issue is that most managers don’t know that, and neither do many software writers (as opposed to real software engineers).
    ARRL stands for Assured Reliability and Resilience Level. It has 7 levels. Most software is level 0, “use as is”. Level 1 means “trust as far as tested”. The problem is that software is often delivered without the test specifications. Level 2 means correct if no (HW) fault occurs; in other words, one needs formal proof. With level 3, the fault behavior is taken into account, etc.
    Now, just think for yourself: how much software even meets level 1? My guess is 99%, if you accept that you don’t know what the software was tested for. And the argument “proven in use” is only valid after e.g. 10 years.
    Conclusion: yes, it’s time for software writing to become a verifiable engineering discipline. Is it difficult? Yes, because the state space is exponential (in contrast with hardware, where the space is combinatorial). Have fun.

  7. @Eric — good points and references … now let’s put that into perspective.

    Clearly there are known software processes that have delivered products meeting the higher ARRL levels for decades … processes that were necessary for transportation (automotive, aviation, railway) and medical device certification to meet at least Safety Integrity Level (SIL) 3, if not 4 (the device will not cause deaths).

    These standards are for those industries, with mandatory compliance for both hardware and software as a system. HARA (Hazard And Risk Analysis), FMEA (Failure Mode Effect Analysis), IEC 61508, DO-178/254, CENELEC 50126/128/129, ISO/CD 26262, IEC 62304, etc.

    FAA, AC 21-33, 02/03/1993, Quality Assurance of Software Used in Aircraft or Related Products

    Office of Device Evaluation, Center for Devices and Radiological Health (9 September 1999). “Guidance for Industry, FDA Reviewers and Compliance on Off-The-Shelf Software Use in Medical Devices” (PDF). U.S. Food and Drug Administration.

    Center for Devices; Radiological Health (11 May 2005). “Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices”. U.S. Food and Drug Administration.

    And as I’ve noted before, banking and finance, ISO/PRF 12812, ISO/TR 13569:2005, ISO 22307:2008, ISO/IEC TR 27015:2012

    Work in these industries, and violate the standards, and yes, somebody will sue you, or the regulators will fine you, or mandate your product be removed from the market.

    99.9% of hardware and software devices/products/services created/manufactured for consumers are neither designed to these ARRL and SIL standards nor tested to them. That is probably true of most commercial/industrial devices too.

    Then the big question: do we mandate these testing standards for your home router hardware and software? For your home camera/baby monitor/etc. hardware and software?

    The point is that this will triple the device costs, or more … both for the hardware and the software. It’s not trivial to provide these standards’ testing levels for the electronic components in most devices … or for the software on top of them. In fact, nearly every electronic component comes with a significant legal disclaimer preventing its use in any application where injury or loss of life may occur. So the hardware guys, up front, clearly say: we DO NOT meet these standards, do not hold us to them.

    Wanna have some fun reading about hardware quality? Read this blog on disk drives and failure rates between vendors … especially the entry on drive failures after plants in the East were damaged by storm flooding in 2011; drives shipped with reduced warranties because of contamination, reducing life spans to a year or less. And it was reported some of those drives failed with 40% still under warranty.

    http://www.backblaze.com/blog/what-hard-drive-should-i-buy/

    http://www.backblaze.com/blog/hard-drive-reliability-stats-q1-2016/

    I hear an uninformed assertion that software should be held to these standards and costs … but not the cheap failing hardware it runs on.

  8. @Kevin, you said “Software engineering is a relatively new and immature art.”, yet it is as old as nearly every other modern engineering discipline, and has seen the same rate of change since the 1950’s.

    Computer and software engineering starts in the 1950’s.

    Electrical Engineering (with the exception of relay switching systems) was purely analog prior to the 1950’s – with very limited real understanding of what we consider “digital” technology — mostly power transmission, early understanding of RF with analog modulation of transmission power.

    Printed Wiring Engineering, or Printed Circuit Board engineering — prior to the 1950’s, everything was hand wired and soldered … even the early computers, radios, record players, and televisions. Everything we use today, FR4 and better, dates from the mid-to-late 1960’s at the earliest, with most of the core technologies from the 1980’s.

    Semiconductor Engineering starts with single transistors in the 1950’s, but wasn’t really widely adopted until the early 1960’s. Consumer and industrial equipment was still nearly all vacuum tubes in the 1950’s, with a few novelty battery-operated transistor devices.

    VLSI engineering gets a simple start in the early-to-mid 1960’s with the introduction of simple TTL devices with a few dozen transistors. By the end of the 1960’s we were seeing devices with a few hundred to a few thousand transistors using NMOS/CMOS design. What we really consider VLSI didn’t happen until the mid-1970’s.

    Astronautical engineering got its kickoff in the late 1950’s, but it was April 12, 1961 when the first Russian actually made it into space. And it took another decade to become a discipline in its own right.

    Materials engineering really didn’t exist prior to the early 1960’s, although its predecessor, metallurgy, focusing on metal properties, has been around a lot longer.

    Nuclear engineering also didn’t really get its start until the 1960’s; the prior weapons development was more a matter of weapons engineers working closely with physicists in top-secret government labs. It wasn’t until the technology became useful in power and medicine that we saw it become an engineering field.

    Systems engineering slowly developed in its own right as we appreciated the multi-disciplinary nature of complex systems following the space program of the 1970’s.

    Controls Engineering slowly developed in its own right for similar reasons, with the basics being taught as control systems theory classes during the 1970’s and 1980’s.

    Mechatronic engineering is another multidisciplinary field that has really only gained traction in its own right during the last decade.

    Performance engineering is another multidisciplinary field that has really only gained traction in its own right during the last decade.

    Bio-medical engineering is another field with a long, simple history that, with recent computer and technology advances, has spawned more than a dozen new engineering disciplines: Genetic engineering, Tissue engineering, Clinical Engineering, Neural Engineering, Pharmaceutical Engineering, Rehabilitation Engineering, … etc.

    … and there are another half dozen newcomers, like Robotics Engineering, Microelectronic Engineering, Environmental Engineering, etc …

    Even some of the old-school engineering fields look nothing like they did in the 1950’s, because of advanced modelling and advances in physics/chemistry.

    Aeronautical engineering has its basics rooted in practical technology development up to WWII, with some simple understanding of the basic principles gained from trial-and-error experiments into the early 1960’s. With computers becoming widely available in the 1960’s, the mathematical foundations finally became tractable, with simulations, and real understanding of the field became possible, allowing the design of supersonic jets and rockets.

    Structural engineering has similar roots in trial-and-error “best practices” up into the 1960’s. With the introduction of computer modelling, structural engineering combined with materials engineering has expanded in scope by a couple of orders of magnitude over the early days.

    Mechanical engineering has also seen the same breakneck advancements when combined with materials engineering, as models can now predict the material strength and failure modes that took extensive trial and error before.

    So by the same standard by which you dismiss Software Engineering, you MUST also dismiss nearly every other current field of engineering, even those with roots prior to the 1950’s.

    What is it relatively new compared to, when it’s been responsible for supporting nearly every significant advancement in EVERY OTHER engineering discipline above? Yes, it may have some aspects of art, but the best practitioners of every engineering discipline above also reach past the playbook of their field and exhibit their unique mastery more as art, or even black magic, to the unskilled observer.

  9. @Kevin – let’s put this witch hunt against software engineering to a close.

    First, nothing is defect free. Every work product of an engineer (of every “real” engineer of yours) will have a long list of conditions under which it will fail, or can be made to fail. Sort that list from simplest to most complex/costly: at the end of the list is the fact that the work product is unlikely to survive (continue to function past) the next big bang of our universe. Somewhere before the end of the list is that survival of the work product after being cast into the core of our sun is unlikely.

    For every condition where the work product can be induced to fail, there is likely to be some engineering solution that may avoid that failure, at some (possibly significant, or insignificant) cost.

    Management makes a choice of how many least costly failure modes need to be removed from that list prior to releasing the work product into the market. Seldom is the engineer allowed to make that choice.

    The value of a product is the income it will produce, less development and support costs. Management will optimize that trade-off to benefit the investors, the owners, and the continued employment of the engineers and other staff. At no time will the work product reasonably be able to survive, defect free, every possible outcome.

    One measure of a work product’s reliability is the number of hours/days/months/years all instances of the work product will perform defect free over the product’s life. MTBF is one such measure.

    If we look at every software product executing in our solar system today, and sum the number of lines of code executed defect free for a day, and also sum the number of software defects encountered in the same day, the reliability figure is something greater than 99.99999999999999%
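    As a back-of-the-envelope illustration of that arithmetic (the counts below are invented placeholders, purely illustrative, not measured data):

        /* Hypothetical defect-rate arithmetic: the inputs are assumptions,
         * chosen only to show the shape of the calculation. */
        #include <stdio.h>

        int main(void) {
            double lines_per_day   = 1e18;  /* assumed lines executed defect free per day */
            double defects_per_day = 1e2;   /* assumed defects observed per day */
            double failure_fraction = defects_per_day / lines_per_day;
            printf("failure fraction ~ %.0e per line executed\n", failure_fraction);
            printf("reliability ~ 1 - %.0e, i.e. better than 99.99999999999999%%\n",
                   failure_fraction);
            return 0;
        }

    The conclusion is insensitive to the exact counts: lines executed dwarf observed failures by many orders of magnitude.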

    We get a similar number for every revolution of Internal Combustion Engine devices in our solar system, in relation to the number of ICE failures. Yes, there are many thousands of ICE devices that fail each day, but the failure count is insignificant next to the failure-free device-hours/days/months/years of operation.

    What is important is that every aspect of our lives depends on low-failure-rate execution of software, and that IS a reality today. Failures are rare, exceptionally rare in relation to the error-free lines of code executed every day.

    That is why software is successful in producing the supporting work products for every aspect of our daily lives … and for every aspect of every “real” engineer’s work products today.

    Remove all software executing in our solar system today, and we will be immediately tossed back into the dark ages pre-1950’s. Actually worse than that, because we will not even have functional communications left, as there are not enough POTS lines still in existence. Nor do we still have pre-computer industrial skills left in our work force. Nor do we still have enough pre-computer cars/trucks/trains/planes for transportation.

    Software engineers successfully implemented all the work products that run today with exceptionally low error rates, making our daily lives with technology possible. Yeah, there are defects, but they are rare as a percentage of the lines of code executing defect free every second/hour/day/year of our lives.

    Anyone that slights software engineers contributions, is a bigot.

  10. @Kevin — I still believe you personally owe the software engineers in this readership an apology for the false claims in the three articles I’ve objected to, which clearly slight the error-free software engineering contributions that you, the other authors, and a fair number of other bigoted engineers take for granted.

    The proof refuting those false claims is in the very existence of low-error-rate software in every technical product in our lives.

    The very existence of this Journal is possible because of high-reliability, low-defect software engineering … starting with the tools used to author your work, edit your work, distribute your work, and for the readership to access your work … all enabled by the low-error-rate software products that power the businesses of each and every one of your advertisers.

    Open your eyes and start thinking … rather than repeating the bigoted, worn-out dissing of software engineers in the workplace.

    If I’ve not made that clear in the above comments, and the comments of the other two articles, then clearly the definition of bigot really does fully and completely apply.

    http://www.merriam-webster.com/dictionary/bigot

    Definition of bigot

    : a person who is obstinately or intolerantly devoted to his or her own opinions and prejudices; especially : one who regards or treats the members of a group (as a racial or ethnic group) with hatred and intolerance
