feature article

Eschew the Real World

Imperas Thinks Software Should Be Designed More Like Hardware

“Never doubt that a small group of thoughtful, committed people can change the world. Indeed, it is the only thing that ever has.” – Margaret Mead

Imperas employs a grand total of ten people: eight Britons and two Americans. Yet the company is taking on an entire industry – indeed, an entire ethos and culture – and attempting to turn it on its head.

In short, Imperas thinks you’re doing it all wrong.

Writing software, that is. You’re doing it wrong. And you’re doing it badly. Your code is buggy, it’s not ready on time, it doesn’t do what you expected, and it costs a lot more to develop than you planned; probably more than you’re aware. Speaking of which, do you even know how much you spent on code development for your current project?

That’s just sloppy. Imperas thinks that programmers, as a class and as a profession, could stand to learn a few hard lessons from their colleagues over on the hardware side of the house. Specifically, the SoC and ASIC designers. Now those guys have got their stuff together. You could learn a few things from them.

So what do the hardware guys do that the software people don’t? They simulate, mostly. They simulate the snot out of their SoC designs because it costs a million bucks if they get it wrong. And it delays the project by almost a year. And it generally costs a few engineers their jobs. So getting an SoC right the first time is an absolute imperative because, as Dr. Samuel Johnson observed, “When a man knows he is to be hanged in a fortnight, it concentrates his mind wonderfully.”

Thus, hardware design tends to be very formal and methodical. It’s structured. It’s testable. It’s simulated and modeled and verified out the wazoo. Code? Eh, not so much. Programmers tend to be of the ponytails-and-Birkenstocks variety, while hardware engineers hew closer to the Poindexter-with-eyeglasses-and-pocket-protector mold. Both disciplines are needed, obviously, but it sure would be nice if the code jockeys could produce more reliable, testable, verifiable software to go with that new SoC.

All well and good, but what is Imperas doing to make that happen? Glad you asked. The company provides models – lots and lots of models. The models are a key part (though not the only part) of a software-design methodology that Imperas believes can lead to better and more-reliable software. The company has models for more than 130 different CPUs, for example, including just about every different kind of ARM processor in existence, several generations of MIPS processors, the Synopsys/ARC architecture, Altera’s Nios, Xilinx’s MicroBlaze, the Renesas V850 family, and others. If you’re writing for any of those processors, Imperas should have you covered.

The idea is that you simulate your code running on a simulated processor with simulated peripherals and simulated APIs. Everything runs on a standard x86-based PC; the more CPU cores it has, the better. Imperas’s tools will translate your ARM, MIPS, or other binaries to x86 on the fly for simulation. As part of that translation, the tools also insert debug information so that you can watch your code and/or profile its performance. Since the entire thing is simulated anyway, the debug information doesn’t actually affect run-time performance at all. It’s the proverbial zero-footprint debugger. Whereas instrumenting or debugging real code running on real hardware will always insert some Schrödinger observability artifacts, simulated code doesn’t have that problem. Besides, debugging on real hardware requires… real hardware. Simulation lets the coders get started right away.
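To make the zero-footprint idea concrete, here is a minimal sketch of an interpretive instruction-set simulator in C. The toy five-opcode ISA, the structure fields, and the counters are all invented for illustration; this is not Imperas’s actual technology, which translates guest binaries to native x86 rather than interpreting them. The point is simply that the profiling state lives in the host simulator, not in the guest program, so observing the code costs the simulated machine nothing.

```c
#include <stdint.h>

/* Toy one-byte-per-instruction ISA, invented for illustration only. */
enum { OP_NOP = 0, OP_INC = 1, OP_DEC = 2, OP_JNZ = 3, OP_HALT = 4 };

typedef struct {
    uint8_t  mem[256];     /* guest program memory                         */
    uint32_t pc;           /* guest program counter                        */
    int32_t  acc;          /* guest accumulator                            */
    uint64_t insn_count;   /* host-side profiling, invisible to the guest  */
    uint64_t branch_count; /* ditto: counting branches perturbs nothing    */
} Sim;

/* Run the guest until OP_HALT; returns the final accumulator value. */
int32_t sim_run(Sim *s)
{
    for (;;) {
        uint8_t op = s->mem[s->pc++];
        s->insn_count++;              /* "zero footprint": guest state untouched */
        switch (op) {
        case OP_INC:  s->acc++; break;
        case OP_DEC:  s->acc--; break;
        case OP_JNZ:                  /* next byte is an absolute jump target */
            s->branch_count++;
            if (s->acc != 0) s->pc = s->mem[s->pc];
            else             s->pc++;
            break;
        case OP_HALT: return s->acc;
        default:      break;          /* OP_NOP and unknown opcodes */
        }
    }
}
```

Running a small guest loop through this leaves its architectural state exactly as un-instrumented execution would, while the host-side `insn_count` and `branch_count` fields quietly accumulate a complete profile.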

You can even swap out processor architectures midstream, if you’ve a mind to. There’s nothing to prevent your simulating your code on a MIPS processor one day and on a MicroBlaze CPU (or several) the next day. Want to see what effect upping the CPU core count would have? Go for it. Simulation means never having to wait for budget approval for new hardware.

For the programmer who doesn’t like to program, Imperas has bundled together complete kits of models and tools that simulate everything on several real-world boards. For example, you can get an ARM Versatile Express board-simulation kit, a Freescale Vybrid board complete with the MQX RTOS on it, a MIPS Malta board simulation, and others.

Are there downsides? Sure. For starters, simulation is slow by nature. Well, slower than real hardware, anyway. But speed counts only when you have real hardware to compare against. For teams who like to get a head start on their software, before the hardware is ready, simulation is about their only option. And, for what it’s worth, Imperas says its simulation is faster than everyone else’s. So there.

There’s also the dilemma of missing models. Yes, Imperas has hundreds of models for most any programmable device you could think of, but what about the other devices that aren’t modeled? Do you spend the time creating a model, or do you just skip it? Imperas has helped to define the Open Virtual Platform (OVP) standard that anyone can use to create reusable models, but, so far, the library of functions is limited to things that Imperas itself has created. Perhaps with a bit more time, there will be more third-party models, but today is not that day.

Then there’s the whole learning-curve and tool-affinity thing. Developing code around models (or, at least, utilizing models as part of your development program) is new to a lot of coders. It’s alien. It’s foreign. It’s… almost hardware-like. Prima donna coders who consider themselves artistes, above the quotidian concerns of schedules, budgets, and reliability, may find that the whole Imperas methodology rubs them the wrong way.

For the ones who want to keep their jobs, however, it’s a good alternative to winging it.

Fixing bugs is almost too easy these days, and because of that there’s a tendency to ship the code now and promise to fix it later. Endless revisions are becoming a way of life. That’s an attitude that hardware developers can’t adopt; their stuff is too expensive. It’s only the programmers who can get away with endless tweaking – tweaking that’s done on the customers’ time, by the way. If what we need is solid code that’s correct before it leaves the factory, and not at some indeterminate time down the road, then software development will have to get a bit harder.

7 thoughts on “Eschew the Real World”

  1. This article probably gets a lot of laughs from less thoughtful, arrogant hardware types.

    The stereotype BS is just another form of bigotry that does NOTHING for building teams.

    The challenge is to get hardware types to accept broad, undefined, moving-target software projects and do any better at designing and delivering a better product in a shorter time.

    My challenge to these clueless folks is: OK, next project, you do the software too. Then we can dissect your whining about specification changes, schedules, and quality of results on YOUR watch.

    I’ve got formal training in both EE and CS … and have worked both sides of the business at various times in my career … the bitching from both sides is clearly about the lack of respect, and arrogance about a job they KNOW nearly NOTHING about. And frequently the same folks do not even meet their own standards in their own field.

    For my own sanity, I left the corporate W2 salaried programmer world because it’s damaging to your career to tell the VP of Marketing and Sales “No” to another major specification change “after” development freeze because he wants a bonus on a major new sale. The hardware guys have a clear out, in that it’s a clear 3-9 month schedule slip to make changes after production release. Software teams are just told to get it done, even when it means 100hr/wk, or start looking for a new job.

    As a contract hardware/software developer, the customer signs off on specifications, and that is what you deliver. If they want changes, that’s a separate contract AFTER the project is delivered. In house software teams are frequently screwed into long hours, and late specification changes.

    Software is far more complex than hardware … in its size, and in the sheer amount of equivalent gate logic embedded in the code. Compile a major software project into gates … where every if/then/else turns into a mux, and the resulting state machine is huge. The software for most projects has a logic complexity factor several orders of magnitude greater than the hardware it frequently runs on. http://en.wikipedia.org/wiki/Source_lines_of_code

    Now if you think you want to write formal specifications for a few billion SLOC to rid the existing code bases of bugs, I don’t think there is the manpower available to write those formal specifications, nor the manpower to resolve the automated differences between the specifications and the existing code base. What’s easy for a few hundred or thousand lines of Verilog isn’t nearly as easy for a few hundred thousand lines of software … or a few billion, either.

    Or, if you are really a hardware prima donna, consider the test vectors necessary to fully qualify a few billion SLOC of software logic.

    Tell management that software projects need to be extended by a factor of roughly 10 to conform to rigid, fully modelled specifications prior to implementation, with formal test vectors to formally verify every decision path in the code …. ummm, does time to market really count?

    The last 1% and 0.1% of non-critical functional errors are really costly in dollars and schedules to remove. Our management world has grown to accept delivering 99.9% of the software product being functionally correct in 10% of the time … and is very willing to work on the last 0.1% of less critical functionality after release.

    That in the software world is a VERY important critical Time to Market tradeoff. In fact, every well done software release schedule in this industry has well defined testing cycles to prune away the critical functionality failures prior to release, with a fair list of next release errata.

    See the same thing in hardware and silicon projects … with “chip mis-features” that are documented in the data sheet errata, listing features to be maybe corrected in the next tape out. Or not.

    As a systems software guy, bringing up pre-release early-availability CPUs and specialty chips … some of this errata makes life very difficult … and some of it is simply never fixed, with the features deprecated in the specification.

    So if the hardware prima donnas want to assert that their process is perfect, and doesn’t include fixes in later fab cycles after formal customer release … I’m certain we can fill several columns with horror stories of the failed hardware development processes that are asserted to be perfect above. And we can talk about their management’s time-to-market choices in releasing a 99.9%-correct silicon product.


  2. I do not really see the upside of simulating a CPU on another CPU for most projects (sure, there may be exceptions). Software development can easily be started before any hardware is available by simply compiling the code for hardware that is already available (such as a standard desktop PC).

    Of course, when it comes to low-level code and real-time operating systems, a desktop PC may not be ideal. However, a big part of the code is usually not very hardware-specific and can be developed on a desktop PC. 90% of the hardware-specific code can be run easily on any development HW containing the same processor.

    One may argue that the simulation approach also covers compiler bugs. I agree on that but what about bugs in the models?

    I also do not agree with the argument that SW guys don’t do any simulation. Many embedded SW teams apply test-driven development, which implies writing automated tests for the software. I don’t see any difference from what the HDL guys are doing by writing self-checking testbenches.

    I work in both fields and basically I use the same methodology for writing and verifying software as for writing and verifying HDL.

  3. I understand simulating a CPU for a complex SoC in an attempt to get early code development on low-level memory and device interfaces … in particular, to have bring-up diagnostics done by the time silicon and boards finally arrive.

    As obruend notes, that’s really not that important for most projects, as the code can be written and tested on other platforms with function stubs at all the hardware interfaces that sequence a test-vector engine to simulate the expected/defined hardware interface. I’ve done full Unix ports with MMU, paging, and basic device drivers in advance of hardware with this approach, before the silicon and boards were available.
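    The stub-at-the-interface approach described above can be sketched in a few lines of C. Everything here is invented for illustration (the register map, the vector format, and the uart_send driver); the idea is just that the driver under test calls reg_read()/reg_write(), and on the host those calls are satisfied by a scripted test vector instead of by real silicon.

```c
#include <stdint.h>
#include <stddef.h>

#define REG_STATUS 0x00u   /* hypothetical register: bit 0 = TX ready */
#define REG_TXDATA 0x04u   /* hypothetical register: transmit data    */

typedef struct {
    int      is_write;     /* 1 = expect a write, 0 = answer a read */
    uint32_t addr;         /* register the driver should touch      */
    uint32_t value;        /* value written, or value to return     */
} Vector;

static const Vector *vec;
static size_t vec_len, vec_pos;
static int vec_errors;

void stub_load(const Vector *v, size_t n)
{
    vec = v; vec_len = n; vec_pos = 0; vec_errors = 0;
}

/* Any mismatched or unconsumed vector entry counts as a failure. */
int stub_errors(void)
{
    return vec_errors + (int)(vec_len - vec_pos);
}

uint32_t reg_read(uint32_t addr)
{
    if (vec_pos < vec_len && !vec[vec_pos].is_write && vec[vec_pos].addr == addr)
        return vec[vec_pos++].value;
    vec_errors++;
    return 0;
}

void reg_write(uint32_t addr, uint32_t value)
{
    if (vec_pos < vec_len && vec[vec_pos].is_write &&
        vec[vec_pos].addr == addr && vec[vec_pos].value == value) {
        vec_pos++;
        return;
    }
    vec_errors++;
}

/* The driver under test: busy-wait for TX-ready, then send one byte. */
void uart_send(uint8_t byte)
{
    while ((reg_read(REG_STATUS) & 1u) == 0)
        ;                  /* poll until the (simulated) hardware is ready */
    reg_write(REG_TXDATA, byte);
}
```

    Build the same driver source against real memory-mapped accessors for the target and against this stub for the host, and the bring-up code gets exercised long before boards arrive.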

    What the hardware guys don’t understand about software development is that it’s not ALL AT ONCE like chip or board bring-up is in most shops. Good software development practice does not write a million lines of code before testing starts … every function is written and tested individually, with stepwise refinement along the system implementation path. This greatly reduces integration testing and removes huge unknowns from it.

    Hardware guys could learn a lot of lessons from this by also doing their development with stepwise implementation and testing, and shorten their time-to-market schedules significantly in many cases.

    I’ve used this in my own hardware development projects since the ’70s. In the “very old days” we wire-wrapped prototypes and tested one functional circuit at a time. For simple projects that’s not that bad, but as complexity rises, debugging wire-wrap mistakes and failures becomes a nightmare.

    The next iteration of this incremental design-and-test strategy was to design the data and control paths that were obvious for the project in the first few days of the schedule. Then lay out a 99%-complete PCB with those expected control and data paths for quick turnaround. With PALs, PLDs, and FPGAs responsible for the control logic, it’s pretty easy to identify the probable inputs and outputs without really worrying about the actual logic function or state machine implementation at this early stage. The prototype PCB was then sent out to fab while we set to work on doing the PAL, PLD, and FPGA designs. A couple of weeks later we would have our 99% PCB, and could start populating and testing it one functional section at a time in the lab.

    Most projects would have a few dozen “blue wires” at the end of board bring up. We generally would order a dozen or two of these boards, and hand the rest over to production techs to populate, and apply the “ECO” blue wires, so that software, marketing, and key customers had early availability hardware months before final production release.

    Rarely would we need to make parts placement changes to the board, so production engineers could also do test fixtures for these early availability boards that would be nearly identical to the needed fixtures at production release of the project … something that also took a few weeks off the production release to customer delivery schedule.

    We could then take on design change requests, and roll a second PCB with the design changes and “ECO blue wires” incorporated for a second round early availability build about 6-8 weeks behind the first.

    This greatly takes the pressure off the software guys, as they have testable 99% hardware early in the development cycle. And it takes the pressure off the hardware guys, in that we can delay development and test of some of the higher-risk IO sections until after the core functionality of the board is in the hands of the software guys. Often by moving that functionality to a mezzanine board.

    How does this work in practice? One short-schedule M68020 desktop server project back in 1985 went like this. Marketing asked for a critical new M68020 desktop product, to be competitive with a newly released x86 product, at x86 prices. A team of myself and five part-time college students took the marketing request and mapped out the basic data and control paths in a day. Three days later we released a 4-layer PCB for fab, and set to work doing the PALs/PLDs and device drivers. Two weeks later we had PCBs, and unmarked ES M68020s and FPUs from Motorola. Two weeks after that we had boards up, diagnostics functional, and were working on the UNIX port. Twelve weeks from start we delivered 100%-functional prototypes to Marketing, with a full Unix port, in a desktop Plexiglas case: 24 serial ports, SCSI disk/tape, floppy, 4MB memory.

    Marketing/Engineering asked for some design changes, and we rolled a very different version with a similar development cycle in a second 12 week period that was ready for production release. The second version was 30% faster, 20% cheaper, lower BOM, and easier to manufacture, shaving marketing’s target sales price by nearly 50%, with better functionality.

    In contrast, the same company’s normal engineering staff also took on the project, and with 20+ seasoned full-time engineers, and about the same in software staff, was unable to deliver in a year. What they tried to deliver exceeded costs by 3X, with excess unmarketable functionality, and higher manufacturing complexity and costs.

    Incremental test and development is a powerful tool for meeting or shortening time-to-market demands, one that seems lost on most of the more rigid hardware types who are less agile and are only comfortable with well-defined, slow-paced development projects.

    A lot of KISS along the way helps too …

  4. Was checking out the STM32F7 Discovery Kit that was just released, and reading down through the documentation, lo and behold: errata …. and I remembered Jim’s rants about perfect, 100%-correct hardware design processes that should be copied by 99.9%-functionally-correct software teams.

    Sorry ST … you are not the bad guys in this: DM00145382.pdf


    And in some other work done last year, we have Xilinx errata on CES silicon: en183.pdf


    So Jim, I don’t buy the crap printed in your article above … to quote you: “Fixing bugs is almost too easy these days, and because of that there’s a tendency to ship the code now and promise to fix it later. Endless revisions are becoming a way of life. That’s an attitude that hardware developers can’t adopt; their stuff is too expensive. It’s only the programmers who can get away with endless tweaking – tweaking that’s done on the customers’ time, by the way. If what we need is solid code that’s correct before it leaves the factory, and not at some indeterminate time down the road, then software development will have to get a bit harder.”

    So both silicon and board level systems have their errata too … so if your standard, that isn’t being met by hardware guys either, is to be perfect before ship, then we need to start looking at increasing BOTH hardware and software time to market by about 10X.

    And some arrogant hardware types need to tone down their hate speech about “For those prima donna coders who consider themselves artistes and above the quotidian concerns of schedules, budgets, and reliability…”

    It really comes down to a complete lack of understanding about each other’s processes …. and who is really just winging it, as you say.

    Enough? … maybe a retraction of the hate speech?

    Or maybe it’s time just to start contacting companies that are paying for ads on this forum, and ask why they are supporting such divisive drivel.

  5. Enough?

    Like you, I’ve been both a programmer and a hardware engineer, managed other programmers, managed other hardware engineers, and managed teams of both at the same time. Neither discipline was ever perfect, naturally, and I can’t seem to find the part where the article says they are.

    (I also can’t find the inflammatory adjectives, unjustified accusations, and all-caps name-calling that I meant to put in. Darn.)

    However, I stick by my contention that, because it’s so much easier to fix software bugs compared to hardware bugs, programmers have a built-in escape hatch that hardware developers don’t have. That encourages them and/or their bosses to ship code before it’s ready. Sure, chips and boards have errata, but you don’t get to download patches every few days/weeks/months. Instead, you postpone shipping and the customer doesn’t become an unwitting beta tester. You get maybe one shot every year to fix bugs in an SoC, versus every few hours for many types of code. Who wouldn’t take that opportunity, when the boss is pushing to ship?

    One of software’s greatest qualities unfortunately also appeals to a product manager’s greatest weaknesses.

  6. Nearly all your gripes are strictly about exceptionally poor management problems that few properly managed software shops actually have, not a problem with rank-and-file programmers as a class and profession, as you state. So to call out ALL programmers for this failing is exceptionally overgeneral, and nothing short of hate speech given the drivel you continue to paint onto nearly ALL programmers.

    You want to cite “Specifically, the SoC and ASIC designers” as having it nearly perfect … ignoring the multiple tape outs (after first customer ship) and the errata that fail product specifications, which are EXACTLY the same problem on the hardware side, especially when they show up as in-field PLD/FPGA updates.

    I want to cite bank and other financial programming teams, aerospace programming teams, medical programming teams, flight and ground transportation controls programming teams, weapons and countermeasures programming teams, casino gaming teams, and many other hardware/software embedded product teams that actually get it right EVERY TIME too … probably with a MUCH higher degree of professionalism than most SoC and ASIC teams, because there are both LIVES and millions of dollars at stake when these teams allow a mistake to be released into the products. Are these the programmers you want to condemn as “programmers, as a class and as a profession,” with your bigoted hate speech?

    Most of these software teams have to be compliant with strict industry reporting requirements for risks and failures, that can make or break an entire company if ignored.

    So if a manager is unable to staff their team with seasoned professional programmers who get it right, that is the manager’s problem, and the company execs’ for not getting the right staffing. If the company execs and management choose to ship a 99.1%-complete product because of time-to-market concerns, that is their business decision, as they are ultimately responsible to their customers and investors. It’s not the fault of “programmers, as a class and as a profession,” that they have management making these decisions, or creating these faults in the team staffing.

    I can walk into most cube farms of these software developers, and there are going to be relatively few ponytails. When you get out into industry, at banks, major data centers, most large businesses, if you took their programmers and stood them up against a wall with a group of hardware engineers, it would be hard to tell the difference. Yeah, in some shops there are outrageous personalities … on BOTH sides. Again, your painting ALL programmers, as a class and a profession, this way is nothing short of hate speech in an attempt to declare programmers somehow much less professional than engineers.

    A well-managed software shop has strict design and pseudocode reviews (models), automated regression testing, and goal testing built into the development cycle from the beginning … there are sloppy shops too, on both sides of the aisle, hardware and software. I’ve seen first-run prototype boards that were declared fully checked come back with so many errors they could not be corrected with ECO wires … and two weeks later, rush boards came back with half the errors. So again, comparing sloppy software shops with perfect hardware shops is no comparison at all, except to continue to paint ALL software types, as a class and as a profession, as second-rate engineers … total BS.

    When you declare that “Specifically, the SoC and ASIC designers,” as a class and a profession, have their stuff together, I’m going to object that they don’t have it perfect either … and after several tape outs, still ship buggy product with significant errata …. exactly the same management problem of trying to manage time to market.

    I will grant you that software bugs are easier for sloppy, time-to-market-driven management to ship … as you want to stick to as your point … but that is a management problem and IN NO WAY gives you the right to condemn hundreds of thousands of hard-working, excellent software engineers with the hate speech you freely attribute to the ENTIRE industry of programmers, as a class and as a profession.


    You could have presented the Imperas product without going on your vendetta against hard-working programmers who have shitty upper management changing requirements, schedules, and specifications right up to the day the product ships.

    To go on your rant about prima donna coders is pretty strong language, when there are also prima donna hardware engineers with exactly the same failings.

    That was YOUR choice to paint ALL programmers with a broad totally FLAWED brush, as a class, and as a profession.


    When you demean “programmers, as a class and as a profession,” then you have exercised hate speech.

    hate speech: Bigoted speech attacking or disparaging a social group or a member of such a group.

    demean (transitive verb; de·meaned, de·mean·ing, de·means): To lower in status or character; degrade or humble: “professionals who feel demeaned by unskilled work.”

    – The American Heritage® Dictionary of the English Language, 5th edition. Copyright © 2013 by Houghton Mifflin Harcourt Publishing Company.

