
Faster than Reality

It better be fast.

Whatever it is, whatever it does, it’s all good as long as it’s fast.

We live for speed in our supercharged world. After all, we’ve gone from a society that used to survive on one breadwinner per family to a society with two breadwinners as the norm to the point where some people have to have multiple jobs just so they don’t fall behind. (Well, in the US, anyway.) So we’re busy. Very busy. And we have to toss Facebook updates and tweets in on top of that.

So we have to be able to do things fast.

And your boss promised something impossible to his boss who promised something even less possible to his boss and so on up to the CEO who promised something ridiculous to the Board so that the share price could hopefully go way up for at least a few days and make them a boatload of money. So it’s your responsibility to figure out how to make the impossible, nay, the ridiculous, happen. Now. You’re going to be a busy dude and it’s your fault if it doesn’t happen on time.

So you’d better be able to do it fast.

Yeah, I know, power consumption matters more than ever these days, and battery discharge is the one thing that shouldn’t happen fast. But if the gadget itself isn’t fast enough, I won’t wait around to see how long the battery lasts.

So we design things to be fast. As fast as is possible, given all the other constraints (one of them hopefully being basic reality).

Of course, these fast things are complex. Inordinately complex, and getting more complex by the day. And they’re expensive to build. So we have to have ways of testing them before we build them. Because there’s lots to test, and we can’t afford multiple real prototypes.

So we resort to the virtual world for our testing. Which means tradeoffs. We know that we’re going to take a performance hit while we test; the whole point of hardware is that it’s fast. Until we get what we want into hardware, we know it’s going to be slow.

So if we want to do something truly complex, like booting Linux, in a simulation environment, then, depending on our level of abstraction, we can either lean back, feet on the table for a while, or we can go tour the Caribbean and come back hoping that the boot was successful.

We can get closer to hardware speed by using hardware – just not the real hardware. Simulation acceleration and emulation take the slow stuff out of the software world and make it faster. But it’s still only a model of what we’re trying to do, so, while it’s faster, it’s still not fast.

One of the more recent arrivals in the chip world is the dominance of software in defining what happens in the chip. So not only do we have to simulate what’s happening in the hardware, we must also figure out how the software is going to work without having an actual system to run it on.

Software simulation is not really new; instruction-set simulators (ISSs) have been around forever. But we’ve gone from cross-development between platforms to building software for traditional (meaning PC-board-based) embedded systems to development of software for single-chip embedded systems.
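To make that concrete, here’s a minimal sketch of the fetch-decode-execute loop at the heart of any ISS. The four-opcode ISA, the encoding, and the sample program are all invented for illustration; no real instruction set looks quite like this.

    /* A toy fetch-decode-execute loop: the core of an ISS, stripped to
       the bone. Everything here is illustrative, not a real ISA. */
    #include <stdint.h>
    #include <stdio.h>

    enum { OP_HALT, OP_LOADI, OP_ADD, OP_SUB, OP_JNZ };

    typedef struct { uint8_t op, a, b, c; } insn_t;   /* one instruction */

    static const insn_t prog[] = {
        { OP_LOADI, 0, 3, 0 },  /* r0 = 3                    */
        { OP_LOADI, 1, 1, 0 },  /* r1 = 1                    */
        { OP_LOADI, 2, 0, 0 },  /* r2 = 0                    */
        { OP_ADD,   2, 2, 0 },  /* r2 = r2 + r0              */
        { OP_SUB,   0, 0, 1 },  /* r0 = r0 - r1              */
        { OP_JNZ,   0, 3, 0 },  /* if (r0 != 0) goto insn 3  */
        { OP_HALT,  0, 0, 0 },
    };

    int main(void) {
        uint32_t reg[4] = { 0 };
        uint32_t pc = 0;
        uint64_t retired = 0;            /* instructions simulated */

        for (;;) {
            insn_t in = prog[pc++];      /* fetch */
            retired++;
            switch (in.op) {             /* decode and execute */
            case OP_LOADI: reg[in.a] = in.b;                  break;
            case OP_ADD:   reg[in.a] = reg[in.b] + reg[in.c]; break;
            case OP_SUB:   reg[in.a] = reg[in.b] - reg[in.c]; break;
            case OP_JNZ:   if (reg[in.a]) pc = in.b;          break;
            case OP_HALT:
                printf("halted: r2 = %u after %llu instructions\n",
                       reg[2], (unsigned long long)retired);
                return 0;
            }
        }
    }

Real ISSs add memory models, exceptions, and peripherals, and the fast ones typically replace that switch with dynamic binary translation, but the shape of the loop is the same.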

The costs of getting it wrong when developing across desktop platforms are in time and annoyance. Similarly with traditional embedded systems: you might have to do a PC-board spin, but, while not desirable, it’s not a deal-killer.

Not so with ICs. Granted, software can be changed without requiring a mask spin, but you damn well better be sure that the hardware interaction has been thoroughly vetted so that a patch will suffice.

And, since it’s possible to change software functionality without a new mask, let’s put as much functionality into software as possible. As long as it’s fast.

So now we need to develop more and more software, and we need to be able to test it out ahead of time, before the first (and last, right?) silicon comes out. So we can use virtual platforms to simulate the computing environment, or, presumably, we can go to emulation if we want more speed.

And we assume that, as in all simulations, we’ll sit around and wait for the software to execute, since, of course, we need to compromise on speed for the sake of getting early simulation access.

Or do we?

Maybe I’ve been asleep for a while as the world passed me by, but something slapped me upside the head a couple of weeks ago at DAC when talking with Imperas. They have just announced that their software simulation speed has improved by 50%. Now… that’s a pretty good speedup by typical measures, but, then again, it’s yet another press release with yet another performance improvement. One of a dozen such releases that get issued in any given month. A good thing, to be sure, but, unless it affects you specifically, it’s something of a yawner.

Until you realize one thing: the simulator is running faster than the actual system will run.

Maybe much faster. They’re claiming that their OVPsim provides ISS speeds of 2 GIPS – two billion instructions per second.

Perhaps this transition happened a long time ago and I’m just figuring this out, but, I don’t know, having your simulator run faster than the actual system just doesn’t feel right. Hell, don’t ship the system, just ship the simulator; it’ll work faster than the actual system.

What’s wrong with this picture?

Well, two things. Actually, no, nothing is wrong with the picture; it only feels wrong. But there are two considerations that should make it feel less wrong. Yes, there is some abstraction that happens in an ISS, so that does help some, but not a lot. We’re not talking TLM here; we’re talking a reasonable level of detail.

The real trick comes from the fact that the simulation is happening on a high-power desktop machine with 2+ GHz clock speeds and oodles of memory. The target embedded system typically doesn’t have that.
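A bit of back-of-envelope arithmetic makes the point. Suppose, purely for illustration, that the target is a 500-MHz embedded core averaging one instruction per cycle – those figures are my assumption, not anyone’s spec. Against the claimed 2 GIPS, the simulation outruns the hardware it models:

    /* Illustrative speed comparison; both figures below are assumptions,
       not measurements. */
    #include <stdio.h>

    int main(void) {
        const double target_mips = 500.0;   /* assumed: 500 MHz x 1 IPC */
        const double sim_mips    = 2000.0;  /* the claimed 2 GIPS       */

        printf("simulated time runs at %.1fx real time\n",
               sim_mips / target_mips);     /* prints: 4.0x real time   */
        return 0;
    }

Crank up the host, or slow down the target, and the gap only widens.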

So, after semi-convincing myself that this is actually the case, that I’m not missing something obvious or being led down the rosy marketing path, a more important question crops up: who cares? So what? Is this just a curiosity? Something you briefly write home to Mom about, but which never shows up in your memoirs or unauthorized biography?

Actually, there is a practical side of this. Absent this speed, software validation gradually moves from a hosted environment to an emulated environment to the silicon.

Now… I would never suggest shipping the product without testing the software on the actual silicon. But, short of that, this suggests that there’s no reason to develop the software on anything but the virtual platform. The architecture guys might use TLM for system modeling, but once you start developing, you can go to the ISS environment and stay there the entire time – or at least until it’s time to test silicon.

And all the time you’re developing, you’ll be running your tests faster than reality.

And that’s fast!

