
Faster than Reality

It better be fast.

Whatever it is, whatever it does, it’s all good as long as it’s fast.

We live for speed in our supercharged world. After all, we’ve gone from a society that survived on one breadwinner per family to one where two breadwinners are the norm, to the point where some people hold multiple jobs just to keep from falling behind. (Well, in the US, anyway.) So we’re busy. Very busy. And we have to toss Facebook updates and tweets in on top of that.

So we have to be able to do things fast.

And your boss promised something impossible to his boss who promised something even less possible to his boss and so on up to the CEO who promised something ridiculous to the Board so that the share price could hopefully go way up for at least a few days and make them a boatload of money. So it’s your responsibility to figure out how to make the impossible, nay, the ridiculous, happen. Now. You’re going to be a busy dude and it’s your fault if it doesn’t happen on time.

So you’d better be able to do it fast.

Yeah, I know, power consumption matters more than ever these days, and battery discharge is the one thing that shouldn’t happen fast, but if the gadget itself isn’t fast enough, I won’t wait around to see how long the battery lasts.

So we design things to be fast. As fast as is possible, given all the other constraints (one of them hopefully being basic reality).

Of course, these fast things are complex. Inordinately complex, and getting more complex by the day. And they’re expensive to build. So we have to have ways of testing them before we build them. Because there’s lots to test, and we can’t afford multiple real prototypes.

So we resort to the virtual world for our testing. Which means tradeoffs. We know that we’re going to take a performance hit while we test; the whole point of hardware is that it’s fast. Until we get what we want into hardware, we know it’s going to be slow.

So if we want to do something truly complex, like booting Linux, in a simulation environment, then, depending on our level of abstraction, we can either lean back, feet on the table for a while, or we can go tour the Caribbean and come back hoping that the boot was successful.
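
To put hedged numbers on that, here’s a back-of-envelope sketch in Python. The instruction count and the per-level throughputs are order-of-magnitude assumptions for illustration, not measurements of any particular tool.

# Rough boot-time estimates at different modeling levels. Every figure
# below is an order-of-magnitude assumption, not a benchmark.

BOOT_INSTRUCTIONS = 1e9  # a minimal Linux boot, very roughly

# Assumed throughput (target instructions per second) by abstraction level:
speeds = {
    "RTL simulation": 1e2,   # cycle-accurate software simulation of a full SoC
    "emulation":      1e6,   # hardware-assisted
    "fast ISS":       1e8,   # a typical fast instruction-set simulator
}

for level, ips in speeds.items():
    seconds = BOOT_INSTRUCTIONS / ips
    if seconds >= 86400:
        human = f"~{seconds / 86400:.0f} days"
    elif seconds >= 60:
        human = f"~{seconds / 60:.0f} minutes"
    else:
        human = f"~{seconds:.0f} seconds"
    print(f"{level}: {human}")

At RTL speeds, that boot is the Caribbean tour; at ISS speeds, you barely get your feet onto the table.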

We can get closer to hardware speed by using hardware – just not the real hardware. Simulation acceleration and emulation take the slow stuff out of the software world and make it faster. But it’s still only a model of what we’re trying to do, so, while it’s faster, it’s still not fast.

One of the more recent developments in the chip world is the dominance of software in defining what the chip actually does. So not only do we have to simulate what’s happening in the hardware, we must also figure out how the software is going to work without having an actual system to run it on.

Software simulation is not really new; instruction-set simulators (ISSs) have been around forever. But we’ve gone from cross-development between platforms to building software for traditional (meaning PC-board-based) embedded systems to development of software for single-chip embedded systems.
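
For anyone who hasn’t poked at one, the heart of an ISS is just a fetch-decode-execute loop over the target’s instructions, running as ordinary host software. Here’s a deliberately toy sketch in Python for an invented three-instruction accumulator machine – it shows the concept only, not the internals of any real ISS (real ones model real ISAs and typically use tricks like just-in-time binary translation for speed).

# Toy instruction-set simulator for an invented 3-instruction
# accumulator machine. This is the naive interpreter loop only.

def run(program, memory):
    acc, pc = 0, 0
    while pc < len(program):
        op, arg = program[pc]  # fetch
        pc += 1
        if op == "LOAD":       # decode/execute: acc <- memory[arg]
            acc = memory[arg]
        elif op == "ADD":      # acc <- acc + memory[arg]
            acc += memory[arg]
        elif op == "STORE":    # memory[arg] <- acc
            memory[arg] = acc
        else:
            raise ValueError(f"illegal opcode {op!r}")
    return memory

# Sum memory[0] and memory[1] into memory[2].
print(run([("LOAD", 0), ("ADD", 1), ("STORE", 2)], [3, 4, 0]))  # [3, 4, 7]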

The costs of getting it wrong when developing across desktop platforms are measured in time and annoyance. The same goes for traditional embedded systems: you might have to do a PC-board spin, but, while not desirable, it’s not a deal-killer.

Not so with ICs. Granted, software can be changed without requiring a mask spin, but you damn well better be sure that the hardware interaction has been thoroughly vetted so that a patch will suffice.

And, since it is possible to change software functionality without a new mask, then let’s put as much functionality into software as possible. As long as it’s fast.

So now we need to develop more and more software, and we need to be able to test it out ahead of time, before the first (and last, right?) silicon comes out. So we can use virtual platforms to simulate the computing environment, or, presumably, we can go to emulation if we want more speed.

And we assume that, as in all simulations, we’ll sit around and wait for the software to execute, since, of course, we need to compromise on speed for the sake of getting early simulation access.

Or do we?

Maybe I’ve been asleep for a while as the world passed me by, but something slapped me upside the head a couple of weeks ago at DAC when talking with Imperas. They had just announced that their software simulation speed has improved by 50%. Now… that’s a pretty good speedup by typical measures, but, then again, it’s yet another press release with yet another performance improvement. One of a dozen such releases that get issued in any given month. A good thing, to be sure, but, unless it affects you specifically, it’s something of a yawner.

Until you realize one thing: the simulator is running faster than the actual system will run.

Maybe much faster. They’re claiming that their OVPsim provides ISS speeds of 2 GIPS – two billion simulated instructions per second.

Perhaps this transition happened a long time ago and I’m just figuring this out, but, I don’t know, having your simulator run faster than the actual system just doesn’t feel right. Hell, don’t ship the system, just ship the simulator; it’ll work faster than the actual system.

What’s wrong with this picture?

Well, two things. Actually, no: nothing is wrong with the picture; it only feels wrong. But there are two considerations that should make it feel less wrong. Yes, there is some abstraction that happens in an ISS, so that does help some, but not a lot. We’re not talking TLM here; we’re talking a reasonable level of detail.

The real trick comes from the fact that the simulation is happening on a high-power desktop machine with 2+ GHz clock speeds and oodles of memory. The target embedded system typically doesn’t have that.
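
The arithmetic bears that out. As a hedged example (the target figures below are illustrative assumptions, not any real chip’s specs), a 2-GIPS simulator modeling a core that would natively retire a few hundred million instructions per second is simply executing more target instructions per wall-clock second than the target itself could:

# Why a simulator can outrun its target: compare simulated throughput
# against the target's native throughput.

SIM_IPS      = 2e9    # claimed simulator speed: 2 GIPS
TARGET_CLOCK = 400e6  # hypothetical 400 MHz embedded core (an assumption)
TARGET_IPC   = 1.0    # assume roughly one instruction per cycle

target_ips = TARGET_CLOCK * TARGET_IPC
print(f"real-time ratio: {SIM_IPS / target_ips:.1f}x")  # -> 5.0x faster than reality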

So, after semi-convincing myself that this is actually the case, that I’m not missing something obvious or being led down the rosy marketing path, a more important question crops up: who cares? So what? Is this just a curiosity? Something you briefly write home to Mom about, but which never shows up in your memoirs or unauthorized biography?

Actually, there is a practical side to this. Absent this speed, software validation gradually moves from a hosted environment to an emulated environment to the silicon.

Now… I would never suggest shipping the product without testing the software on the actual silicon. But, short of that, this suggests that there’s no reason to develop the software on anything but the virtual platform. The architecture guys might use TLM for system modeling, but once you start developing, you can go to the ISS environment and stay there the entire time – or at least until it’s time to test silicon.

And all the time you’re developing, you’ll be running your tests faster than reality.

And that’s fast!

