
Faster than Reality

It better be fast.

Whatever it is, whatever it does, it’s all good as long as it’s fast.

We live for speed in our supercharged world. After all, we’ve gone from a society that used to survive on one breadwinner per family to a society with two breadwinners as the norm to the point where some people have to have multiple jobs just so they don’t fall behind. (Well, in the US, anyway.) So we’re busy. Very busy. And we have to toss Facebook updates and tweets in on top of that.

So we have to be able to do things fast.

And your boss promised something impossible to his boss who promised something even less possible to his boss and so on up to the CEO who promised something ridiculous to the Board so that the share price could hopefully go way up for at least a few days and make them a boatload of money. So it’s your responsibility to figure out how to make the impossible, nay, the ridiculous, happen. Now. You’re going to be a busy dude and it’s your fault if it doesn’t happen on time.

So you’d better be able to do it fast.

Yeah, I know, power consumption matters more than ever these days, and the battery discharging is the one thing that shouldn’t happen fast, but if the gadget itself isn’t fast enough, I won’t wait around to see how long the battery lasts.

So we design things to be fast. As fast as is possible, given all the other constraints (one of them hopefully being basic reality).

Of course, these fast things are complex. Inordinately complex, and getting more complex by the day. And they’re expensive to build. So we have to have ways of testing them before we build them. Because there’s lots to test, and we can’t afford multiple real prototypes.

So we resort to the virtual world for our testing. Which means tradeoffs. We know that we’re going to take a performance hit while we test; the whole point of hardware is that it’s fast. Until we get what we want into hardware, we know it’s going to be slow.

So if we want to do something truly complex, like booting Linux, in a simulation environment, then, depending on our level of abstraction, we can either lean back, feet on the table for a while, or we can go tour the Caribbean and come back hoping that the boot was successful.
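
To put some very rough numbers on that – and these figures are my own illustrative assumptions, not anyone’s benchmark – suppose a Linux boot retires something like a billion instructions. A quick back-of-envelope sketch shows why the abstraction level decides whether you get the armchair or the cruise:

# Back-of-envelope boot-time estimates.
# All speeds and the instruction count are illustrative assumptions.
BOOT_INSTRUCTIONS = 1e9                      # rough guess for a Linux boot

sim_speeds = {                               # simulated instructions per second (assumed)
    "RTL simulation": 500,
    "Emulation": 1e6,
    "Instruction-set simulator": 50e6,
}

for env, ips in sim_speeds.items():
    hours = BOOT_INSTRUCTIONS / ips / 3600
    print(f"{env:26s}: {hours:10.3f} hours")

At the assumed RTL rate that’s weeks of wall-clock time; at the assumed ISS rate it’s seconds.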

We can get closer to hardware speed by using hardware – just not the real hardware. Simulation acceleration and emulation take the slow stuff out of the software world and make it faster. But it’s still only a model of what we’re trying to do, so, while it’s faster, it’s still not fast.

One of the more recent developments in the chip world is the dominance of software in defining what happens in the chip. So not only do we have to simulate what’s happening in the hardware, we also have to figure out how the software is going to work without having an actual system to run it on.

Software simulation is not really new; instruction-set simulators (ISSs) have been around forever. But we’ve gone from cross-development between desktop platforms, to building software for traditional (meaning PC-board-based) embedded systems, to developing software for single-chip embedded systems.
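
For anyone who’s never looked inside one, the core of an ISS is conceptually simple: a software loop that fetches, decodes, and executes the target’s instructions. Here’s a toy sketch for an invented three-instruction machine – not any real simulator or ISA, just the shape of the thing:

# Toy sketch of the heart of an instruction-set simulator:
# a fetch/decode/execute loop over the target program.
# The three-instruction ISA here is invented purely for illustration.

def run(program, max_steps=1000):
    regs = [0] * 4                      # tiny register file
    pc = 0                              # program counter
    for _ in range(max_steps):
        if pc >= len(program):
            break
        op, a, b = program[pc]          # "fetch" and "decode"
        if op == "li":                  # load immediate: regs[a] = b
            regs[a] = b
        elif op == "add":               # regs[a] += regs[b]
            regs[a] += regs[b]
        elif op == "jnz":               # branch to b if regs[a] != 0
            if regs[a] != 0:
                pc = b
                continue
        pc += 1
    return regs

# Count r0 down from 3 to 0 using a backwards branch.
print(run([("li", 0, 3), ("li", 1, -1), ("add", 0, 1), ("jnz", 0, 2)]))

Real simulators typically get their speed from tricks like dynamic binary translation rather than interpreting one instruction at a time, but the job being done is the same.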

The costs of getting it wrong when developing across desktop platforms are in time and annoyance. Similarly with traditional embedded systems; you might have to do a PC-board spin, but, while not desirable, it’s not a deal-killer.

Not so with ICs. Granted, software can be changed without requiring a mask spin, but you’d damn well better be sure that the hardware interaction has been thoroughly vetted so that a patch will suffice.

And, since it’s possible to change software functionality without a new mask, let’s put as much functionality into software as possible. As long as it’s fast.

So now we need to develop more and more software, and we need to be able to test it out ahead of time, before the first (and last, right?) silicon comes out. So we can use virtual platforms to simulate the computing environment, or, presumably, we can go to emulation if we want more speed.

And we assume that, as in all simulations, we’ll sit around and wait for the software to execute, since, of course, we need to compromise on speed for the sake of getting early simulation access.

Or do we?

Maybe I’ve been asleep for a while as the world passed me by, but something slapped me upside the head a couple of weeks ago at DAC when talking with Imperas. They have just announced that their software simulation speed has improved by 50%. Now… that’s a pretty good speedup by typical measures, but, then again, it’s yet another press release with yet another performance improvement. One of a dozen such releases that get issued in any given month. A good thing, to be sure, but, unless it affects you specifically, it’s something of a yawner.

Until you realize one thing: the simulator is running faster than the actual system will run.

Maybe much faster. They’re claiming that their OVPsim provides ISS speeds of 2 GIPS – two billion instructions per second.

Perhaps this transition happened a long time ago and I’m just figuring this out, but, I don’t know, having your simulator run faster than the actual system just doesn’t feel right. Hell, don’t ship the system, just ship the simulator; it’ll work faster than the actual system.

What’s wrong with this picture?

Well, two things. Actually, no, nothing is wrong with the picture; it only feels wrong. But there are two considerations that should make it feel less wrong. Yes, there is some abstraction happening in an ISS, and that helps some, but not a lot. We’re not talking TLM here; we’re talking a reasonable level of detail.

The real trick comes from the fact that the simulation is happening on a high-powered desktop machine with 2+ GHz clock speeds and oodles of memory. The target embedded system typically doesn’t have that.
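
The arithmetic is easy to sketch – and to be clear, the numbers below are assumptions I’ve picked to make the point, not Imperas data. If translation lets the host retire a target instruction in only a handful of host instructions, a roughly 3-GHz desktop can sustain a couple of billion target instructions per second, while an embedded target clocked at a few hundred MHz never will:

# Why a simulator can outrun its target (all figures are assumptions for illustration).
host_clock_hz = 3.0e9                 # desktop host around 3 GHz
host_insns_per_target_insn = 1.5      # assumed cost with dynamic binary translation
target_clock_hz = 500e6               # assumed embedded target, ~500 MHz
target_ipc = 1.0                      # assume one instruction per target cycle

iss_gips = host_clock_hz / host_insns_per_target_insn / 1e9
target_gips = target_clock_hz * target_ipc / 1e9

print(f"ISS throughput:    {iss_gips:.1f} GIPS")
print(f"Target throughput: {target_gips:.1f} GIPS")
print(f"Simulation runs {iss_gips / target_gips:.1f}x faster than reality")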

So, after semi-convincing myself that this is actually the case, that I’m not missing something obvious or being led down the rosy marketing path, a more important question crops up: who cares? So what? Is this just a curiosity? Something you briefly write home to Mom about, but which never shows up in your memoirs or unauthorized biography?

Actually, there is a practical side to this. Absent this speed, software validation gradually moves from a hosted environment to an emulated environment to the silicon.

Now… I would never suggest shipping the product without testing the software on the actual silicon. But, short of that, this suggests that there’s no reason to develop the software on anything but the virtual platform. The architecture guys might use TLM for system modeling, but once you start developing, you can go to the ISS environment and stay there the entire time – or at least until it’s time to test silicon.

And all the time you’re developing, you’ll be running your tests faster than reality.

And that’s fast!
