
Faster than Reality

It better be fast.

Whatever it is, whatever it does, it’s all good as long as it’s fast.

We live for speed in our supercharged world. After all, we’ve gone from a society that used to survive on one breadwinner per family to one where two breadwinners are the norm, and where some people hold down multiple jobs just to keep from falling behind. (Well, in the US, anyway.) So we’re busy. Very busy. And we have to toss Facebook updates and tweets in on top of that.

So we have to be able to do things fast.

And your boss promised something impossible to his boss who promised something even less possible to his boss and so on up to the CEO who promised something ridiculous to the Board so that the share price could hopefully go way up for at least a few days and make them a boatload of money. So it’s your responsibility to figure out how to make the impossible, nay, the ridiculous, happen. Now. You’re going to be a busy dude and it’s your fault if it doesn’t happen on time.

So you’d better be able to do it fast.

Yeah, I know, power consumption matters more than ever these days, and the battery discharging is the one thing that shouldn’t happen fast, but if the gadget itself isn’t fast enough, I won’t wait around to see how long the battery lasts.

So we design things to be fast. As fast as is possible, given all the other constraints (one of them hopefully being basic reality).

Of course, these fast things are complex. Inordinately complex, and getting more complex by the day. And they’re expensive to build. So we have to have ways of testing them before we build them. Because there’s lots to test, and we can’t afford multiple real prototypes.

So we resort to the virtual world for our testing. Which means tradeoffs. We know that we’re going to take a performance hit while we test; the whole point of hardware is that it’s fast. Until we get what we want into hardware, we know it’s going to be slow.

So if we want to do something truly complex in a simulation environment, like booting Linux, then, depending on our level of abstraction, we can either lean back with our feet on the table for a while, or we can go tour the Caribbean and come back hoping that the boot was successful.
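To hang some rough numbers on that (purely illustrative figures of my own, not anyone’s benchmark), here’s the back-of-envelope math. Assume a Linux boot takes on the order of a billion instructions; the only thing that changes between abstraction levels is how many of those instructions get simulated per second.

```c
/* Back-of-envelope boot-time estimate. Every number here is an
 * illustrative assumption, not a measurement from any particular tool. */
#include <stdio.h>

int main(void) {
    const double boot_instructions = 1e9;   /* assume ~1 billion instructions to boot Linux */
    const double rtl_sim_rate      = 100.0; /* assume RTL simulation: ~100 instructions/sec */
    const double iss_rate          = 100e6; /* assume an ISS: ~100 million instructions/sec */

    printf("RTL simulation: ~%.0f days\n",
           boot_instructions / rtl_sim_rate / 86400.0);   /* ~116 days */
    printf("ISS:            ~%.0f seconds\n",
           boot_instructions / iss_rate);                 /* ~10 seconds */
    return 0;
}
```

Hence the Caribbean option at one end of the abstraction scale and the feet-on-the-table option at the other.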

We can get closer to hardware speed by using hardware – just not the real hardware. Simulation acceleration and emulation take the slow stuff out of the software world and make it faster. But it’s still only a model of what we’re trying to do, so, while it’s faster, it’s still not fast.

One of the more recent developments in the chip world is the dominance of software in defining what the chip actually does. So not only do we have to simulate what’s happening in the hardware, we must also figure out how the software is going to work without having an actual system to run it on.

Software simulation is not really new; instruction-set simulators (ISSs) have been around forever. But we’ve gone from cross-development between platforms to building software for traditional (meaning PC-board-based) embedded systems to development of software for single-chip embedded systems.
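For anyone who hasn’t poked at one, the heart of an ISS is conceptually just a fetch-decode-execute loop over the target’s instruction set. Here’s a deliberately tiny sketch of that shape, using a made-up two-instruction machine of my own; a real commercial simulator is vastly more elaborate, but the skeleton is the same.

```c
/* A toy instruction-set simulator for an invented 2-instruction machine,
 * just to show the fetch-decode-execute loop at the core of an ISS. */
#include <stdint.h>
#include <stdio.h>

enum { OP_ADDI = 0x1, OP_HALT = 0xF };   /* invented opcodes */

int main(void) {
    uint16_t program[] = {               /* encoding: [opcode:4][reg:4][imm:8] */
        0x1005,                          /* ADDI r0, 5 */
        0x1103,                          /* ADDI r1, 3 */
        0xF000                           /* HALT       */
    };
    uint32_t regs[16] = {0};
    unsigned pc = 0;

    for (;;) {
        uint16_t insn = program[pc++];           /* fetch  */
        unsigned op   = insn >> 12;              /* decode */
        unsigned reg  = (insn >> 8) & 0xF;
        unsigned imm  = insn & 0xFF;
        if (op == OP_HALT) break;                /* execute */
        if (op == OP_ADDI) regs[reg] += imm;
    }
    printf("r0=%u r1=%u\n", regs[0], regs[1]);   /* prints r0=5 r1=3 */
    return 0;
}
```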

The costs of getting it wrong when developing across desktop platforms are in time and annoyance. Similarly with traditional embedded systems; you might have to do a PC-board spin, but, while not desirable, it’s not a deal-killer.

Not so with ICs. Granted, software can be changed without requiring a mask spin, but you damn well better be sure that the hardware interaction has been thoroughly vetted so that a patch will suffice.

And, since it is possible to change software functionality without a new mask, let’s put as much functionality into software as possible. As long as it’s fast.

So now we need to develop more and more software, and we need to be able to test it out ahead of time, before the first (and last, right?) silicon comes out. So we can use virtual platforms to simulate the computing environment, or, presumably, we can go to emulation if we want more speed.

And we assume that, as in all simulations, we’ll sit around and wait for the software to execute, since, of course, we need to compromise on speed for the sake of getting early simulation access.

Or do we?

Maybe I’ve been asleep for a while as the world passed me by, but something slapped me upside the head a couple of weeks ago at DAC when talking with Imperas. They have just announced that their software simulation speed has improved by 50%. Now… that’s a pretty good speedup by typical measures, but, then again, it’s yet another press release with yet another performance improvement. One of a dozen such releases that get issued in any given month. A good thing, to be sure, but, unless it affects you specifically, it’s something of a yawner.

Until you realize one thing: the simulator is running faster than the actual system will run.

Maybe much faster. They’re claiming that their OVPsim provides ISS speeds of 2 GIPS – two billion simulated instructions per second.

Perhaps this transition happened a long time ago and I’m just figuring this out, but, I don’t know, having your simulator run faster than the actual system just doesn’t feel right. Hell, don’t ship the system, just ship the simulator; it’ll work faster than the actual system.

What’s wrong with this picture?

Well, two things. Actually, no, nothing is wrong with the picture; it only feels wrong. But there are two considerations that should make it feel less wrong. Yes, there is some abstraction that happens in an ISS, so that does help some, but not a lot. We’re not talking TLM here; we’re talking a reasonable level of detail.

The real trick comes from the fact that the simulation is happening on a high-power desktop machine with 2+ GHz clock speeds and oodles of memory. The target embedded system typically doesn’t have that.
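Some crude arithmetic makes the claim feel less mysterious. A fast ISS typically translates target code into host code on the fly, so each simulated instruction may cost only a handful of host instructions. With the made-up numbers below (mine, not Imperas’s), a 3-GHz desktop comfortably outruns a few-hundred-MIPS embedded core:

```c
/* Why a fast host can out-simulate a slow target.
 * Every number here is an illustrative assumption, not a measurement. */
#include <stdio.h>

int main(void) {
    const double host_insn_rate     = 3.0e9; /* assume the host retires ~3 billion instructions/sec */
    const double host_insns_per_sim = 2.0;   /* assume ~2 host instructions per simulated one,
                                                once the target code has been translated */
    const double target_insn_rate   = 500e6; /* assume the embedded target runs at ~500 MIPS */

    const double sim_rate = host_insn_rate / host_insns_per_sim;
    printf("simulated rate: %.1f GIPS\n", sim_rate / 1e9);                    /* 1.5 GIPS */
    printf("speedup over the target: %.1fx\n", sim_rate / target_insn_rate);  /* 3.0x */
    return 0;
}
```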

So, after semi-convincing myself that this is actually the case, that I’m not missing something obvious or being led down the rosy marketing path, a more important question crops up: who cares? So what? Is this just a curiosity? Something you briefly write home to Mom about, but which never shows up in your memoirs or unauthorized biography?

Actually, there is a practical side of this. Absent this speed, software validation gradually moves from a hosted environment to an emulated environment to the silicon.

Now… I would never suggest shipping the product without testing the software on the actual silicon. But, short of that, this suggests that there’s no reason to develop the software on anything but the virtual platform. The architecture guys might use TLM for system modeling, but once you start developing, you can go to the ISS environment and stay there the entire time – or at least until it’s time to test silicon.

And all the time you’re developing, you’ll be running your tests faster than reality.

And that’s fast!
