posted by Bryon Moyer
In the first major emulator news since Synopsys gobbled up EVE, Synopsys announced the next generation of the EVE platform, ZeBu 3. And, as with pretty much any emulator story, the top line has to do with capacity and performance: how much design can I cram in there and how fast will it go?
They claim industry-leading 3 MHz (with one example going as high as 3.5 MHz), as compared to what they say is a competitive range more around 1-1.5 MHz (I’ll let the competitors comment on whether that’s a representative number). As to capacity, you can stitch up to 10 of their boxes together for a total of 3 billion gates.
They also mention a number of different use modes for emulation, which are morphing as capabilities both inside and outside the emulator evolve. One in particular caught my eye because of how it contrasts with past usage.
Once upon a time, a significant use model for an emulator was to accelerate simulation. If there was a piece of the hardware that was taking too long to simulate – and in particular if it didn’t need simulator-level observability (remember: in a simulator, you can theoretically access every node; in actual hardware, you can only access those nodes that have been provisioned for access) – then you could implement that function in hardware and have the simulator call it as needed.
That ended up shining the spotlight on a significant bottleneck: handing off the function to the emulator, which required specifying pin-level signals across the interface. This led to the development of the transaction-based SCE-MI 2 interface, which abstracted away the detailed pin-level interface, making it all go so much faster.
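The speedup from moving to transactions is easy to see in miniature. Here is a toy sketch (hypothetical names, not the actual SCE-MI API) contrasting a pin-level handoff, where every bit and clock edge crosses the interface, with a transaction-level handoff, where one message carries the whole operation:

```python
# Toy sketch: pin-level vs. transaction-level handoff.
# Names are illustrative only, not real SCE-MI calls.

def send_byte_pin_level(channel, byte):
    """Drive one byte serially: one crossing per pin event per clock edge."""
    events = 0
    for bit_pos in range(8):
        bit = (byte >> bit_pos) & 1
        channel.append(("data_pin", bit))   # set the data pin
        channel.append(("clk", 1))          # clock high
        channel.append(("clk", 0))          # clock low
        events += 3
    return events

def send_byte_transaction(channel, byte):
    """Hand off the whole byte as a single transaction message."""
    channel.append(("WRITE_BYTE", byte))
    return 1

pin_events = send_byte_pin_level([], 0xA5)
txn_events = send_byte_transaction([], 0xA5)
print(pin_events, txn_events)  # 24 interface crossings vs. 1
```

The real standard is richer than this, of course, but the core win is the same: fewer, fatter crossings of the simulator/emulator boundary.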
That’s all old news. As emulator capacity and speed have improved, the focus has moved more to acceleration of software execution in SoCs. Not only does the emulator execute the software more quickly than a simulator can, features like save and restore can allow you to capture the state, say, after boot-up, and start there rather than having to go through the entire boot sequence every time. Yes, you could theoretically do this with simulation as well, but simulating software just takes too long.
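The save/restore idea can be sketched in miniature (a toy checkpoint of a state dictionary, nothing emulator-specific):

```python
import copy

def boot():
    """Stand-in for a long boot sequence."""
    state = {"pc": 0, "mem": [0] * 8}
    for _ in range(1000):            # pretend each step is expensive
        state["pc"] += 1
    return state

# Run boot once and capture the post-boot state.
checkpoint = copy.deepcopy(boot())

def run_test(stimulus):
    """Each test restores the checkpoint instead of re-running boot()."""
    state = copy.deepcopy(checkpoint)
    state["mem"][0] = stimulus
    return state

result = run_test(42)
print(result["pc"], result["mem"][0])  # 1000 42
```

The deep copy matters: each test gets its own writable copy of the saved state, so no test can corrupt the checkpoint for the next one.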
So we’ve gone from mostly verifying by simulation (on a PC) to doing much more of the verification on an emulator, now that it’s big enough. But you know… we’re never satisfied, are we? Give us an inch, and we want another inch. Yes, we can run software fast, but we don’t care about all of the software, or perhaps we don’t care about all of it in as much debug detail. Believe it or not, this software is taking too long to run on the emulator.
So what to do? How about running it on a virtual platform? Virtual platforms abstract away the low-level execution details, and so they can run much faster. So now, in a complete role reversal, the emulator can offload software execution to a PC running a virtual platform, which acts as an accelerator for the emulator – the very same emulator (or a bigger, faster version) that used to be an accelerator for the PC doing simulation. Synopsys refers to this as “hybrid mode,” one of the various use modes that ZeBu 3 supports.
What goes around…
You can get more details on all of those modes as well as the other speeds and feeds in their release.
posted by Bryon Moyer
My oh my, how the smartphone has colored our (or at least my) expectations. And color is the operative word here.
When you think, “ambient light sensor (ALS)” (yes, I know you think the parenthetical too), what do you think next? That sensor on the smartphone that can tell whether the ambient light is high or low so that it can adjust the display and keyboard backlights and such? Yeah… me too.
So then I see an announcement from Maxim about their new ALS. And it separates out colors. Now… color me stupid, but is it that someone doesn’t want their phone to react if they’re, say, in a darkroom with red light? That would make no sense – because no one uses darkrooms anymore. [Prepares for filmophile rage…]
It’s actually not that crazy; we did cover Intersil’s RGB sensor recently, and we noted there that an RGB sensor can help with tinted glass, but it was also set out in contrast to an ALS.
Feeling kinda dense, I checked in with Maxim, who kindly and gently helped me remember that there are more applications out there than just cellphones. Who knew.
They listed as exemplary applications:
- Color sensors
- Contrast sensors
- Color sorting
- Gas and fluid analysis
- Label presence
- Lid insertion verification
- Shrink-wrap presence
- Tamper-proof seal confirmation
- Visual inspection replacement
- Automatic display brightness
In other words, there are many sensing applications – particularly in an industrial environment – that benefit from analysis of the ambient light. And that might involve looking at very specific colors or fractions of colors. Which is why the part has three color sensors and feeds the separate color components into detection algorithms.
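As a sketch of what feeding color components into a detection algorithm might look like, consider checking for, say, a red tamper seal by looking at the red channel’s share of the total reading. The thresholds and readings below are made up for illustration; a real system would calibrate against its light source and target:

```python
def red_fraction(r, g, b):
    """Fraction of the total measured light that falls in the red channel."""
    total = r + g + b
    return r / total if total else 0.0

def seal_present(r, g, b, threshold=0.5):
    """Flag a predominantly red reading as 'seal present' (toy threshold)."""
    return red_fraction(r, g, b) >= threshold

print(seal_present(800, 120, 90))   # strongly red reading -> True
print(seal_present(300, 310, 295))  # roughly white ambient -> False
```

Using the ratio rather than the raw red count makes the check less sensitive to overall brightness, which is exactly the kind of thing a plain (single-channel) ALS can’t do.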
Now, based on the Intersil definitions, this would be an RGB sensor, as contrasted with an ALS. In Maxim’s terminology, this is an ALS. You can rationalize either nomenclature; I just feel better knowing there might be some minuscule rationale for my occasional lapses into states of confusion. (No need to suggest that they’re more than occasional, thank you anyway.)
Interestingly, though, Maxim focuses on different applications from the ones Intersil was suggesting; Intersil seemed more focused on displays (not just phones, but TVs and such), LED lighting, and cameras – less industrial in flavor than what Maxim has outlined. Which I guess goes to show that there are many ways to use an RGB sensor, or ALS with RGB, or whatever you want to call it.
On the heels of yet another light sensor that senses UV without a specific UV sensor – that is, by using a single visible light sensor that also detects into the UV range, and then extracting the UV by algorithms – I checked in to see whether this truly had three sensors or whether the colors were extracted from a single sensor. Answer: yes, three sensors.
You can find more detail in Maxim’s announcement.
posted by Bryon Moyer
OneSpin announced a Quantify MDV product a few years back. With it, they defined a number of different coverage aspects – things that could be verified with their formal technology. Now they’ve reinforced that product with a new version. And that version contains yet another coverage concept.
The older coverage concepts focused on the design itself and the quality of stimulus used in verification. It would check for things like dead code and over-constraining, the former reflecting a possible code issue and the latter indicating that legitimate cases may not be covered by existing tests. I discussed these elements in my original coverage of the tool.
In recent times, they struggled a bit with what to call these checks. You might think they’re simply “design” checks, except for the constraining bits. The aspect that gets to simulation coverage had them calling it “simulation” coverage, but that didn’t really cut it either. They landed on “reachability,” since things like dead or redundant code indicated design elements that may or may not be reachable, and the constraints also get to whether or not certain failures can be reached by the tests. It’s not a perfect nomenclature, but, absent something perfect, it’s what they settled on.
Why even worry? Well, they needed to distinguish all of those coverage aspects from a new one they were adding. This new one tests the completeness of the assertions and checkers in the design. The assertions are designed to catch problems during formal verification, but it’s possible to write ineffective assertions. Looked at another way, if assertions are poor or incomplete, then there may be code failures that could never be observed by the assertions.
So they refer to this as “observation coverage.” And they test it using a form of “mutation” analysis: making a code change and seeing if the assertion picks it up. If not, then there may be a hole in the assertion.
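A toy version of the mutation idea (sketched in Python rather than an HDL, with made-up names) goes like this: take the design, apply a small code change, re-run the checker, and flag any mutation that no assertion catches as a potential observation hole:

```python
# Toy mutation analysis: a "design", an assertion, and two mutants.
# A mutant that slips past the assertion reveals an observation hole.

def design(a, b):
    """Original 'design': a saturating add capped at 15 (4-bit)."""
    return min(a + b, 15)

def mutant_drop_cap(a, b):
    return a + b             # mutation: saturation removed

def mutant_swap_op(a, b):
    return min(a - b, 15)    # mutation: '+' flipped to '-'

def assertion_holds(dut, a, b):
    """Checker: result never exceeds 15 (says nothing about the sum itself)."""
    return dut(a, b) <= 15

def survives(mutant, stimuli):
    """A mutant 'survives' if the assertion passes on every stimulus."""
    return all(assertion_holds(mutant, a, b) for a, b in stimuli)

stimuli = [(3, 4), (9, 9), (15, 15)]
print(survives(mutant_drop_cap, stimuli))  # False: 9+9 exceeds 15, caught
print(survives(mutant_swap_op, stimuli))   # True: assertion is too weak
```

The second mutant is the interesting one: the design is now computing the wrong answer, yet every assertion still passes – exactly the kind of blind spot observation coverage is meant to expose.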
This appears to be a newish concept, and it’s not yet addressed in the UCIS coverage standard; they’re in discussions on that.
You can get a more complete picture of their latest Quantify release in their announcement.