posted by Bryon Moyer
Ever since malloc() (and its other-language counterparts), software engineers have had an extra verb that is foreign to hardware engineers: “destroy.”
Both software and hardware engineers are comfortable with creating things. Software programs create objects and abstract entities; hardware engineers create hardware using software-like notations in languages like Verilog. But that’s where the similarity ends. Software engineers eventually destroy that which they create (or their environment takes care of it for them… or else they get a memory leak). Hardware engineers do not destroy anything (unless intentionally blowing a metal fuse or rupturing an oxide layer as a part of an irreversible non-volatile memory-cell programming operation).
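To make the software side of that lifecycle concrete, here is a minimal C sketch of a create/destroy pair; the `packet_t` type and its functions are hypothetical, purely for illustration:

```c
#include <stdlib.h>
#include <string.h>

/* A hypothetical testbench-style object: created on demand and
 * destroyed when no longer needed -- the step with no hardware
 * equivalent. */
typedef struct {
    int id;
    char *payload;
} packet_t;

/* "Create": allocate the object and everything it owns. */
packet_t *packet_create(int id, const char *payload) {
    packet_t *p = malloc(sizeof *p);
    if (!p) return NULL;
    p->id = id;
    p->payload = malloc(strlen(payload) + 1);
    if (!p->payload) { free(p); return NULL; }
    strcpy(p->payload, payload);
    return p;
}

/* "Destroy": release everything the object owns, then the object
 * itself. Skip this step and you have the classic memory leak. */
void packet_destroy(packet_t *p) {
    if (!p) return;
    free(p->payload);
    free(p);
}
```

Every call to `packet_create` must eventually be matched by a `packet_destroy` (or the environment must clean up); nothing in hardware design imposes that discipline.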
So “destroy” is not in the hardware engineer’s vocabulary. (Except in those dark recesses perambulated only on those long weekends of work when you just can’t solve that one problem…)
This is mostly not a problem, since software and hardware engineers inhabit different worlds with different rules and different expectations. But there is a place where they come together, creating some confusion for the hardware engineer: interactive debugging during verification.
SystemVerilog consists of much more than some synthesizable set of constructs. It is rife with classes from which arise objects, and objects can come and go. This is obvious to a software engineer, but for a hardware engineer in the middle of an interactive debug session, it can be the height of frustration: “I know I saw it, it was RIGHT THERE! And now it’s gone! What the…”
This was pointed out by Cadence when we were discussing the recent upgrades to their Incisive platform. The verification engineers who set up the testbenches are generally conversant in the concepts of both hardware and software, but the designer doing debug may get tripped up by this. Their point: hardware engineers need to remember that the testbench environment isn’t static in the way that the actual design is; they must incorporate “destroy” into their vocabulary.
Embedded vision systems are providing more opportunities for machines to see the world the way our eyes see it (and our brains interpret it), but variants of these technologies are also enabling systems to see things in ways we can’t.
Imec has just announced a new “hyperspectral” camera system for use in medical and industrial inspection systems or anywhere specific filters are needed to understand specific characteristics of whatever is being viewed. In such situations, simply looking at one bandwidth of light may not be enough; a complement of filters may be needed either to provide a signature or to evaluate multiple characteristics at the same time.
One way this is done now is to take separate images, each with a different filter, time-domain multiplexed. This divides the overall frame rate by the number of filters. The new approach allows full-frame-rate images with all filters simultaneously.
There are actually two versions of this, which imec calls “snapshot” and “linescan.” We’ll look at snapshot first, as this is particularly new. It’s intended for image targets that are either stationary or moving in a random way (or, more specifically, not moving in a line as if on a conveyor belt). The imaging chip is overlaid with tiles, each of which is a different filter.
The filters are the last masked layers of the chip – this is monolithic integration, not assembly after the fact. Each filter consists of two reflecting surfaces with a cavity between them; the height of the cavity determines which wavelength passes. This means that the final chip will actually have a non-planar surface because of the different cavity sizes – and therefore different heights – of the filter layer. Because these are the last layers to be processed, base wafers can conveniently be staged and then finished with custom filter patterns.
The camera lens then directs, or duplicates, the entire scene to every tile. Perhaps it’s better to think of it as an array of lenses, much like a crude plenoptic lens. This gives every filter the full image at the same time; the tradeoff, compared with time-domain multiplexing filters over the lens, is that each filtered image has the resolution of one tile of the imaging chip, not the entire imaging chip.
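A bit of arithmetic makes the two tradeoffs concrete. The sensor size, frame rate, and filter count below are made-up numbers for illustration, not imec’s specs:

```c
/* Hypothetical numbers throughout: a 2048x1024 sensor at 160 fps
 * carrying 32 filters, laid out as an 8x4 grid of tiles. */

/* Time-domain multiplexing: every filter sees the full sensor, but the
 * effective frame rate is divided by the filter count:
 * 160 fps / 32 filters -> 5 fps per filter. */
int tdm_effective_fps(int sensor_fps, int n_filters) {
    return sensor_fps / n_filters;
}

/* Snapshot tiling: all filters capture at the full frame rate, but each
 * filter gets only one tile's worth of pixels:
 * (2048/8) x (1024/4) -> 256 x 256 = 65536 pixels per filter. */
long snapshot_pixels_per_filter(long sensor_w, long sensor_h,
                                int tiles_x, int tiles_y) {
    return (sensor_w / tiles_x) * (sensor_h / tiles_y);
}
```

In other words, time-domain multiplexing spends frame rate to keep resolution, while the snapshot approach spends resolution to keep frame rate.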
A camera optimized for linescan applications doesn’t use the plenoptic approach; instead, the tiles are made very small and, as the image moves under the camera at a known rate, multiple low-resolution images are captured and then stitched back together using computational photography techniques.
The lensing system helps determine the bandwidth characteristics: the filters themselves can be designed with wider or narrower passbands, with or without gaps between them, allowing anything from continuous coverage to discrete lines. A collimating lens directs light onto the filters in straight lines, giving a narrow passband; a lens that yields conical light presents a range of incidence angles to each filter, which widens the passband. The aperture size then acts as a bandwidth knob.
Imec has put together a development kit that allows designers to figure out which filters to use for a given linescan application; they’ll be providing one for snapshot cameras as well. Each filter configuration is likely to be very specific to its application, making this something of a low-volume business for now. Because of that, and in order to grow the market, imec will actually be open for commercial production of these systems.
You can find more details in their release.
You see it two to four times a year from each EDA player: “x% Productivity Gains with y Tool!” Cadence recently had such an announcement with their Incisive tool; Synopsys has just had a similar story with FineSim.
As I was talking with the Cadence folks about this, I wondered: How much of this productivity gain comes as a result of engine/algorithm improvements, and how much as a result of methodology changes? The answer is, of course, that it comes from both.
But there’s a difference in when the benefits accrue. Engine improvements are immediately visible when you run the tool. Methodology changes: not so much. And there are actually two aspects to methodology.
The first is that, of course, a new methodology requires training and getting used to. So the first project done using a new methodology will take longer; the next one should be better because everyone is used to the new way of doing things. This is a reasonably well-known effect.
But there may be an extra delayed benefit: some methodology changes require new infrastructure or have a conversion cost. If, for example, you replace some aspect of simulation with a new formal tool, you have to modify your testbench and create the new test procedure from scratch. There may be, for instance, numerous pieces of IP that need to be changed to add assertions. These are largely one-time investments, with incremental work required on follow-on projects.
In this example, it may be that, even with the conversion work, things go faster even on the first project. But productivity will be even better next time, when much of the infrastructure and changes are ready and waiting.
As to the engines, I was talking to the folks at Mentor yesterday, and wondered whether improvements to the tools themselves become asymptotic: does there come a point when you just can’t go any faster? Their answer was, “No,” since there’s always some bottleneck that wasn’t an issue until the other, bigger bottlenecks got fixed. The stuff that got ignored keeps bubbling up in priority, the upshot being that there’s always something that can be improved to speed up the tools.