posted by Bryon Moyer
In the wake of the UCIS announcement at DAC (which we’ll cover separately later), I sat down with some of Mentor’s functional verification folks to get an update. Coverage was one of the items on their agenda as part of addressing metric-driven verification.
They talk in terms of changing the engineering mindset when it comes to evaluating verification tools. Right now engineers tend to think in terms of “cycles/second”: how fast can you blaze through these vectors? Mentor is trying to change that thought process to “coverage/cycle”: it’s OK to take longer per cycle (OK, actually, they didn’t explicitly say that – probably a bit dodgy territory from a marketing standpoint – and I don’t know whether their solution is any slower on a per-cycle basis – but I’m inferring here…) as long as you get coverage faster. In other words, maybe one tool can zip through a bazillion vectors in three hours, but it’s better to have a tool that only needs a half-bazillion vectors and completes in two hours (slower on a per-vector basis, but faster overall completion).
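To put some toy numbers on that (entirely hypothetical – not Mentor’s figures), a minimal sketch of the comparison:

```python
# Hypothetical numbers for illustration only -- not Mentor's figures.

def hours_to_closure(vectors_needed, vectors_per_hour):
    """What matters is time to coverage closure, not raw cycle rate."""
    return vectors_needed / vectors_per_hour

# Tool A blazes through vectors but needs more of them for full coverage.
tool_a = hours_to_closure(vectors_needed=1_000_000, vectors_per_hour=333_000)
# Tool B is slower per vector but needs only half as many vectors.
tool_b = hours_to_closure(vectors_needed=500_000, vectors_per_hour=250_000)

print(f"Tool A: {tool_a:.1f} h to closure")  # ~3.0 h
print(f"Tool B: {tool_b:.1f} h to closure")  # 2.0 h -- wins on coverage/cycle
```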
Part of this is handled by their InFact “intelligent testbench.” As I see it, they try to solve two problems with it. First, there are hard-to-reach states in any design; the tool builds a graph of the design and uses it to identify trajectories through the state space. From that, they should be able to reach any reachable state with the fewest vectors possible. Which is fine when testing just that one state.
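To illustrate the general idea (this is my sketch of the technique, not Mentor’s actual algorithm), a breadth-first search over a toy state graph finds the fewest stimuli needed to drive a design into a target state:

```python
from collections import deque

# Toy sketch of reaching a hard-to-reach state via a state graph.
# My illustration of the general technique, not InFact's actual algorithm.
# Edges are labeled with the stimulus (vector) that causes the transition.
graph = {
    "idle":  [("reset", "idle"), ("start", "busy")],
    "busy":  [("abort", "idle"), ("done", "drain")],
    "drain": [("flush", "error"), ("ack", "idle")],
    "error": [("reset", "idle")],
}

def shortest_stimulus(graph, start, target):
    """BFS: fewest vectors needed to drive the design from start to target."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if state == target:
            return path
        for stimulus, nxt in graph.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [stimulus]))
    return None  # target unreachable

print(shortest_stimulus(graph, "idle", "error"))  # ['start', 'done', 'flush']
```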
But the second thing they do is what would appear to be their own variation of the “traveling salesman” problem. How do you traverse the graph to get to all the nodes without repeating any path? (The canonical traveling salesman problem is about visiting every node exactly once, over the shortest total route, and ending back where you started.) The idea is to get full coverage with as few vectors as possible. This gets specifically to the “coverage/cycle” metric.
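I don’t know how InFact actually does this, but one naive way to chase the same goal is a greedy walk that always heads for the nearest uncovered node. A minimal sketch, reusing the BFS idea from above:

```python
from collections import deque

# My sketch of the "cover everything cheaply" idea -- not InFact's real algorithm.

def path_to_nearest(graph, start, targets):
    """BFS from start to the nearest node in targets; returns the path."""
    queue, seen = deque([(start, [start])]), {start}
    while queue:
        node, path = queue.popleft()
        if node in targets:
            return path
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None

def greedy_cover(graph, start):
    """Walk the graph, always steering toward the nearest uncovered node.
    Fewer total steps means fewer vectors -- i.e., better coverage/cycle."""
    unvisited = set(graph) - {start}
    walk = [start]
    while unvisited:
        path = path_to_nearest(graph, walk[-1], unvisited)
        if path is None:
            break  # whatever remains is unreachable
        walk += path[1:]
        unvisited -= set(path)
    return walk

graph = {"a": ["b", "c"], "b": ["a", "d"], "c": ["a"], "d": ["b"]}
print(greedy_cover(graph, "a"))  # ['a', 'b', 'd', 'b', 'a', 'c']
```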
All of which reinforces the old truth that simply having and rewarding metrics doesn’t necessarily help things. It’s too easy to pick the wrong metrics – metrics that will be attained, and for which rewards will be paid – without actually improving life. Because they’re the wrong metrics.
Perhaps MDV should be modified to UMDV: Useful-Metric-Driven Verification. Of course, then we’ll get to watch as companies battle over which metrics are useful. But that could make for entertaining viewing too…
posted by Bryon Moyer
The use of photons as signal carriers has historically been aimed at long-distance transport, either over the air (where it feels more like waves than photons) or within fiber. But the distances of interest have dropped dramatically, to the point where there are discussions of using silicon photonics even for on-chip signaling.
In a conversation at Semicon West with imec’s Ludo Deferm, we discussed their current work. At this point, and for at least 10 years out, he doesn’t see CMOS and photonics co-existing on the same wafer. The bottleneck right now isn’t on-chip; it’s chip-to-chip – 40-60 Gb/s internally is fine for now. Which suggests the use of photonics on a separate chip – in a 3D-IC stack or on an interposer, for example – dedicated to routing signals between the chips in the stack.
That photonic chip would be made with the same equipment as a CMOS chip – a specific goal of the imec work in commercializing silicon photonics. But it starts with a different wafer: SOI, with a thinner silicon layer than you would have in a typical CMOS wafer, and with that thickness (or thinness) tightly controlled to reduce optical losses.
You can read more about imec’s progress in their recent announcement.
posted by Bryon Moyer
When last we talked with Cymer, they had just announced their PrePulse technology that gets more of the energy out of the droplets they blast with a laser. They had achieved 50-W output.
That’s only halfway to what’s needed for production, and, at the time, it was an “open-loop” result – that is, not something that could be repeated over and over in a production setting.
When I spoke with them at Semicon West, they reported that they now have 50 W working on a sustained, closed-loop basis (for five hours). And they have achieved 90 W in short open-loop bursts.
But there are lots of other characteristics besides simple power that are important for production viability.
- Duty cycle: after they run the system for a while, things heat up. Literally. For that and a number of other reasons, they have to give the machine a break or else the power rolls off. Right now they’re running at a 40% duty cycle; they’re working to get that (closer) to 100%. (See the quick arithmetic after this list.)
- Dose stability: their five-hour runs have resulted in 90% of dice having less than 1% dose error.
- Availability: if the machine is always down or needs lots of maintenance, well, that’s a problem. They’re now claiming 70% uptime.
- Collector longevity: at some point, having been bombarded with pulses, the collector will start to lose reflectivity. It would then need to be replaced – meaning downtime and cost. So far they say that they’ve gone above 30 billion pulses without seeing any reflectivity degradation.
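The duty-cycle item matters more than it might look: here’s a back-of-envelope sketch (my arithmetic, using only the figures quoted above) of what it does to average delivered power:

```python
# Back-of-envelope: what a 40% duty cycle does to delivered power.
# My arithmetic; only the 50 W and 40% figures come from Cymer.
burst_power_w = 50  # demonstrated sustained closed-loop power
for duty_cycle in (0.40, 0.70, 1.00):
    average_w = burst_power_w * duty_cycle
    print(f"duty cycle {duty_cycle:.0%}: {average_w:.0f} W average")
# 40% -> 20 W, 70% -> 35 W, 100% -> 50 W
```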
Meanwhile, efforts to increase power depend on three separate factors: input power; “conversion” efficiency – how much of that input power gets released from a pulsed droplet; and collector efficiency.
Their PrePulse technology has satisfied them on the second item; their efforts at this point are in increasing the input power (they’ve demonstrated up to 17 kW) and improving collector efficiency. This takes place in what they call their “HVM II” model, which is being integrated now.
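Those three factors multiply, so you can sanity-check the numbers. A quick sketch – the arithmetic is mine; only the 17 kW and 50 W figures are Cymer’s, and the 100 W production target follows from their “halfway” comment:

```python
# The three factors multiply:
#   output ~= input power x conversion efficiency x collector efficiency
# Only the 17 kW input and 50 W output figures come from Cymer; the rest
# of this arithmetic is my own illustration.
input_w, output_w = 17_000, 50

overall_efficiency = output_w / input_w
print(f"Implied overall efficiency: {overall_efficiency:.2%}")  # ~0.29%

# If 100 W is the production target and the efficiencies stay put,
# input power alone would have to roughly double:
target_w = 100
print(f"Input needed: {target_w / overall_efficiency / 1000:.0f} kW")  # ~34 kW
```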