
Making Quality Everyone’s Business

A Quick Look at isQED

Nestled amongst the big, noisy conventions like CES, ISSCC, and DAC can be found some more modest, highly focused conferences. These shows may cast a smaller shadow, but they may also benefit from the lack of attendant hoopla, since marketing pays less attention and engineers can focus on the business at hand. One such show that just took place was isQED, or the International Symposium on Quality Electronic Design. Now in its ninth year, isQED focuses on the interactions between the design, test, quality, and manufacturing disciplines in the effort to improve yield, quality, and robustness.

The technical sessions were dominated by university presentations and were highly focused. Sharing time with these were a number of higher-level industry presentations that were clearly trying to tread a fine line between presenting a topic of general relevance and featuring the companies’ products. Somewhat surprisingly, Microsoft was first up. This isn’t a company that one usually expects to see at a smallish conference focused on semiconductors. But from their presentation it’s clear that they’re looking to develop a unified collaboration platform to bring together all aspects of system design and sales, including engineering (of course), manufacturing, marketing, field personnel, and even customers (via appropriate firewalls, presumably). Whether they’re able to leverage their success into this market remains to be seen, but it appears that, one way or another, they plan to make some noise.

Robert Hum, Mentor’s VP and GM of Design Verification and Test, discussed verification issues, homing in on the challenges of identifying failure modes and feeding that information back to design. Physical failure analysis (PFA) is expensive and time-consuming. Given a number of failures and the huge number of possible failure modes, simply jumping into PFA may not be productive. He discussed an automated methodology that datalogs failures, amasses the results of large numbers of datalogs to identify trends and Pareto-rank them against modeled failure modes, and then uses that information to decide what to send into PFA and where to look in order to zero in on the failure mode more quickly.
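To make that concrete, here is a rough Python sketch of the Pareto-ranking step in such a flow. It assumes each failing datalog has already been matched by diagnosis software to a most-likely modeled failure mode; the field name and the numbers are hypothetical, not the output of any particular Mentor tool.

    from collections import Counter

    def pareto_rank(datalogs, cutoff=0.8):
        """Rank candidate failure modes by how often they explain failing datalogs.

        Each record in `datalogs` is assumed to carry a 'suspected_mode' field
        (a hypothetical upstream diagnosis step).  Returns the modes that
        together account for `cutoff` of all failures -- the ones worth the
        expense of physical failure analysis first.
        """
        counts = Counter(log["suspected_mode"] for log in datalogs)
        total = sum(counts.values())
        ranked, cumulative = [], 0.0
        for mode, n in counts.most_common():
            cumulative += n / total
            ranked.append((mode, n, cumulative))
            if cumulative >= cutoff:
                break
        return ranked

    # Example: ten failing die, three suspected mechanisms
    logs = [{"suspected_mode": m} for m in
            ["bridge_M3"] * 5 + ["open_via12"] * 3 + ["stuck_at"] * 2]
    for mode, n, cum in pareto_rank(logs):
        print(f"{mode}: {n} fails ({cum:.0%} cumulative)")

The point is simply that a handful of mechanisms usually accounts for most of the failing material, and those are the candidates that justify sending samples into PFA.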

Sanjiv Taneja, Cadence’s VP and GM of its test business unit, focused on test and on the growing awareness that Design For Test (DFT) strategies must have of such issues as power and physical layout. In fact, he used the phrase Design With Test, implying that design elements like scan insertion should feature more prominently as bona fide design issues rather than being an afterthought once the design is basically done. Test pattern generation can also have an impact on power – poorly generated vectors may draw too much power due to nonsensical switching on “don’t-care” signals, and that power could cause good chips to fail. Vectors also need to take into account some of the newer power-infrastructure elements, like retention registers and level shifters.
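Taneja didn’t spell out particular remedies, but one well-known way to tame don’t-care power is to fill the X bits so that they simply repeat their neighbors, minimizing transitions as the pattern shifts through the scan chain. Here’s a toy Python sketch of that “adjacent fill” idea, treating a scan vector as a string of 0s, 1s, and Xs (the example vector is made up):

    def adjacent_fill(vector):
        """Fill don't-care ('X') bits with the value of the last specified bit,
        so consecutive scan cells rarely toggle during shift.  Leading X bits
        take the value of the first specified bit (or '0' if none exists)."""
        bits = list(vector)
        specified = [b for b in bits if b != "X"]
        last = specified[0] if specified else "0"
        for i, b in enumerate(bits):
            if b == "X":
                bits[i] = last
            else:
                last = b
        return "".join(bits)

    def transitions(vector):
        """Count adjacent 0->1 / 1->0 flips, a rough proxy for shift power."""
        return sum(a != b for a, b in zip(vector, vector[1:]))

    v = "X1XXX0XX1X"
    filled = adjacent_fill(v)
    print(filled, transitions(filled))       # '1111100011', 2 transitions
    print(transitions(v.replace("X", "0")))  # naive 0-fill: 4 transitions

In this little example, adjacent fill leaves only two transitions in the shifted pattern, where naively filling every X with a 0 would have produced four.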

Chandu Visweswariah, a research staff member from IBM – the only plenary presenter, he noted, without a VP title – provided a unique angle on robustness, posing some interesting open questions that have yet to be solved. Making extensive use of statistics, he addressed such issues as establishing, for each individual chip, the baseline against which a tester should check the 3-sigma limits of various parameters. He noted that fully evaluating the sensitivity of a design to changes in critical parameters becomes important, both for tolerating process variability and for ensuring that shipped devices work. He also discussed some sophisticated approaches to managing yield curves for purposes of binning and price-point management, the ultimate mathematics of which have not yet been worked out.
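As one toy illustration of the per-chip-baseline idea (my own example, not Visweswariah’s method): predict each die’s expected value of a parameter from a monitor structure measured on that same die, and apply the 3-sigma screen to the deviation from that prediction rather than to a single global limit.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical training data: a process monitor (say, a ring-oscillator
    # period) measured on each die, plus the parameter we actually want to
    # screen (say, a path delay).  Fast dice naturally have shorter delays,
    # so one global limit is too loose for fast dice and too tight for slow ones.
    monitor = rng.normal(1.0, 0.05, 500)
    delay = 2.0 * monitor + rng.normal(0.0, 0.02, 500)

    # Per-chip baseline: predict each die's expected delay from its own
    # monitor reading, then set +/- 3-sigma limits around that prediction.
    slope, intercept = np.polyfit(monitor, delay, 1)
    sigma = np.std(delay - (slope * monitor + intercept))

    def passes(monitor_reading, measured_delay):
        baseline = slope * monitor_reading + intercept
        return abs(measured_delay - baseline) <= 3 * sigma

    print(passes(0.95, 1.92))   # close to its own baseline -> True
    print(passes(0.95, 2.10))   # far from its baseline -> False

A die that is healthy for its own process corner passes, while one that strays far from its predicted baseline gets flagged even if it would have slipped under a one-size-fits-all limit.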

Finally, Synopsys’s VP of Strategic Alliances (and CEO of the somewhat confusing-sounding entity Synopsys Armenia, USA) gave an overview of the need for green electronics that was for the most part very high level, with brief coverage of low-power techniques.

Design for Meaningfulness

One of the more interesting discussions was on opening night: a panel discussion addressing whether Design for Manufacturability (DFM) – a huge buzzword in IC design – is helping or hurting. Right off the bat, it was clear that DFM is not well defined. And when it gets right down to it, lots of existing practices, from the venerable Design Rule Check (DRC) to Optical Proximity Correction (OPC), are intended to make a device manufacturable. But they’re not considered DFM. Well, OK, some people do consider OPC to be part of DFM – perhaps because it’s newer? Chartered Semiconductor’s VP of Design Enablement Alliances Walter Ng defined DFM as “leveraging manufacturing data to get a more robust design.” Slightly less reverently, Dr. Riko Radojcic, Qualcomm’s leader of Design-for-Si initiatives, described it as “the currently sexy name given to practices for process/design integration.” The bottom line is that there are a bunch of things that, based on process knowledge, can be done during design to improve the likelihood that a device will yield well early on.

High early yields are important because, with product lifetimes being so short, there is really no longer enough time for devices to go through the usual learning curve, with poor initial yields that gradually improve as experience grows. Extensive process knowledge based on a variety of designs has to be translated into more general practices that can be applied to new chips with no history, so that they can hit the ground running. One challenge here is that this requires lots and lots of data to be taken by the foundries and provided back to the designers so they can take appropriate steps in their designs. This is clearly more work for the foundries, not to mention the fact that they’ve had to move from jealously guarding their data to being open with it – even in the early stages when it doesn’t look good. Meanwhile, EDA companies have had to start working a couple of process nodes ahead now so that they operate in parallel with the foundries, making the data available earlier.

At the same time, they did agree that there’s some tension with designers, who naturally want to get going as soon as possible; waiting for extra data takes time. A key approach here is adopting a methodology that starts with early rough data and accommodates refreshes of the data throughout the design process as more and better data become available, without breaking things or causing undue amounts of rework. Richard Brashears, Cadence’s VP of Manufacturing Modeling and Implementation Technology, noted that handling such fluid data properly really requires a ground-up redesign of EDA architectures, with new data structures and algorithms. The impact of data delivery notwithstanding, all agreed that even if design has to start a bit late, time-to-yield improves, so the net effect is an acceleration of volume production.

Philosophy aside, the reality is that DFM is perceived as a bunch of point tools, many of which are marketed as a kind of insurance policy. The foundries have done “A/B” experiments with some techniques – that is, running fab lots with and without the technique – to demonstrate their effectiveness. But this hasn’t been done for every technique – it’s expensive to do a custom design and then run enough wafers to get statistically significant results. So some tools rely on somewhat more circumstantial evidence that their techniques should help avoid this or that possible process issue. Because they can’t point to specific results, saying, “we improved yield by this much” (and even happy customers can’t provide data, since those customers didn’t do a control run without the tool), the story becomes more tenuous. That is not to say that such tools have no value – they may indeed be very useful – but it’s harder to convince tight-budgeted engineering teams if you don’t have definitive data.
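To see why “enough wafers” quickly becomes a lot of wafers, consider a back-of-the-envelope two-proportion z-test on the yields of the A and B lots (the numbers below are invented):

    from math import sqrt, erf

    def yield_difference_significant(pass_a, n_a, pass_b, n_b, alpha=0.05):
        """Two-proportion z-test: is the yield difference between an 'A' lot
        (without the DFM technique) and a 'B' lot (with it) statistically real?"""
        p_a, p_b = pass_a / n_a, pass_b / n_b
        pooled = (pass_a + pass_b) / (n_a + n_b)
        se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
        z = (p_b - p_a) / se
        p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # two-sided
        return p_value < alpha, p_value

    # A two-point yield gain looks nice, but it can't be proven on 1,000 dice:
    print(yield_difference_significant(850, 1000, 870, 1000))      # (False, p ~ 0.20)
    # ...whereas with 10,000 dice per lot the same gain is clearly real:
    print(yield_difference_significant(8500, 10000, 8700, 10000))  # (True, p ~ 5e-5)

On a thousand dice per lot, a two-point yield gain is statistically indistinguishable from noise; push the sample to ten thousand dice per lot and the same gain becomes convincing.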

The other element holding back a more enthusiastic embrace of DFM is the lack of integration and interoperability. Given the number of point tools, methodologies become somewhat scattershot as very specific tools are plugged in here or there. The call for standards was unanimous, and there was a general expectation that, at some point, many of the point tools will end up subsumed within larger environments.

Mr. Brashears also pointed out that the EDA business model will be challenged to evolve, given that fewer and fewer traditional license seats will be responsible for higher and higher volumes of chips. Without offering any specific proposals, he simply noted that new, creative ways will be needed to monetize the value of what are increasingly sophisticated tools and environments.

Whatever happened to…

In a similar spirit, there was another panel session that asked whether anyone was using ESL 2.0. There really wasn’t a lot of consensus on what’s happening here, and again, not even a well-accepted definition of what constitutes ESL. There was lots of discussion of the intended (and largely undisputed) benefits of ESL, but that’s kind of the way ESL has always been. The providers have spent lots of energy articulating their value, but it has been very much a provider-push process, with little consumer pull. There was nothing in the isQED session that left the impression that this has changed – unless you take into account the candid declaration by EVE’s Donald Cramb that the world’s most popular ESL tool is Microsoft Excel.

One final note: isQED did something unusual with a couple of sessions – an experiment that hopefully won’t be repeated – by having one room with exhibits and two presentations all going at the same time. I now know what it’s like to be a cell phone, trying to discriminate among an inordinate number of signals and reflections, some useful, most not. Until my brain is fitted with the ability to do code division multiplexing, I’ll leave my vote in place for one presentation, one room.
