
From Movies to Data Analysis

Coventor’s SEMulator 3D Pivots

If you’ve been paying attention to the papers at the various advanced semiconductor process conferences, there’s a name you’re seeing more and more: Coventor. We’ve looked at them several times before, in the context of both their SEMulator 3D tool and their MEMS+ tool – the former for developing new semiconductor processes and the latter for designing MEMS devices.

Today we’re focusing on the SEMulator 3D tool, whose version 6.1 was recently announced. We’re doing so because the tool has turned a corner in how it’s used. But before going there, let’s first set the stage with a look at where we’ve been.

SEMulator 3D is probably most visible through its ability to animate a process flow. Start with some new idea and then simulate it to watch as a virtual wafer gets built. Need to prove that idea to someone else? An investor or a prospect, perhaps? Make a movie of the animation and display it. Convincing proof – and it looks slick. That paper presentation you saw where they animated some part of the process? Good chance that was the work of SEMulator 3D.

But, of course, that’s never been enough. Because as soon as you do one of those, you’re thinking, “Wait, what if I tweak this?” – and off you go again making a new movie, just like a regular Quentin Tarantino (but with less color saturation and less blood). And then you have to deal with variation within some of these tiny, complex features that might act one way on a Sunday and another on a Monday. And so now you have to explore the corners. More movies! Call it a trilogy! A pentology! A myriology!

While those might not be real words, that last one does hint at the scale of what’s going on – “myriad” originally indicating ten thousand (and, for the record, traditionally used as an adjective, not a noun – “We ran myriad experiments,” not “We ran a myriad of experiments” – although, as is the way with the English language, the more people do it wrong, the more acceptable it becomes). Because that’s exactly what’s happening: experiment counts are heading into the tens of thousands and beyond.

Various Solutions to Variation

How we deal with variation has changed over time. Once upon a time, we actually tried to get rid of it. I know – quaint, huh? Then we tried designing to a central process point and simulating corners to make the design resilient in the face of process variation. Now, according to Coventor, that last one has been turned on its head: we start with the corners and find where the best nominal settings will be.
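
To make that corners-first idea concrete, here’s a minimal sketch in Python. Everything in it – the toy response function, the knob names, the spec window – is invented for illustration; this is the shape of the approach, not Coventor’s actual tooling.

```python
from itertools import product

# Toy response: some outcome (say, a critical dimension) as a
# function of two process knobs. Purely illustrative.
def response(etch_time, dep_rate):
    return 10.0 + 0.8 * etch_time - 0.5 * dep_rate

SPEC_LO, SPEC_HI = 11.0, 13.0  # hypothetical spec window
SWING = 0.2                    # +/- variation applied to each knob

def worst_case_margin(nominal):
    """Evaluate the response at every corner of the variation box
    around a nominal point; return the smallest distance to spec."""
    margins = []
    for deltas in product((-SWING, SWING), repeat=len(nominal)):
        r = response(*(n + d for n, d in zip(nominal, deltas)))
        margins.append(min(r - SPEC_LO, SPEC_HI - r))
    return min(margins)

# Corners first: judge every candidate nominal by its worst corner,
# then keep the candidate whose worst corner is best.
candidates = [(e / 10, d / 10) for e in range(10, 31) for d in range(10, 31)]
best = max(candidates, key=worst_case_margin)
print(f"best nominal: {best}, worst-case margin: {worst_case_margin(best):.3f}")
```

The point is the inversion: instead of picking a nominal and checking its corners afterward, every candidate nominal is judged by its worst corner from the start.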

But there’s one thing required to do a good job at that: data. Lots of data. And the big question then becomes, “How best to get that data?” Traditionally, we’ve run wafers, turning various process knobs and measuring the outcomes. But that may mean stealing precious fab time from manufacturing or some other development. And each run takes a long time to process fully. And, while a physical build might seem like the gold standard for getting solid, reliable data, there’s a limit to what you can do.

In reality, you want more than just outcomes. You want all kinds of in-process and internal data as well so that you learn not only what happened, but you also get some clues as to why it happened – and what to change if the outcome isn’t ideal. So you instrument your dice and your wafers as best you can to give you those insights.
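
In software terms, “instrumenting” a virtual wafer can be as simple as logging a measurement after every process step instead of just at the end. A hypothetical sketch – the step names and the single metric are made up for brevity:

```python
from dataclasses import dataclass, field

@dataclass
class VirtualWafer:
    """Toy stand-in for a simulated wafer that logs a metric after
    every process step, not just the final outcome."""
    thickness: float = 0.0
    history: list = field(default_factory=list)

    def deposit(self, amount):
        self.thickness += amount
        self._measure("deposit")

    def etch(self, amount):
        self.thickness -= amount
        self._measure("etch")

    def _measure(self, step):
        # A real flow would capture many metrics here (CDs, sidewall
        # angles, overlay...); we log a single one for brevity.
        self.history.append((step, self.thickness))

w = VirtualWafer()
w.deposit(50.0)
w.etch(12.5)
w.deposit(30.0)
print(w.history)  # every intermediate value, not just the end state
```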

But this then becomes familiar territory for EDA types. In fact, historically, it’s been one of the challenges of emulation as opposed to simulation. You get way more speed with emulation, but you want the richness of data access that, in the old days, only simulation could give. Emulators have evolved to address this visibility consideration, but it takes hardware to do that. The thing about simulation – whether circuit or process – is that it’s all make-believe. You can wave your hand and instrument pretty much anything – there’s no node that’s inaccessible.

But here’s where the EDA/process similarities end. While emulators have bulked up on the hardware instrumentation that’s possible, fabs haven’t done so in the same way. So, if you really want all that internal data, process simulation is still the best way to get it.

And get it they have. We saw in a prior piece (the one linked above) that Coventor worked with Imec on process simulation. And they ran 1.4 million wafers – virtually. Yeah, it took about 5 weeks – but, spread over 1.4 million wafers, that averages out to a little over two seconds apiece. Imagine how long that many wafers would take to run in a real fab.

And imagine looking at 1.4 million animations. Or manually scouring the intimate data details from the runs of 1.4 million wafers. This is where the new problem emerges, filed under “B” for “Be careful what you wish for.” Now you’re awash in data – what are you going to do about that?

I Need a Statistician – STAT!

Yup, that’s right. You do what any right-minded technologist would do: bail and hand the problem to someone who knows how to extract insights from large volumes of data: a statistician. It turns out that even many large companies don’t employ in-house statisticians, so, presumably, there’s a robust consulting market for such services.

And, in fact, Coventor did engage the services of statisticians to help them understand how SEMulator 3D could be made to analyze large volumes of data. And that data analysis capability is now a big part of the latest release. Big Data attacks yet another corner of the design tool world.

And we are talking data that’s big. How much data you get per wafer depends, of course, on how much you instrument the wafer – in a virtual sense. But if you’re looking for causes and effects and perhaps second- or third-order behaviors, you need to watch lots of points to catch the nuance. Yeah, many of those points will end up not contributing to a final solution, but you don’t know which data is relevant until you know the answer.
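
One standard way to sort the relevant points from the dead weight – hypothetical here, not a claim about what’s inside SEMulator 3D – is to fit a simple model across all of them and rank the fitted effects:

```python
import numpy as np

rng = np.random.default_rng(0)

# 5,000 virtual runs, 200 instrumented points per run; only a few
# of those points actually drive the outcome.
X = rng.normal(size=(5000, 200))
true_weights = np.zeros(200)
true_weights[[3, 17, 42]] = [1.5, -0.8, 0.3]  # the "real" drivers
y = X @ true_weights + 0.1 * rng.normal(size=5000)

# Fit ordinary least squares over every instrumented point...
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# ...then rank the points by the magnitude of their fitted effect.
ranked = np.argsort(-np.abs(coef))
print("top candidate drivers:", ranked[:5])
```

With real data you’d worry about correlated points and higher-order terms, but the principle stands: collect everything, then let the fit tell you what mattered.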

So picture heavily instrumented wafers that are subject to, say, 100,000 overall variations. You’re going to be scooping up data like Ocean Spray scoops up cranberries. And you have to put it someplace quickly so that you can keep up, and then you need to analyze it. It’s that analysis that’s new.
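
Here’s a minimal sketch of the “put it someplace quickly” half, streaming each run’s results straight to disk as it completes. A plain CSV file stands in for whatever storage a production pipeline would actually use:

```python
import csv
import random

# Stream each run's results straight to disk as they arrive, so
# nothing piles up in memory; analysis happens later, over the file.
with open("runs.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["run_id", "knob_a", "knob_b", "outcome"])
    for run_id in range(100_000):  # the 100,000 variations
        a = random.uniform(0.0, 1.0)
        b = random.uniform(0.0, 1.0)
        outcome = 2.0 * a - b + random.gauss(0, 0.05)  # stand-in result
        writer.writerow([run_id, a, b, outcome])
```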

You might wonder what impact so much data analysis might have on the software run times. But Coventor says that it’s not the rate-limiting step – far from it. Running the simulation and capturing the data – much of which can be parallelized – is still the vast bulk of the work to be done. They say that analysis probably scales differently from the data gathering, but that it’s a small enough portion of overall runtime that they’re not focusing on that at the moment.
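
That split – embarrassingly parallel simulation, comparatively cheap analysis – maps naturally onto a worker pool. A sketch under assumptions; simulate_wafer here is a made-up stand-in for one expensive virtual build:

```python
import random
from multiprocessing import Pool
from statistics import mean, stdev

def simulate_wafer(seed):
    """Stand-in for one expensive virtual-wafer build; in reality
    this is where nearly all the runtime goes."""
    rng = random.Random(seed)
    return 12.0 + rng.gauss(0, 0.3)  # pretend outcome

if __name__ == "__main__":
    # The simulation/capture phase farms out across cores...
    with Pool() as pool:
        outcomes = pool.map(simulate_wafer, range(10_000))
    # ...while the analysis phase is a small slice of the total.
    print(f"mean={mean(outcomes):.3f}  sigma={stdev(outcomes):.3f}")
```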

So there you have it. Once a useful tool for turning out particularly good visualizations of what’s happening to a single wafer as it’s built, SEMulator 3D has now pivoted to the less glamorous, but more powerful, work of making sense out of gobs of data from myriad wafers.

 

More info:

Coventor SEMulator 3D
