
Taming Process Development

Process Relations Attempts to Minimize Experiments

– You mean we’ve got to do another run?

– Yeah, we decided we didn’t quite check enough temperature points, so we’ve got one more experiment here optimizing that.

– Hmmmm… OK, yeah, I guess I see why we need that… But damn… I could swear we already did some work on this somewhere. Didn’t Alex do something like this?

– Well, yeah, but he was trying out some other things at the same time during the experiment, and they didn’t work, so he didn’t bother archiving all the information.

– Oh jeez… so it’s gonna take us a couple months to recreate what we could already have had. Are you sure it’s not somewhere on his hard drive?

– Um… might be… I don’t know if you ever wandered by his cube late at night while he was working. Let’s just say I really really don’t want to have to go poking around on his hard drive looking for stuff. I’m really uncomfortable with what I might find.

Oh, wait! Maybe I still have some of the emails he sent around with his results. He might have attached some results files… Hmmm… no, looks like those emails have already been deleted. It was too long ago. Too bad.

– OK, then I guess we have to do the experiment. Seems like a waste. Oh, and remind those guys to be particularly careful with the run cards. We almost lost an expensive piece of equipment last week due to contamination from some mix-up in the flow they were experimenting with. Those guys aren’t reading carefully, and they’re being a bit sloppy when filling in the run cards.

New process development is an expensive proposition. Flows are getting longer and more complex, and each step of the way presents multiple opportunities to tune and optimize the flow – which also means multiple opportunities to get confused and screw things up and waste wafers and require new expensive mask sets. I know… Mr. Negative talkin’ here…

We can summarize the process as follows:

  1. Decide what information is needed.
  2. Design experiment(s).
  3. Give line operators instructions on how to run experiments.
  4. Run experiments, generating untold volumes of data.
  5. Re-run experiments if something got screwed up along the way.
  6. Sift through the data looking for useful information.
  7. Draw conclusions and write and distribute a report if the conclusions are useful.
  8. Rinse and repeat until ready for production.

There are a couple of missing steps here. After figuring out what information is needed, it would be nice if there were a way to see whether the information is already available based on work already done. After designing the experiment, it would be good to confirm that the process steps in the experiment won’t use the equipment in a way that causes harm. And after drawing conclusions, it would be useful to archive the data – whether or not the experiment was considered a success.

This isn’t to suggest that no one does these things; they probably are done, just in an ad hoc manner. Because, realistically, the amount of data to be managed is enormous, and even if you were to comb through the results of all “successful” experiments, it may well take longer than just running the dang experiment. So you check here and there and, if there’s some low-hanging fruit somewhere, you pick it; otherwise you decide to grow your own orchard from scratch because it’s easier than trying to get to the high-hanging fruit.

What’s worse is that, typically, the guys developing the process do their work, and, once the process moves into the fab for production, the knowledge gained during the development work more or less disappears. People move on, data is captured here or there on personal hard drives or home directories on servers, decisions are made by email and are lost through email retention policies. And so when, for example, someone wants to tweak the process a little bit to optimize yield, new work must be done when, in fact, that very same variable may have already been evaluated during development.

This is the scenario portrayed by Process Relations, a company focusing on managing the process of process experimentation. Their goal is to put some context-aware structure behind the volumes of data generated in process experiments so that information can be readily gleaned at any time and new experiments can be avoided where possible.

The focus at present is on thin-film processing for both silicon IC and MEMS work. The overriding aim of the system can be over-simplified as follows: run as few actual experiments as possible. This is accomplished in two ways: get as much information as you can without running an experiment, and make sure the experiment is run correctly so that you don’t need to re-run portions of it due to some foul-up (including possible equipment damage).

Part of the philosophy behind eliminating unnecessary experiments hinges on the fact that failed experiments provide useful information. But we get hung up on the highly charged word “fail” and don’t make the extra effort to document such experiments. By creating an environment where everything can be stored more easily, however, all experiments can be recorded and their results called up later.

We’re talking lots of data, however. Each experiment will have a number of lots, each of which has a number of wafers, each of which has a number of dice, each of which may have lots of measurements and artifacts like SEM pictures. Not to mention emails discussing the reasoning behind how the experiment was designed and aspects of the results. All of this can be stored in a database where relationships are established so that, sometime in the future, someone can run a query looking for “… all of the wafers running this particular process step with these temperature values…” and the results across all experiments in the system can be mined for golden nuggets.
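To make this concrete, here’s a minimal sketch of what such a hierarchy and query might look like in a relational store, using Python’s built-in sqlite3 module. The schema, the table and column names, and the “anneal” step are illustrative assumptions on my part, not Process Relations’ actual data model.

    import sqlite3

    # Experiment -> lot -> wafer hierarchy, with one row per process
    # step applied to a wafer.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE experiment (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE lot   (id INTEGER PRIMARY KEY,
                            experiment_id INTEGER REFERENCES experiment(id));
        CREATE TABLE wafer (id INTEGER PRIMARY KEY,
                            lot_id INTEGER REFERENCES lot(id));
        CREATE TABLE step  (wafer_id INTEGER REFERENCES wafer(id),
                            step_name TEXT, temperature_c REAL);
    """)

    # "...all of the wafers running this particular process step with
    # these temperature values..." -- queried across all experiments.
    rows = conn.execute("""
        SELECT e.name, w.id, s.temperature_c
        FROM step s
        JOIN wafer w      ON w.id = s.wafer_id
        JOIN lot l        ON l.id = w.lot_id
        JOIN experiment e ON e.id = l.experiment_id
        WHERE s.step_name = 'anneal'
          AND s.temperature_c BETWEEN 400 AND 450
    """).fetchall()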

If it turns out you don’t have the information you need, then you can design an experiment. But before actually running real silicon, you can learn some things by emulating the experiment – that is, using modeling and empirical data to do a rough simulation of the experiment – or actually doing a TCAD run. These may help you get the information you need without running real silicon.
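As a toy illustration of that kind of emulation – fitting a simple model to empirical data from past runs and predicting the outcome at an untried setting – consider the sketch below. The data points and the quadratic fit are made-up assumptions; a real emulation would use calibrated process models or a TCAD run.

    import numpy as np

    # Deposition rates (nm/min) measured at a few temperatures in past runs.
    temps_c = np.array([350.0, 400.0, 450.0, 500.0])
    rate    = np.array([ 12.1,  18.4,  26.9,  37.8])

    # Crude empirical fit; interpolate at a temperature nobody has run yet.
    coeffs = np.polyfit(temps_c, rate, deg=2)
    predicted = np.polyval(coeffs, 425.0)

    print(f"predicted rate at 425 degC: {predicted:.1f} nm/min")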

If you decide you actually need to run wafers, you can then validate the processing steps you’ve set up. Each of the machines used in your flow has a set of operating ranges and other restrictions – cleaning steps in advance, forbidden materials, etc. – that can be codified. By entering your specific experimental flow and validating it, you can ensure that nothing you want to do violates any of these restrictions. In addition, the system can then print out run cards automatically, helping to eliminate the chance that someone copies the information onto a run card incorrectly, removing yet another potential source of ruined wafers or damaged equipment.
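A minimal sketch of what codifying and checking those restrictions might look like follows; the rule set, the flow format, and the field names are all hypothetical, for illustration only.

    # Hypothetical codified restrictions for one machine.
    EQUIPMENT_RULES = {
        "furnace_3": {
            "temperature_c": (300.0, 1100.0),           # allowed operating range
            "forbidden_materials": {"gold", "copper"},  # contamination risk
            "requires_preclean": True,
        },
    }

    def validate_step(step):
        """Return a list of rule violations for one process step."""
        rules = EQUIPMENT_RULES[step["machine"]]
        errors = []
        lo, hi = rules["temperature_c"]
        if not lo <= step["temperature_c"] <= hi:
            errors.append(f"{step['machine']}: {step['temperature_c']} degC "
                          f"outside allowed range {lo}-{hi}")
        if step["material"] in rules["forbidden_materials"]:
            errors.append(f"{step['machine']}: {step['material']} is forbidden")
        if rules["requires_preclean"] and not step.get("precleaned"):
            errors.append(f"{step['machine']}: required preclean is missing")
        return errors

    # Validate the whole experimental flow before printing any run cards.
    flow = [{"machine": "furnace_3", "temperature_c": 450.0,
             "material": "silicon", "precleaned": True}]
    for step in flow:
        for problem in validate_step(step):
            print("VALIDATION FAILED:", problem)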

Thus armed, the experiment flow above might be changed to the following:

  1. Decide what information is needed.
  2. Check to see if the information already exists. If so, you’re done.
  3. Design experiment(s).
  4. Emulate or simulate the experiment. If that provides the information you need, you’re done.
  5. Check the process to make sure it’s valid.
  6. Run experiments. Don’t re-run experiments.
  7. Load results into database.
  8. View information.
  9. Draw conclusions.
  10. Rinse and repeat fewer times until ready for production.

Yes, that looks like more steps, but you’ve got a couple of critical early-escape points. And loading the info into the database not only makes data available for others to use later on, but should even make it easier to evaluate your own experiment right away.
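Condensed into code, the loop and its early-escape points might look something like the sketch below, where every function is a hypothetical stand-in for a real subsystem.

    # Hypothetical stand-ins for the real subsystems.
    def database_has_answer(q): return False       # step 2: search prior work
    def lookup(q): return "answered from archive"
    def emulate(design): return None               # step 4: None = inconclusive
    def validate(design): pass                     # step 5: check restrictions
    def run(design): return {"wafers": 25}         # step 6: run once, correctly
    def archive(data): pass                        # step 7: keep it, good or bad

    def develop(question):
        if database_has_answer(question):          # escape 1: reuse prior work
            return lookup(question)
        design = {"question": question}
        result = emulate(design)                   # escape 2: no silicon needed
        if result is not None:
            return result
        validate(design)                           # protect wafers and equipment
        data = run(design)
        archive(data)
        return data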

So, if all of this works as promised, then, rather than surveying the smoking hulk of a deposition machine after some PhD intern screwed up the settings while being distracted by late-night browsings that were depositing questionable cache files in and amongst the soon-to-be experimental results that someone would then need latex gloves to explore long after the intern had skittered back home, you have a centralized, organized cache of results that can be queried at a high level, from good experiments and bad, and a way to help ensure that future experiments are necessary and go according to plan.

Link: Process Relations
