
Taming Process Development

Process Relations Attempts to Minimize Experiments

– You mean we’ve got to do another run?

– Yeah, we decided we didn’t quite check enough temperature points, so we’ve got one more experiment here optimizing that.

– Hmmmm… ok, yeah, I guess I can see where we need that… But damn… I could swear we already did some work on this somewhere. Didn’t Alex do something like this?

– Well, yeah, but he was trying out some other things at the same time during the experiment, and they didn’t work, so he didn’t bother archiving all the information.

– Oh jeez… so it’s gonna take us a couple months to recreate what we could already have had. Are you sure it’s not somewhere on his hard drive?

– Um… might be… I don’t know if you ever wandered by his cube late at night while he was working. Let’s just say I really really don’t want to have to go poking around on his hard drive looking for stuff. I’m really uncomfortable with what I might find.

– Oh, wait! Maybe I still have some of the emails he sent around with his results. He might have attached some results files… Hmmm… no, looks like those emails have already been deleted. It was too long ago. Too bad.

– OK, then I guess we have to do the experiment. Seems like a waste. Oh, and remind those guys to be particularly careful with the run cards. We almost lost an expensive piece of equipment last week due to contamination from some mix-up in the flow they were experimenting with. Those guys aren’t reading carefully, and they’re being a bit sloppy when filling in the run cards.

New process development is an expensive proposition. Flows are getting longer and more complex, and each step of the way presents multiple opportunities to tune and optimize the flow – which also means multiple opportunities to get confused and screw things up and waste wafers and require new expensive mask sets. I know… Mr. Negative talkin’ here…

We can summarize the process as follows:

  1. Decide what information is needed.
  2. Design experiment(s).
  3. Give line operators instructions on how to run experiments.
  4. Run experiments, generating untold volumes of data.
  5. Re-run experiments if something got screwed up along the way.
  6. Sift through the data looking for useful information.
  7. Draw conclusions and write and distribute a report if the conclusions are useful.
  8. Rinse and repeat until ready for production.

There are a couple of missing steps here. After figuring out what information is needed, it would be nice if there were a way to see whether that information is already available from work already done. After designing the experiment, it would be good to confirm that the process steps in the experiment won’t use the equipment in a way that causes harm. And after drawing conclusions, it would be useful to archive the data – whether or not the experiment was considered a success.

This isn’t to suggest that no one does these things; they probably are done, just in an ad hoc manner. Because, realistically, the amount of data to be managed is enormous, and even if you were to comb through the results of all “successful” experiments, it may well take longer than just running the dang experiment. So you check here and there and, if there’s some low-hanging fruit somewhere, you pick it; otherwise you decide to grow your own orchard from scratch because it’s easier than trying to get to the high-hanging fruit.

What’s worse is that, typically, the guys developing the process do their work, and, once the process moves into the fab for production, the knowledge gained during development more or less disappears. People move on, data is scattered across personal hard drives and home directories on servers, and decisions made by email are lost to email retention policies. And so when, for example, someone wants to tweak the process a little to optimize yield, new work must be done when, in fact, that very same variable may already have been evaluated during development.

This is the scenario portrayed by Process Relations, a company focused on managing the process of process experimentation. Their goal is to put some context-aware structure behind the volumes of data generated in process experiments so that information can be readily gleaned at any time and new experiments can be avoided where possible.

The focus at present is on thin-film processing for both silicon IC and MEMS work. The overriding aim of the system can be over-simplified as follows: run as few actual experiments as possible. This is accomplished in two ways: get as much information as you can without running an experiment, and make sure the experiment is run correctly so that you don’t need to re-run portions of it due to some foul-up (including possible equipment damage).

Part of the philosophy behind eliminating unnecessary experiments hinges on the fact that failed experiments provide useful information. We get hung up, though, on the highly-charged word “fail” and don’t go through the extra effort of documenting such experiments. But by creating an environment where everything can be stored more easily, all experiments can be recorded and their results called up later.

We’re talking lots of data, however. Each experiment will have a number of lots, each of which has a number of wafers, each of which has a number of dice, each of which may have lots of measurements and artifacts like SEM pictures. Not to mention emails discussing the reasoning behind how the experiment was designed and aspects of the results. All of this can be stored in a database where relationships are established so that, sometime in the future, someone can run a query looking for “… all of the wafers running this particular process step with these temperature values…” and the results across all experiments in the system can be mined for golden nuggets.
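Just to make that concrete, a toy version of such a schema and query might look something like the following. To be clear, the table names, the LPCVD step, and the temperature window are my own illustrative inventions, not Process Relations’ actual data model.

```python
import sqlite3

# Illustrative schema only: experiments contain lots, lots contain wafers,
# and wafers carry process steps, measurements, and artifacts like SEM images.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE experiment   (id INTEGER PRIMARY KEY, name TEXT, outcome TEXT);
CREATE TABLE lot          (id INTEGER PRIMARY KEY,
                           experiment_id INTEGER REFERENCES experiment(id));
CREATE TABLE wafer        (id INTEGER PRIMARY KEY,
                           lot_id INTEGER REFERENCES lot(id));
CREATE TABLE process_step (id INTEGER PRIMARY KEY,
                           wafer_id INTEGER REFERENCES wafer(id),
                           step_name TEXT,      -- e.g. 'LPCVD nitride deposition'
                           temperature_c REAL); -- one of many tunable parameters
CREATE TABLE measurement  (id INTEGER PRIMARY KEY,
                           wafer_id INTEGER REFERENCES wafer(id),
                           quantity TEXT, value REAL,
                           artifact_path TEXT); -- e.g. where the SEM image lives
""")

# "...all of the wafers running this particular process step with these
# temperature values..." -- across every experiment in the system, pass or fail.
rows = conn.execute("""
    SELECT e.name, w.id, s.temperature_c, m.quantity, m.value
    FROM experiment e
    JOIN lot          l ON l.experiment_id = e.id
    JOIN wafer        w ON w.lot_id        = l.id
    JOIN process_step s ON s.wafer_id      = w.id
    JOIN measurement  m ON m.wafer_id      = w.id
    WHERE s.step_name = ? AND s.temperature_c BETWEEN ? AND ?
""", ("LPCVD nitride deposition", 750.0, 800.0)).fetchall()
```

The particular schema isn’t the point; the point is that once the relationships are explicit, mining for golden nuggets becomes a query rather than an archaeology dig through somebody’s hard drive.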

If it turns out you don’t have the information you need, then you can design an experiment. But before running real silicon, you can learn some things by emulating the experiment – that is, using modeling and empirical data to do a rough simulation of it – or by doing an actual TCAD run. Either may get you the information you need without committing real wafers.
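The emulation mechanics are left open here, but to give a hypothetical flavor of the idea, a crude emulation might simply interpolate over empirical numbers pulled from archived runs. The deposition-rate scenario and all of the values below are invented for illustration.

```python
# Hypothetical archived results: (temperature in degC, deposition rate in nm/min),
# as they might be pulled from earlier experiments already in the database.
archived = [(700.0, 2.1), (750.0, 3.4), (800.0, 5.2), (850.0, 7.9)]

def emulate_rate(temp_c: float) -> float:
    """Crude emulation: linearly interpolate the deposition rate
    between the two nearest archived temperature points."""
    pts = sorted(archived)
    if not pts[0][0] <= temp_c <= pts[-1][0]:
        raise ValueError("temperature outside the range covered by archived data")
    for (t0, r0), (t1, r1) in zip(pts, pts[1:]):
        if t0 <= temp_c <= t1:
            return r0 + (r1 - r0) * (temp_c - t0) / (t1 - t0)

# Predict the rate at an untried setpoint before committing any wafers.
print(emulate_rate(775.0))  # ~4.3 nm/min with these made-up numbers
```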

If you decide you actually need to run wafers, you can then validate the processing steps you’ve set up. Each of the machines used in your flow has a set of operating ranges and other restrictions – cleaning steps in advance, forbidden materials, etc. – that can be codified. By entering your specific experimental flow and validating it, you can ensure that nothing you want to do violates any of these restrictions. In addition, the system can then print run cards automatically, eliminating the chance that someone copies the information onto a run card incorrectly and removing yet another potential source of ruined wafers or damaged equipment.
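As a rough sketch of what that validation might look like, imagine each tool’s restrictions codified as data and every step of the proposed flow checked against them before anything touches the line. The tool names, fields, and limits here are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Tool:
    """Codified restrictions for one piece of equipment (illustrative)."""
    name: str
    temp_range_c: tuple[float, float]
    forbidden_materials: set[str] = field(default_factory=set)
    required_preceding_step: str | None = None  # e.g. a mandatory clean

@dataclass
class Step:
    tool: str
    temperature_c: float
    materials: set[str]

def validate_flow(flow: list[Step], tools: dict[str, Tool]) -> list[str]:
    """Return a list of violations; an empty list means the flow checks out."""
    problems = []
    for i, step in enumerate(flow):
        tool = tools[step.tool]
        lo, hi = tool.temp_range_c
        if not lo <= step.temperature_c <= hi:
            problems.append(f"step {i}: {step.temperature_c} C is outside "
                            f"{tool.name}'s range of {lo}-{hi} C")
        if bad := step.materials & tool.forbidden_materials:
            problems.append(f"step {i}: {sorted(bad)} forbidden in {tool.name}")
        if tool.required_preceding_step and (
                i == 0 or flow[i - 1].tool != tool.required_preceding_step):
            problems.append(f"step {i}: {tool.name} requires a "
                            f"{tool.required_preceding_step} step first")
    return problems

# A flow with two deliberate violations: too hot, and a forbidden material.
tools = {
    "rca-clean": Tool("rca-clean", (20.0, 80.0)),
    "furnace-3": Tool("furnace-3", (400.0, 1100.0), {"gold"}, "rca-clean"),
}
flow = [Step("rca-clean", 25.0, set()),
        Step("furnace-3", 1150.0, {"gold"})]
for problem in validate_flow(flow, tools):
    print(problem)
```

The run cards mentioned above would then presumably be generated from this same checked flow description rather than copied by hand.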

Thus armed, the experiment flow above might be changed to the following:

  1. Decide what information is needed.
  2. Check to see if the information already exists. If so, you’re done.
  3. Design experiment(s).
  4. Emulate or simulate the experiment. If that provides the information you need, you’re done.
  5. Check the process to make sure it’s valid.
  6. Run experiments. Don’t re-run experiments.
  7. Load results into database.
  8. View information.
  9. Draw conclusions.
  10. Rinse and repeat fewer times until ready for production.

Yes, that looks like more steps, but you’ve got a couple of critical early-escape points. And loading the info into the database not only makes the data available for others to use later on but should also make it easier to evaluate your own experiment right away.

So, if all of this works as promised, then, rather than surveying the smoking hulk of a deposition machine after some PhD intern screwed up the settings while being distracted by late-night browsings that were depositing questionable cache files in and amongst the soon-to-be experimental results that someone would then need latex gloves to explore long after the intern had skittered back home, you have a centralized, organized cache of results that can be queried at a high level, from good experiments and bad, and a way to help ensure that future experiments are necessary and go according to plan.

Link: Process Relations
