
Taming Process Development

Process Relations Attempts to Minimize Experiments

– You mean we’ve got to do another run?

– Yeah, we decided we didn’t quite check enough temperature points, so we’ve got one more experiment here optimizing that.

– Hmmmm… OK, yeah, I guess I see why we need that… But damn… I could swear we already did some work on this somewhere. Didn’t Alex do something like this?

– Well, yeah, but he was trying out some other things at the same time during the experiment, and they didn’t work, so he didn’t bother archiving all the information.

– Oh jeez… so it’s gonna take us a couple months to recreate what we could already have had. Are you sure it’s not somewhere on his hard drive?

– Um… might be… I don’t know if you ever wandered by his cube late at night while he was working. Let’s just say I really really don’t want to have to go poking around on his hard drive looking for stuff. I’m really uncomfortable with what I might find.

Oh, wait! Maybe I still have some of the emails he sent around with his results. He might have attached some results files… Hmmm… no, looks like those emails have already been deleted. It was too long ago. Too bad.

– OK, then I guess we have to do the experiment. Seems like a waste. Oh, and remind those guys to be particularly careful with the run cards. We almost lost an expensive piece of equipment last week due to contamination from some mix-up in the flow they were experimenting with. Those guys aren’t reading carefully, and they’re being a bit sloppy when filling in the run cards.

New process development is an expensive proposition. Flows are getting longer and more complex, and each step of the way presents multiple opportunities to tune and optimize the flow – which also means multiple opportunities to get confused and screw things up and waste wafers and require new expensive mask sets. I know… Mr. Negative talkin’ here…

The process can be summarized as follows:

  1. Decide what information is needed.
  2. Design experiment(s).
  3. Give line operators instructions on how to run experiments.
  4. Run experiments, generating untold volumes of data.
  5. Re-run experiments if something got screwed up along the way.
  6. Sift through the data looking for useful information.
  7. Draw conclusions and write and distribute a report if the conclusions are useful.
  8. Rinse and repeat until ready for production.

There are a couple of missing steps here. After figuring out what information is needed, it would be nice if there were a way to see whether the information is already available based on work already done. After designing the experiment, it would be good to confirm whether the process steps in the experiment use the equipment in a way that causes no harm. And after drawing conclusions, it would be useful to archive the data – whether or not the experiment was considered a success.

This isn’t to suggest that no one does these things; they probably are done, just in an ad hoc manner. Because, realistically, the amount of data to be managed is enormous, and even if you were to comb through the results of all “successful” experiments, it may well take longer than just running the dang experiment. So you check here and there and, if there’s some low-hanging fruit somewhere, you pick it; otherwise you decide to grow your own orchard from scratch because it’s easier than trying to get to the high-hanging fruit.

What’s worse is that, typically, the guys developing the process do their work, and, once the process moves into the fab for production, the knowledge gained during the development work more or less disappears. People move on, data is captured here or there on personal hard drives or home directories on servers, decisions are made by email and are lost through email retention policies. And so when, for example, someone wants to tweak the process a little bit to optimize yield, new work must be done when, in fact, that very same variable may have already been evaluated during development.

This is the scenario portrayed by Process Relations, a company focusing on managing the process of process experimentation. Their goal is to put some context-aware structure behind the volumes of data generated in process experiments so that information can be readily gleaned at any time and new experiments can be avoided where possible.

The focus at present is on thin-film processing for both silicon IC and MEMS work. The overriding aim of the system can be over-simplified as follows: run as few actual experiments as possible. This is accomplished in two ways: get as much information as you can without running an experiment, and make sure the experiment is run correctly so that you don’t need to re-run portions of it due to some foul-up (including possible equipment damage).

Part of the philosophy behind eliminating unnecessary experiments hinges on the fact that failed experiments provide useful information. But we get hung up on the highly-charged word “fail” and don’t go through the extra effort to document such experiments. In fact, by creating an environment where everything can be stored more easily, all experiments can be recorded and their results called up later.

We’re talking lots of data, however. Each experiment will have a number of lots, each of which has a number of wafers, each of which has a number of dice, each of which may have lots of measurements and artifacts like SEM pictures. Not to mention emails discussing the reasoning behind how the experiment was designed and aspects of the results. All of this can be stored in a database where relationships are established so that, sometime in the future, someone can run a query looking for “… all of the wafers running this particular process step with these temperature values…” and the results across all experiments in the system can be mined for golden nuggets.
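The experiment/lot/wafer/die hierarchy and the kind of cross-experiment query described above can be sketched with a tiny relational model. This is purely illustrative: the table names, columns, and data are invented here and are not Process Relations’ actual schema.

```python
# Hypothetical sketch of the experiment-data hierarchy, using SQLite.
# Schema and sample data are invented for illustration only.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE experiment (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE lot   (id INTEGER PRIMARY KEY, experiment_id INTEGER);
CREATE TABLE wafer (id INTEGER PRIMARY KEY, lot_id INTEGER);
CREATE TABLE step  (wafer_id INTEGER, name TEXT, temperature_c REAL);
""")

db.executemany("INSERT INTO experiment VALUES (?, ?)",
               [(1, "oxide ramp A"), (2, "oxide ramp B")])
db.executemany("INSERT INTO lot VALUES (?, ?)", [(10, 1), (20, 2)])
db.executemany("INSERT INTO wafer VALUES (?, ?)", [(100, 10), (200, 20)])
db.executemany("INSERT INTO step VALUES (?, ?, ?)",
               [(100, "wet oxidation", 1000.0),
                (200, "wet oxidation", 1100.0)])

# "...all of the wafers running this particular process step
#  with these temperature values..." -- mined across ALL experiments.
rows = db.execute("""
SELECT e.name, w.id, s.temperature_c
FROM step s
JOIN wafer w      ON w.id = s.wafer_id
JOIN lot l        ON l.id = w.lot_id
JOIN experiment e ON e.id = l.experiment_id
WHERE s.name = 'wet oxidation' AND s.temperature_c BETWEEN 950 AND 1050
""").fetchall()
print(rows)
```

The point is the join path: because every wafer is linked back to its lot and experiment, one query sweeps results from every experiment in the system, successful or not.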

If it turns out you don’t have the information you need, then you can design an experiment. But before actually running real silicon, you can learn some things by emulating the experiment – that is, using modeling and empirical data to do a rough simulation of the experiment – or actually doing a TCAD run. These may help you get the information you need without running real silicon.

If you decide you actually need to run wafers, you can then validate the processing steps you’ve set up. Each of the machines used in your flow has a set of operating ranges and other restrictions – cleaning steps in advance, forbidden materials, etc. – that can be codified. By entering your specific experimental flow and validating it, you can ensure that none of what you want to do violates any of these restrictions. In addition, it can then print out run cards automatically, helping to eliminate the chance that someone manually copies the information onto the run card incorrectly, removing yet another potential source of ruined wafers or damaged equipment.
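Codifying equipment restrictions and checking a flow against them before any wafer moves amounts to a simple rule check. A minimal sketch, assuming invented tool names, temperature limits, and material lists (the real system’s rules and interfaces would differ):

```python
# Hedged sketch of validating an experimental flow against codified
# equipment restrictions (operating ranges, forbidden materials).
# All names and limits here are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Tool:
    name: str
    temp_range_c: tuple                  # (min, max) allowed temperature
    forbidden_materials: set = field(default_factory=set)

@dataclass
class Step:
    tool: str
    temperature_c: float
    materials: set

def validate(flow, tools):
    """Return a list of violations; an empty list means the flow is safe."""
    problems = []
    for i, step in enumerate(flow, 1):
        tool = tools[step.tool]
        lo, hi = tool.temp_range_c
        if not lo <= step.temperature_c <= hi:
            problems.append(f"step {i}: {step.temperature_c} C outside "
                            f"{tool.name} range {lo}-{hi} C")
        bad = step.materials & tool.forbidden_materials
        if bad:
            problems.append(f"step {i}: {sorted(bad)} forbidden in {tool.name}")
    return problems

tools = {"furnace": Tool("furnace", (400, 1200), {"gold"})}
flow = [Step("furnace", 1000, {"silicon"}),
        Step("furnace", 1300, {"gold"})]
print(validate(flow, tools))  # second step trips both checks
```

The same codified flow that passes validation is what would drive the automatically printed run cards, so the operators never transcribe anything by hand.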

Thus armed, the experiment flow above might be changed to the following:

  1. Decide what information is needed.
  2. Check to see if the information already exists. If so, you’re done.
  3. Design experiment(s).
  4. Emulate or simulate the experiment. If that provides the information you need, you’re done.
  5. Check the process to make sure it’s valid.
  6. Run experiments. Don’t re-run experiments.
  7. Load results into database.
  8. View information.
  9. Draw conclusions.
  10. Rinse and repeat fewer times until ready for production.
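The two early-escape points in the revised flow can be seen as a simple control structure. The function names below are placeholders standing in for the archive query, the emulation/TCAD step, and the validated fab run; they are not a real API.

```python
# Illustrative control flow for the revised experiment loop,
# highlighting the two early-escape points. The archive/emulate/run
# callables are hypothetical placeholders.
def develop(need, archive, emulate, run_fab):
    hit = archive(need)              # step 2: mine existing results
    if hit is not None:
        return hit                   # done -- no experiment at all
    result = emulate(need)           # step 4: emulation / TCAD
    if result is not None:
        return result                # done -- no silicon needed
    return run_fab(need)             # steps 5-9: validated fab run

# Usage: here the archive answers, so nothing is ever run in the fab.
answer = develop("oxide rate vs. temp",
                 archive=lambda q: {"rate": 1.2},
                 emulate=lambda q: None,
                 run_fab=lambda q: {"rate": 1.1})
print(answer)
```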

Yes, that looks like more steps, but you’ve got a couple of critical early-escape points. And loading the info into the database not only makes data available for others to use later on, but should even make it easier to evaluate your own experiment right away.

So, if all of this works as promised, then, rather than surveying the smoking hulk of a deposition machine after some PhD intern screwed up the settings while being distracted by late-night browsings that were depositing questionable cache files in and amongst the soon-to-be experimental results that someone would then need latex gloves to explore long after the intern had skittered back home, you have a centralized, organized cache of results that can be queried at a high level, from good experiments and bad, and a way to help ensure that future experiments are necessary and go according to plan.

Link: Process Relations
