Apr 05, 2013

The Scribe and the Princess and the Pea

posted by Bryon Moyer

OK, perhaps “scribe line” is more accurate, but I do love a double entendre (even if not salacious). I had a discussion with KLA-Tencor at SPIE Litho recently regarding two new machines they’ve just announced. The first allows detection of defects through spectral analysis. The challenge it faces is that it relies on test structures in the scribe line, which are being squeezed from two directions: more of them are needed, and there’s less space for them.

More test features are required both because of new structures like the FinFET and because of new processing steps, double-patterning in particular. But such structures have taken advantage of a generous scribe line, whose width was originally dictated by the kerf of actual mechanical saws way back in the day. The cutting is done by laser now, so the kerf is no longer the issue. Meanwhile, the scribe line has a measurable impact on dice per wafer, so shrink it must.
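To get a rough feel for why the scribe line matters to die count, here’s a minimal back-of-the-envelope sketch in Python. The die and wafer dimensions are made up, and the gross-die-per-wafer formula is the common textbook approximation, not anything specific to KLA-Tencor:

    import math

    def gross_die_per_wafer(die_w_mm, die_h_mm, scribe_um, wafer_d_mm=300.0):
        # Die pitch includes the scribe line on each axis.
        pitch_w = die_w_mm + scribe_um / 1000.0
        pitch_h = die_h_mm + scribe_um / 1000.0
        area = pitch_w * pitch_h
        # Wafer area over die area, minus a standard edge-loss correction.
        return int(math.pi * (wafer_d_mm / 2) ** 2 / area
                   - math.pi * wafer_d_mm / math.sqrt(2 * area))

    # Hypothetical 5 mm x 5 mm die on a 300 mm wafer:
    for scribe_um in (80, 60, 40):
        print(scribe_um, "um scribe ->", gross_die_per_wafer(5, 5, scribe_um), "die")

With these toy numbers, narrowing the scribe from 80 µm to 40 µm is worth a few dozen extra die per wafer, which adds up over a fab’s monthly output.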

The features that their SpectraShape 9000 analyzer looks for are periodic, and their spectra under broadband illumination can be analyzed twelve ways from Sunday. Each of those features goes in a “box” that is currently 45 µm square. To accommodate the smaller scribe line, they’ve reduced the box to 25 µm on a side (meaning more than three of the new boxes fit in the area one old one occupied, since (45/25)² ≈ 3.2).

This has come with higher broadband light power, improved sensitivity, and higher throughput for more sampling.

Meanwhile, we’ve come to the point where the smallest (OK, maybe not smallest, but very small) particle – on the backside of the wafer – can push the upper surface out of the depth of field during exposure. Seriously. Total princess-and-pea situation. It gets worse because smaller particles tend to stick harder due to van der Waals forces. And yet such a particle may transfer to the chuck, sharing the donation with the next wafers to come through.
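To put numbers on the pea (generic ArF-immersion figures, not anything KLA-Tencor quoted), the usable depth of focus at a leading-edge node is roughly

    DOF \approx k_2 \, \lambda / \mathrm{NA}^2 \approx 0.5 \times 193\,\mathrm{nm} / 1.35^2 \approx 53\,\mathrm{nm}

so a backside particle even a fraction of a micron tall, which can locally lift the top surface by something on the order of its own height, blows through the entire focus budget several times over.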

Rather than noticing the effect of such a particle and then going and figuring out where it is, they’ve created a new use model: inspect the backside.* Of each wafer, before it goes into a process. This prevents the particles from ever getting into the chamber – as long as it can be done quickly enough to keep the line moving.

They’ve boosted sensitivity on their BDR300 by 10x to allow for detection of half-micron defects at 100 wafers/hour. They also have a review capability, allowing inspection of defects down to 0.2 µm. It can be integrated into their CIRCL cluster.

You can find out more about these machines in their release.

*There’s so much potential for abusing this… especially when looking for defects like paddle marks… but this is a family newspaper. Oh, OK, who am I kidding…

Apr 02, 2013

3D-IC Planning

posted by Bryon Moyer

During Cadence’s recent CDNlive event, I sat down with Kevin Rinebold to talk about 3D-IC planning and design. Actually, the scope is broader than that, covering all of the multi-die/package combinations: system-in-package (SiP), complex PC boards, and interposer-based solutions. The basic issue is that it’s becoming increasingly difficult to separate die design from board/package design; you may have to plan both together.

Said another way, duties that used to belong to board design have encroached on die design as packages have started to look more and more like micro-PCBs. The “lumpiness” of old-fashioned design is giving way to a more distributed approach as the “lumps” interact in non-lumpy ways.

Cadence’s approach splits the process in two: planning and implementation. Their focus during our discussion was the planning portion. Why split this part of the process out? Because it’s generally done by the packaging people (the outsourced assembly and test houses, or OSATs), not the silicon people. So the OSATs do the high-level planning, which is akin to floorplanning on a die (and may actually involve floorplanning on a substrate).

They hand their results to the implementation folks via an abstract file and, possibly, some constraints to ensure that critical concerns will be properly addressed during design. The abstract file isn’t a view into a database; it is a one-off file, so if changes are made to the plan, new abstracts can (or should) be generated.

Cadence says the key to this is their OrbitIO tool, from their Sigrity group. It allows mechanical planning – things like ensuring that power and ground pins are located near their respective planes. They can also do some power IR drop analysis, although more complete electrical capabilities will come in the future.
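As a crude illustration of what an IR-drop check amounts to at this stage (a conceptual sketch only, not how OrbitIO does it; every number below is invented):

    # Static IR drop on a package power net, checked against a budget.
    supply_v      = 0.9      # core supply in volts (assumed)
    total_current = 20.0     # current drawn by the die in amps (assumed)
    n_bumps       = 400      # power bumps in parallel (assumed)
    r_bump        = 0.020    # ohms per bump plus its via stack (assumed)
    r_plane       = 0.0005   # spreading resistance of the plane in ohms (assumed)

    r_path = r_bump / n_bumps + r_plane   # bumps in parallel, then the plane
    ir_drop = total_current * r_path      # V = I * R
    budget = 0.03 * supply_v              # allow, say, 3% of the supply

    print(f"IR drop: {ir_drop * 1000:.1f} mV, budget: {budget * 1000:.0f} mV,",
          "OK" if ir_drop <= budget else "violation")

Placing power and ground pins near their planes is what keeps the path resistance, and hence the drop, small.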

There’s one other reason why the planning and implementation are done with completely different tools (mediated by the abstract file): OSATs tend to work on Windows machines, while designers tend to work on Linux machines. No, this is not an invitation to debate. (Oh, wait, Apple isn’t involved in this comparison… OK… never mind…)

Apr 01, 2013

What’s Virtual Is Real

posted by Bryon Moyer

Google is known for engaging broadly across a range of seemingly unrelated projects. Whether glasses or driverless cars or even same-day delivery, it’s as if they need to be everywhere so that, when something happens, they’re a part of it.

But some of these different projects appear to be coming together a bit more with yet another service that’s about to roll out: Google Stuff. Unlike their prior efforts, however, which tend to virtualize real things, this offering comes full circle, applying much of what they’ve learned in the data world to everyday items.

The idea is that they are building a number of giant warehouses. When you sign up for the service, you get a real live human agent that comes to your house. They start by observing how you live, what you do, and, in particular, which things you use most frequently. Based on this, they will take almost all of your stuff and move it to their warehouse. They’ll leave a cache of your most commonly-needed items at your house; everything else goes.

When you need something, your agent will arrange for quick delivery (same-day for the Bay Area at this point) from the warehouse. If the agent notices changes in your lifestyle or major life events like a baby coming, then the cache may change; they’re working on algorithms to best gauge what you need at any time, with a goal of your not noticing that you don’t have all your stuff right there in the house.

When grocery shopping, for instance, you’ll do exactly what you’ve been doing, except that, at checkout, you won’t take the groceries to your car. Rather than your having to haul the stuff half a mile to your house, your agent will have it shipped to the warehouse a mere thousand miles away; the specific things you need will then be delivered as you need them, according to a “metering-out” algorithm tuned to your past practices.

In the pilot cases, this has worked pretty well except in situations where people are, for example, buying extra food and drink for a party. The algorithm can’t really tell this, so it just assumes you’re stocking up and continues metering at the normal rate. They’re trying to improve this based on a few party disasters where none of the food or drink showed up.

It gets a bit silly if you buy something for immediate use, like a sweatshirt. You still need to ship it to the warehouse, from which it will immediately be shipped back. There doesn’t seem to be any way to avoid that extra round trip.

The touted benefit of this is that we can all have much smaller houses. We will no longer need space for all of our stuff; Google will have it. We can also get much better fire insurance rates because our stuff won’t get burned up in a house fire; it’s protected in the warehouse.

There are some obvious concerns, like what happens if there’s a problem accessing the warehouse or if your agent gets sick. At present, unless you have a complete connection to the warehouse, you simply can’t get your stuff (unless what you want is fortuitously cached locally).

It also sounds very labor-intensive, what with all the agents and truck drivers moving stuff around. But that aspect of the model is temporary. The driverless vehicle project is partly designed to remove the humans from delivery, using autonomous trucks. And the agents themselves are simply prototypes for the robotic agents that will ultimately be able to handle the job more reliably without, for example, getting sick and threatening your access to your stuff.

And what does Google get out of this? There have been some rumors amongst the pilot users that Google actually makes money on the side by loaning out your stuff, particularly the things you don’t use often. There’s even talk of a secret “pack-rat” algorithm that can figure out which stuff exists merely for the thrill of acquisition, meaning the owner will never ask for it again; such stuff may be sold off.

It’s hard to tell whether these tales are true, since there is no user-accessible accounting of transactions. If something goes into the warehouse, you can tell that it’s there; if it disappears, there is no remaining evidence that it was ever in there. It’s assumed that there are internal logs: you may not be able to get a record of your stuff going in, but law enforcement can (creating an issue for drug stashes that suddenly end up in the warehouse). Google has declined comment on these rumors.

But the big win for Google is a separate derivative product, Google Lifestyles, a service made available to advertisers and vendors, who will receive a daily report of everything you do and all transfers to and from your warehouse section. It is eagerly anticipated by companies who will finally be able to see who uses their competitors’ products, or who doesn’t use their products at all. This is expected to be the biggest revenue generator for Google out of the entire project.

There’s always the question of user control, but like most things Google, you hand control over to them. You can’t partially use the service; once you’re signed up, you can’t touch any of your stuff. They completely take over. Of course, there’s no live help or customer service (can you imagine having to staff that up for something this big?). So it has that classic air of Google mystery, which most people deal with simply by resigning themselves to the fact that they’re no longer in charge of their own stuff and going back to watching World’s Funniest Videos.

Rollout of Google Stuff will happen in phases over the next six months.
