Jun 12, 2014

Sensor or Switch?

posted by Bryon Moyer

Honeywell recently released a new AMR (anisotropic magneto-resistive) sensor. We looked at this basic technology some time back, but there was another aspect of the release that confused me: the sensor was compared to a reed switch. And, at first glance, I don’t see a switch (=actuator) and a sensor as being the same thing.

For those of you steeped in this technology, what follows may seem rather basic and even obvious. But if you’re new to the space, then there’s some room to untangle some concepts that can be easily conflated.

Part of the issue has to do with being precise with terms that might be confused. If I think sloppily, I end up confusing a reed switch with a reed relay. What’s the difference? Well, a reed switch is simply a two-lead component. The switch connects the leads, presumably completing some circuit. That switch is actuated by a magnetic field (either to open or close it). That field is applied externally; exactly how depends on the application. Critically, there’s no magnetic component built into the switch.

So, in a way, the reed switch is a magnetic field detector. When the field exceeds a threshold, the reed moves, and you can think of this as a crude digital magnetic field sensor.

Now, if you include a magnetic coil along with the reed switch, adding two new leads, you have a reed relay. This is much more of an actuator than a sensor, since it creates its own magnetic field. So switch/relay confusion can lead to sensor/actuator confusion.

Now let’s look at the AMR sensor schematic from the data sheet. From the outside, it may look just like a Hall Effect sensor, another sensor based on magnetic phenomena. (The field directions are apparently different, but I won’t dwell on that.)

[Figure: AMR sensor schematic from the Honeywell data sheet]

On the left is the detector circuit. Because this constantly draws power, it must do so exceedingly sparingly. The original application for this (more on that in a moment) required no more than 500 nA; Honeywell has a couple of devices, one at 310 nA, the other at 360 nA. They claim this to be more than an order of magnitude more miserly than the lowest-power Hall Effect device, with greater sensitivity.

Once it detects the field, it flips the flop and the output value changes. Now… this output looks something like a beefy CMOS output, not like a wire in a reed switch. And if it drives a CMOS input, then this will simply look like a digital indicator with no DC load current. But if the output drives something that pulls current, then the pull-up (or the pull-down) acts as a switch that makes or breaks that circuit. In this way it more resembles a reed switch.

Here’s one other possible source of significant confusion: this is not like the magnetometer you may have in your phone. Your phone mag, like most sensors, provides continuous readings of the ambient magnetic environment. The phone can go in and interrogate the value at any time. By contrast, this AMR sensor is digital: either on or off. You can’t go in and measure the actual field. So it’s unlike many other sensors out there. That on/off characteristic is what makes it appear to be a switch – and contributes to the sensor/switch confusion.
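
To make that distinction concrete, here’s a minimal behavioral sketch in Python – my own illustration, not Honeywell’s implementation, with made-up thresholds and units – contrasting a magnetometer you can poll for a field value at any time with a switch-style sensor that only ever exposes on or off.

    class Magnetometer:
        """Continuous sensor: interrogate it at any time and get a field value."""
        def __init__(self, read_field_fn):
            self._read_field = read_field_fn   # e.g., returns field strength in gauss

        def read(self):
            return self._read_field()


    class MagneticSwitchSensor:
        """Switch-style sensor: the outside world only ever sees on or off."""
        def __init__(self, read_field_fn, operate_threshold, release_threshold):
            self._read_field = read_field_fn
            self._operate = operate_threshold   # at or above this, output turns on
            self._release = release_threshold   # at or below this, output turns off
            self.output = False

        def update(self):
            field = self._read_field()          # internal only -- never exposed
            if not self.output and field >= self._operate:
                self.output = True
            elif self.output and field <= self._release:
                self.output = False
            return self.output                  # all you can observe from outside

The AMR part behaves like the second class: a field measurement happens inside, but the latched on/off output is all you can get at.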

So if you think of a reed switch as a switch that can be used as a sensor, then here you have a mag sensor that can be used as a switch.

By the way, that application I alluded to above? Apparently people were trying to monkey with electric meters using magnets to disrupt the metering. So AMR sensors (it takes two of them) are used to detect such anomalous magnets. Obviously, being in a meter, they have access to power, but it’s the power someone else is paying for, so it has to be tiny so as to be undetectable on their bill.
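
To put “tiny” in perspective, a back-of-the-envelope calculation (the 3 V supply and the electricity rate are my assumptions, not figures from the release): 360 nA at 3 V is about 1.1 µW. Running continuously for a year – roughly 8,760 hours – that’s on the order of 10 mWh, or 0.00001 kWh. Even doubled for the two sensors, at, say, 12 cents per kWh, that works out to a few ten-thousandths of a cent per year.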

You can read more about Honeywell’s part in their release.

Jun 10, 2014

Accelerometer Fingerprints

posted by Bryon Moyer

An interesting paper was published earlier this year by a team from the University of Illinois at Urbana-Champaign, the University of South Carolina, and Zhejiang University. In short, it says that the accelerometer in your phone could give you away even if you’ve locked all your privacy settings down tight.

The idea is based on the fact that each accelerometer is unique at the lowest level, having minor but detectable differences in waveform or harmonic content. To the extent that the characteristic resonance of an accelerometer can identify it uniquely (or nearly so), it acts as a signature.

This means that an app can “record” a phone’s accelerometer and store the resulting signature in the cloud for future reference. Some other app can later sample the accelerometer and send the sample to the cloud, where a search engine can match the signature and identify the phone. (This is the way music is identified these days, so there is clearly precedent that the search aspect is doable.)

“Unique” may actually be an overstatement from a purely scientific standpoint. As they point out, they haven’t done enough of a statistical sample to prove uniqueness over the many millions of phones out there, and they don’t have some theoretical model to suggest uniqueness. But they measured 36 different time- and frequency-domain features in 80 accelerometers, 25 phones, and 2 tablets and came away pretty convinced that there is something to pay attention to here.
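
For a flavor of what such a fingerprint might look like, here’s a rough sketch – my own ad hoc handful of features and a simple nearest-neighbor lookup, not the paper’s 36 features or its classifier.

    import numpy as np

    def accel_fingerprint(samples, sample_rate_hz):
        """Compute a few time- and frequency-domain features from one axis
        (or the magnitude) of a burst of raw accelerometer samples."""
        x = np.asarray(samples, dtype=float)
        mean, std = x.mean(), x.std()
        rms = np.sqrt(np.mean(x ** 2))
        # Higher-order moments capture subtle asymmetries in the waveform.
        skew = np.mean(((x - mean) / std) ** 3)
        kurt = np.mean(((x - mean) / std) ** 4)

        # Frequency-domain features from the magnitude spectrum.
        spectrum = np.abs(np.fft.rfft(x - mean))
        freqs = np.fft.rfftfreq(len(x), d=1.0 / sample_rate_hz)
        weights = spectrum / spectrum.sum()
        centroid = np.sum(freqs * weights)
        spread = np.sqrt(np.sum(((freqs - centroid) ** 2) * weights))

        return np.array([mean, std, rms, skew, kurt, centroid, spread])

    def match(candidate, stored):
        """Nearest-neighbor lookup against fingerprints recorded earlier,
        keyed by some device ID."""
        return min(stored, key=lambda dev: np.linalg.norm(candidate - stored[dev]))

One app computes the vector and uploads it; some other app later does the same, and the server runs the match against its database – essentially the scenario described above.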

They discuss the possibility of “scrubbing” the measurements by adding white noise or filtering, but each of the things they tried was either ineffective or too effective (that is, it affected how an application operated).
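
In code, the two scrubbing approaches might look something like this (parameters are illustrative, not values from the paper):

    import numpy as np

    def add_white_noise(samples, noise_std=0.05):
        """Mask device-specific quirks by adding Gaussian noise -- at the cost
        of degrading the data every legitimate app receives."""
        x = np.asarray(samples, dtype=float)
        return x + np.random.normal(0.0, noise_std, size=x.shape)

    def low_pass(samples, alpha=0.2):
        """Simple exponential smoothing as a stand-in for a low-pass filter;
        it strips out high-frequency 'texture', which may be exactly what
        some applications need."""
        x = np.asarray(samples, dtype=float)
        out = np.empty_like(x)
        out[0] = x[0]
        for i in range(1, len(x)):
            out[i] = alpha * x[i] + (1 - alpha) * out[i - 1]
        return out

The bind falls out directly: too little noise or too gentle a filter leaves the fingerprint recoverable, while too much degrades the readings that legitimate apps depend on.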

To me, it seems like there’s an abstraction problem here. A phone has a raw accelerometer followed by a conditioning circuit and a digitizer. Eventually a value is placed in a register for retrieval by an application. In a perfect world, all distortions and anomalies would be “filtered” out by the conditioning and the digitization so that what lands in the register has been purged of errors – making all accelerometers look alike. That’s a pretty high bar to set, but you’d think that, even if not perfect, it would at least get rid of enough noise to make a uniqueness determination infeasible.

Then again, as they point out, (a) it took 36 features to get uniqueness, and (b) if you couldn’t quite get there using just the accelerometer, you could also bring the gyro (et al) into the picture – effectively adding more features to the signature. So any policy of “cleanup” prior to registering the final value would have to be applicable (and actually applied) strategically across a number of sensors. In other words, some fortuitous solution related to how accelerometers are built would be insufficient, since it couldn’t be used on a gyro as well.

The only other obvious solution would be policy-based. You could restrict low-level access, but that would rule out apps needing high-precision readings. The OS could flag apps that need low-level access and ask permission, although presenting that request to a non-technical phone user could be a challenge. And the OS would have to actually check the program code to see if it does low-level access; relying on declarations wouldn’t work, since the concern here is specifically about sneakware, whose authors are not likely to volunteer what they’re about.

I’m curious about your thoughts on this. Are there other solutions? Is this much ado about nothing? You can read much more detail in the original paper, and then share your reactions.

May 28, 2014

KLA-Tencor’s New Reticle Inspector

posted by Bryon Moyer

Seems like no aspect of IC design and production escapes the need for All Things to Get Harder and Harder, requiring ever-better solutions. Today we look at reticle inspection, and, in particular, at KLA-Tencor’s efforts to adapt their Teron system, originally intended for mask shop use, to the needs of production fabs. The idea is that, when new reticles come into the fab, they need to be inspected as a basic QC step. And, after 300-600 or so uses, they need to be re-qualified to make sure that acquired defects aren’t reducing die yield.

One practical consideration is floorspace. The volume of reticles is increasing due, for example, to multiple patterning, which multiplies the number of reticles for some layers. 14-nm flows literally double the number of reticles as compared to 20 nm. No one wants to add more machines to handle the extra load; fab managers would rather increase the processing capabilities of the “space” currently allocated to inspection, placing an extra burden on the equipment.

So what kinds of defects are the inspection systems looking for? There are several, but haze seems to be a big one. Haze represents the slow deposition of chemicals – presumably from various other processing steps – onto the reticle. Obviously the best solution is to eliminate the sources of the haze, and progress has been made on that, but some remains – and, of course, it’s now harder to detect.

For one thing, it used to predominate in open spaces, where it’s easier to pick out. Now it tends to collect along the sides of features, making it harder to see. Also, because there’s less haze, you’re looking for smaller, more isolated defects than before, when a cloud-like collection would be more evident. The presence of optical proximity correction (OPC) features makes this harder, since they can be hard to distinguish from defects.

Other things to look for include evidence that the chrome, which makes up the actual pattern, is “migrating” – narrowing or flattening after cleaning – as well as simple “fall-on” defects that won’t fall off.

So how do you go about finding these things? There are a number of techniques, some of which work and some of which no longer do. In the end, a combination works best.

  • Simple optical inspection can be used, but it has to be “actinic” – that is, use the same wavelength of light that will be used during wafer patterning: 193 nm.
  • For repeating patterns, it used to be helpful to compare neighboring versions of the same feature. But that is less useful today because, even though the original layout of each cell may be the same, the OPC features may be different, so the cells are no longer identical on the reticle.
  • Production reticles often have more than one die instance, so it can be useful to compare neighboring dice on the reticle. But for leading-edge processes, single-die reticles are more common – as are shuttle wafer reticles, which have multiple dissimilar dice that can’t be compared. So this technique isn’t so useful anymore.
  • Modeling can help. KLA-Tencor generates models offline and uses them in real time to compare to what’s actually being seen.
  • KLA-Tencor also uses a technique that they consider to be one of their differentiating strengths: a “difference image.” They capture images of how light is transmitted through and reflected off the reticle. From each of those, they calculate what the other ought to look like. So, for instance, from the reflected image, they calculate what the transmitted image would be in the absence of any defects. And vice versa. They can then subtract the calculated versions from the observed versions – calculated transmitted vs. observed transmitted, and likewise for reflected – and use the differences to pinpoint defects. This is a compute-intensive operation that places a heavy load on the inspection equipment. (A simplified sketch of the idea follows below.)
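
Here’s a deliberately simplified sketch of that difference-image step. The prediction functions are placeholders for KLA-Tencor’s proprietary optical models, and the fixed threshold and the AND-combination of the two channels are my choices, not theirs.

    import numpy as np

    def difference_image_defects(obs_transmitted, obs_reflected,
                                 predict_transmitted, predict_reflected,
                                 threshold):
        """Flag pixels where observation and defect-free prediction disagree."""
        # From each observed image, compute what the *other* one should look like.
        calc_t = predict_transmitted(obs_reflected)    # expected transmitted image
        calc_r = predict_reflected(obs_transmitted)    # expected reflected image

        # Subtract calculated from observed, channel by channel.
        diff_t = np.abs(obs_transmitted - calc_t)
        diff_r = np.abs(obs_reflected - calc_r)

        # A real defect should leave a residue in both channels.
        defect_map = (diff_t > threshold) & (diff_r > threshold)
        return defect_map, np.argwhere(defect_map)

Run pixel by pixel across full reticle images at production throughput, it’s not hard to see where the compute load comes from.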

The processing power they’ve built into their just-announced Teron SL650 is intended to handle the inspection complexity with a high signal-to-noise ratio while still accommodating the increased number of reticles it needs to handle.


(Image courtesy KLA-Tencor)

You can find more on the new system in their announcement.
