posted by Bryon Moyer
We’ve covered a lot about sensors here before, and in the vast majority of cases, a sensor consists of a MEMS (or other) sensing element, an ASIC to clean up and digitize the signal, and then a series of registers where all the relevant data gets placed.
An outside entity, like a sensor hub, can then read those registers over a bus connection – typically I2C or SPI. What could be simpler?
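For the digital case, the read-back really is that simple. The sketch below shows the usual pattern of combining a sensor’s two output-register bytes into one signed 16-bit sample. The register addresses and the MockBus stand-in are invented for illustration – real code would use an I2C library such as smbus2 and the addresses from the sensor’s datasheet:

```python
# Minimal sketch of a hub reading a sensor's 16-bit output registers
# over I2C. Register addresses and MockBus are hypothetical.

REG_OUT_MSB = 0x01  # hypothetical high-byte output register
REG_OUT_LSB = 0x02  # hypothetical low-byte output register

class MockBus:
    """Stands in for an I2C bus; returns canned register contents."""
    def __init__(self, regs):
        self.regs = regs
    def read_byte_data(self, addr, reg):
        return self.regs[reg]

def to_signed16(msb, lsb):
    # Combine two bytes into a signed 16-bit sample (two's complement).
    raw = (msb << 8) | lsb
    return raw - 0x10000 if raw & 0x8000 else raw

def read_sample(bus, addr):
    msb = bus.read_byte_data(addr, REG_OUT_MSB)
    lsb = bus.read_byte_data(addr, REG_OUT_LSB)
    return to_signed16(msb, lsb)

bus = MockBus({REG_OUT_MSB: 0xFF, REG_OUT_LSB: 0x38})
print(read_sample(bus, 0x1D))  # 0xFF38 reads back as -200
```

On real hardware the two bytes would typically be read in a single burst transaction so they can’t straddle a register update.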
Well, I guess an analog output could be simpler: you eliminate all of that messy digital stuff. But it seems to me that running an analog signal halfway across town to the analog inputs of a microcontroller (aka MCU, or whatever hub is used) would risk seriously degrading the analog value in a way that wouldn’t happen with a digital signal.
Image courtesy Freescale.
I asked Freescale about this, and they justify it based on the wide variety of digital interfaces in use, in particular in industrial settings. Heck, they say that even CAN bus is leaving the confines of vehicles and moving into other applications.
Freescale makes lots of microcontrollers. This variety of MCUs partly reflects the diversity of interfaces they may talk to: Rather than having one large unit with all possible interfaces, they offer different devices. And yes, they’re assuming (or at least hoping) that you’ll be using their MCU.
So the idea goes like this: first off, you simply don’t run the analog signals halfway across town. In these applications, an MCU is likely to be right nearby. (If not, then you want to move it so that it is.) The MCU you choose will then reflect whatever bus you’re using, and that’s where you go digital. They prefer this, obviously, to maintaining a bunch of different versions of the sensor to suit the various digital protocols.
There’s one other convenient thing about digital registers, however: they’re good at holding values while the rest of the system goes to sleep for a while to reduce power. Well, apparently these analog outputs can manage the same trick. The internal electronics shut down between samples, but the output is held at its last value. This decouples the rate at which the MCU samples the analog output from the rate at which the sensor samples its environment, and allows current draw as low as 200 µA when running.
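That held output behaves like a zero-order hold. The toy model below shows the decoupling: the sensor updates at its own rate, while the MCU polls at a different rate and always sees the most recently held value. The rates and values are invented for illustration:

```python
# Toy model of a held (zero-order-hold) analog output: the MCU's poll
# rate is independent of the sensor's own sample rate.
def held_output(sensor_times, sensor_values, mcu_times):
    """Return the value the MCU reads at each poll time: the latest
    sensor sample taken at or before that time."""
    readings = []
    i = -1  # index of the most recent sensor sample seen so far
    for t in mcu_times:
        while i + 1 < len(sensor_times) and sensor_times[i + 1] <= t:
            i += 1
        readings.append(sensor_values[i] if i >= 0 else None)
    return readings

# Sensor updates every 100 ms; MCU polls every 30 ms.
sensor_t = [0.0, 0.1, 0.2]
sensor_v = [1.0, 2.0, 3.0]
mcu_t = [0.0, 0.03, 0.06, 0.09, 0.12, 0.15, 0.18, 0.21]
print(held_output(sensor_t, sensor_v, mcu_t))
```

The MCU sees each held value repeated until the sensor wakes up and takes its next sample, which is exactly what lets the sensor sleep in between.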
That’s how they see it; if you see it differently, then your comments are encouraged below.
posted by Bryon Moyer
Heat has got to be one of the most annoying side-effects of doing useful electrical work. The more work we do, the more things heat up, changing the characteristics of the circuitry and, if we’re not careful, leading to early end-of-life or outright failure.
Heat is part of why we’ve gone to multicore instead of simply ratcheting up microprocessor clock frequencies forever. Greater dissipation is one reason we end up with power transistors that are larger than electrical considerations alone would require. And when 3D ICs were first trotted out as an idea some years back, one of the immediate questions was how heat would be removed from the center of the stack.
We do lots of things to mitigate heat: elaborate cooling systems, heat spreaders in packages, and modified silicon designs to reduce thermal density. All of which add cost in one way or another.
Well, for one application, a different solution has been proposed. Gallium nitride (GaN) is a wide-bandgap material used for high-electron-mobility transistors (HEMTs) in high-power RF applications – radar, cellular base station radios, satellite radios, and the like. The GaN typically sits over a silicon substrate, with a transition layer to ease stresses due to mismatches in the crystal lattice spacing of the two materials.
These circuits have localized hot spots that have to be carefully managed (with heat flux that Element Six says rivals that of the sun). Metal is typically used to wick away heat, and we all know that copper is a good conductor of heat, topping out at about 400 W/mK. But we have looked at one material that is a far better heat conductor than copper: diamond. Diamond can conduct heat in the range of 1000-2000 W/mK.
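A back-of-the-envelope 1D conduction calculation shows what those conductivity numbers buy you: the temperature drop across a slab is dT = q·t/k for heat flux q, thickness t, and conductivity k. The flux and thickness below are illustrative guesses, not Element Six’s numbers:

```python
# 1D conduction: temperature drop across a slab of thickness t,
# conductivity k, carrying heat flux q. Numbers are illustrative.
def delta_t(q_w_per_m2, thickness_m, k_w_per_mk):
    return q_w_per_m2 * thickness_m / k_w_per_mk

q = 1e8      # hot-spot heat flux in W/m^2, illustrative
t = 100e-6   # a 100-micron-thick spreader layer

for name, k in [("copper", 400.0), ("diamond", 1500.0)]:
    print(f"{name}: {delta_t(q, t, k):.1f} K drop")
```

Same geometry, same flux: the copper slab drops 25 K while a mid-range diamond slab drops under 7 K, which is the whole appeal.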
Unlike copper, which uses electrons to conduct the heat away, diamond does so through vibrations of the crystal lattice – so-called phonons (a quasiparticle used to analyze crystal vibrations and how they propagate). So higher-quality crystals will spread heat better than high-defect crystals or polycrystalline depositions.
Element Six does sell diamond heat spreaders that can be included under standard GaN/silicon or GaN/SiC (silicon carbide) circuits, and they’ll help, but they place the diamond material some hundreds of microns away from the transistor gate, where the heat originates.
A better solution, they say, is to have a transistor consisting of GaN on a diamond substrate rather than a silicon substrate. The standard transition layer between silicon and GaN is also a barrier to a conductive path from gate to substrate, so they’ve eliminated that as well, replacing it with their own “secret sauce” of a transition layer.
By doing this, the transistor gate is now only about 1 micron from the diamond, roughly tripling the heat dissipation.
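A crude series-resistance model gives a feel for why substrate conductivity matters so much once the transition-layer barrier is gone. Each layer contributes R = t/(k·A); the thicknesses, conductivities, and hot-spot area below are all invented for illustration, and a 1D model like this ignores lateral spreading, so it only shows the direction of the effect, not Element Six’s actual numbers:

```python
# Crude 1D series thermal resistance: R = t / (k * A) per layer.
# All dimensions and conductivities are illustrative guesses.
def r_layer(t_m, k_w_per_mk, area_m2):
    return t_m / (k_w_per_mk * area_m2)

A = 100e-12  # 10 um x 10 um hot spot, illustrative

# GaN on silicon: ~1 um GaN over a ~150 um silicon substrate.
r_gan_si = r_layer(1e-6, 130.0, A) + r_layer(150e-6, 150.0, A)

# GaN on diamond: same GaN, but a diamond substrate instead.
r_gan_diamond = r_layer(1e-6, 130.0, A) + r_layer(150e-6, 1500.0, A)

print(f"improvement: {r_gan_si / r_gan_diamond:.1f}x lower resistance")
```

Even this toy model shows the substrate dominating the gate-to-heatsink path, so swapping silicon for diamond pays off directly.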
Upper image courtesy Element Six; graph credit Professor Martin Kuball, Bristol University
Their actual production process leverages GaN/Si layers already in production. They put a handle wafer on top, flip them over, remove the silicon substrate and the transition layer, and then add their own transition layer and grow a polycrystalline diamond substrate. That substrate is strong, but it’s not thick enough for fab handling, so they temporarily affix another diamond wafer, which is eventually removed and re-used up to 10 times. (They’re working on a cheaper handle wafer solution for this last bit.)
GaN on Diamond let TriQuint and Raytheon achieve a three-fold improvement in power density as compared to GaN/SiC, meeting a challenge set by DARPA.
You can read more about the Raytheon achievement in their announcement.
posted by Bryon Moyer
Yesterday we looked at a number of different ways of inspecting wafers. Such inspections can be an important part of a process that turns out high yields of high-quality chips. They serve a couple of roles in this regard.
The most obvious is that you catch faulty material early. If rework is possible, you can then rework it; if not, well, you don’t throw good processing money after bad.
But the other reason is probably more important: by looking at wafers at various monitoring points, you get a sense of how the equipment is working. The wafer results act as a proxy for machine monitoring.
So… what if you could measure the machine directly?
That’s what CyberOptics is doing using an in-situ approach that they say is complementary to wafer inspection. They create “fake” wafers outfitted with sensors and feed them into the equipment. The equipment thinks they’re normal wafers and processes them; the sensors measure selected aspects of the setup and report back wirelessly in real time.
And they claim to be the only ones with this real-time capability. They say other approaches require manual “timestamping” of data that’s downloaded and analyzed after the processing is over. The Bluetooth connection to a nearby rolling host computer allows the data to be transmitted as it’s captured.
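The contrast between the two collection styles can be sketched simply: stamping each reading the moment it’s taken, versus downloading a bare buffer afterward and reconstructing times from an assumed start and rate. Everything in this sketch is invented for illustration:

```python
# Toy contrast of the two data-collection styles: real-time stamping
# at capture versus post-hoc reconstruction from an assumed rate.
import time

def stream_capture(samples):
    # Real-time style: each reading is stamped as it is taken.
    for s in samples:
        yield (time.time(), s)

def batch_reconstruct(samples, start_time, period_s):
    # Post-hoc style: times are inferred from an assumed start and
    # period, so any drift or pause in the tool is invisible.
    return [(start_time + i * period_s, s) for i, s in enumerate(samples)]

print(batch_reconstruct([0.1, 0.2, 0.3], start_time=0.0, period_s=0.5))
```

The reconstructed version looks tidy, but any hiccup in the tool’s actual timing is silently papered over, which is presumably CyberOptics’ point.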
They have setups for measuring air particles; for leveling; gap measurement (used with thin-film deposition, sputtering, etc.); vibration measurement; and a “teaching” system that improves alignment.
Most recently they’ve announced new air particulate measurement platforms: a reticle version, which replaces not the wafer but the reticle in a lithography tool, and a smaller wafer version – 150 mm (6”, roughly). That last one might seem odd, since they say they’ve already got a 450-mm version, and bigger ones usually come later. But in this case, they had to reduce the size of the sensing and electronics to fit the smaller form factor.
Images courtesy CyberOptics
You can read more in their announcement.