posted by Bryon Moyer
I recall the few times I was able to take advantage of noise-cancelling headphones on an airplane. Once they were on your ears and you switched them on, the background hiss of the airplane gradually faded away. It took a few seconds for this to happen.
My assumption was that this was a slow integration problem, and that only long-term constant sounds could be cancelled out; the circuitry simply wasn’t fast enough to eliminate short, sharp sounds. (Which is probably good, since you certainly wouldn’t want it to cancel out important flight attendant messages, like the fact that you can get a great deal on a credit card or that duty free is now available).
This means, of course, that such headphones wouldn’t solve the “cocktail party” problem: isolating one voice and dimming the others, something our ears and brain somehow manage effortlessly.
Solving that would be particularly nice on our phones; as Cirrus Logic points out, if you use the phone in a bar, all voices go through, not just yours.
Of course, headphones on an airplane don’t include a microphone. With a phone, even if you had super-fast algorithms that could cancel short, bursty noises, you’d need to avoid cancelling out the person speaking into the phone. That would kind of defeat the purpose.
Cirrus recently announced some new devices dedicated to improving phone sound, and noise reduction and cancellation are part of it. Phones are moving to multiple microphones to figure out which sounds to suppress, but the audio guys have a challenge in that they don’t get much influence over where those microphones go. So Cirrus is trying to be as adaptive as possible.
They claim that most other audio chips are pre-optimized and fixed, while, by contrast, they dynamically adjust their noise reduction/cancellation to adapt both to the phone and to the specific sound environment. And their noise reduction applies to both ends of the conversation – the voice at the phone and the voice at the other end of the line. (And if you think that solving the noise at one end makes solving it at the other unnecessary, you haven’t listened to cell phones much. Although apparently cellular systems are moving to wideband voice, so that when the voice isn’t dropping out, it will sound great.)
The other bit that caught my ear was their approach to voice recognition and control. This gets to the always-on problem: if your phone is going to be voice activated, then you want that to work without your having to turn the phone on first. If the phone goes completely to sleep, then this won’t work.
But having the phone on all the time kills the battery. So Cirrus has a three-step wake-up routine. A low-power block listens to determine whether there’s a significant sound. If so, it wakes the next block, which determines whether the sound is noise or a voice. If it’s a voice, the third block wakes up and does two things in parallel: it decodes the command and decides whether the speaker is an authorized voice. If it’s not an authorized voice, then the phone automatically responds, “You’re not the boss of me!” and goes back to sleep with a righteous pout.
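Ignoring the pout, the cascade itself is easy to sketch. The stage names, detectors, and return values below are made up for illustration – the point is just that each stage gates the next, more power-hungry one:

```python
def wake_pipeline(frame, is_sound, is_voice, is_authorized, decode):
    """Three-stage low-power wake-up cascade (illustrative only;
    not Cirrus's actual design).

    Each stage runs only if the cheaper stage before it fires,
    so the expensive blocks stay asleep most of the time.
    """
    if not is_sound(frame):       # stage 1: cheap energy detector
        return "asleep"
    if not is_voice(frame):       # stage 2: noise-vs-voice classifier
        return "asleep"
    # stage 3: decode the command and check the speaker
    # (conceptually in parallel on real hardware)
    command = decode(frame)
    if not is_authorized(frame):
        return "rejected"         # not the owner's voice
    return command

# Example with trivial stand-in detectors:
result = wake_pipeline(
    frame="...",
    is_sound=lambda f: True,
    is_voice=lambda f: True,
    is_authorized=lambda f: True,
    decode=lambda f: "call mom",
)
```

With all three stages passing, `result` is the decoded command; any earlier stage failing short-circuits the rest, which is where the power savings come from.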
OK, maybe not quite like that… that might be a cool feature, though, in case you product planning guys are listening…
Anyway, you can find more details in their announcement.
posted by Dick Selwood
Later this week I will be reporting on the embedded world conference, where the Internet of Things was the major topic. Just before embedded world was Mobile World Congress, which has become as big a circus as the Consumer Electronics Show. There again the Internet of Things was a huge topic.
Today CeBIT opens in Hannover. Once just a specialist computing exhibition and conference, spinning off from the massive industrial exhibition of the Hannover Fair, it too has become enormous. And last night at the opening of the fair by Chancellor Merkel of Germany, the British Prime Minister, David Cameron, announced that the British Government is going to invest £73 million (around $120 million) in research in areas linked to the Internet of Things.
Perhaps not entirely by coincidence, several of the heavyweight British Sunday papers devoted several pages to explaining the Internet of Things to their readers. With all this hype, it has the appearance of being another tech bubble. But it would be wrong to dismiss it as that. Whatever the public gesturing, interconnectivity, remote access to monitor and control domestic appliances, and all the other things that are pouring into the Internet of Things soup form a trend that is not going to be reversed. The job for engineers is surely to make sure that, as these things come together, they are secure, safe, and reliable.
posted by Bryon Moyer
The focus of the directed self-assembly (DSA) discussion at SPIE Advanced Litho has changed. In past years, it has felt more like the efforts were largely about corralling this interesting new wild thing, or even seeing if it could be corralled.
Well, this year it felt more like it’s in the corral, but there’s lots more training to do to make it a well-behaved showhorse. The focus is now on manufacturability. What are the tweaks and changes needed to turn this into a reliable, predictable process?
We’ve covered the basics of DSA before, but part of its distinctive character is in the sensitivity of the self-assembly process to subtle effects. Figuring out what matters and what doesn’t – and how it might be made more robust – is part of what’s going on now.
The biggest topic is defectivity. The desired pattern will be consistent rows or dots; the opposite of that is the fingerprint nature of a randomly ordered pattern. The defects you might see now are such that you might mostly have, say, parallel lines, but occasionally you’ll have one line meandering over to another, or some such hint of the latent fingerprint. Because these defects are different from those you might be used to with more traditional lithography techniques, work is still needed to characterize and measure specific defectivity modes.
You may recall that there are two versions of DSA: chemo-epitaxy and grapho-epitaxy. The former embeds a guide pattern underneath (kind of like damascene-style) where the block copolymers (BCPs) will go; that pattern has a chemical affinity for one of the two polymers, thereby guiding the final pattern. The latter sets up guides in the same plane as the BCP film; it becomes a physical rather than chemical guide. (I saw these guiding lines, typically simply called “guides,” referred to as “weirs” in one presentation).
One trend I noticed was that several presenters saw grapho-epitaxy being preferred for contact holes (often abbreviated simply as C/H), while chemo-epitaxy would be preferred for lines and spaces (L/S). One possible reason is that the grapho-epitaxy guides take up space – no lines can go where they are – whereas no such space is lost with chemo-epitaxy.
There was, however, one presentation where grapho-epitaxy was used for L/S, and they over-etched the guides to make them the same width as the final intended lines. In that way, rather than getting in the way of the lines, they actually became lines.
Another trend with grapho-epitaxy is to “brush” various surfaces to further bias the self-assembly. This brushing involves a light coating of a material with an affinity for one of the two BCPs. (It’s kind of a blending of chemo- and grapho- concepts.) Further refinement is such that the sides of the guides would remain brushed, while the bottom surface would be rinsed clear.
This is all about “wetting,” a common thread in a number of presentations. It was not unusual for defects to involve a failure of the BCP material to coat all the surfaces properly; you might end up with voids. This becomes difficult to inspect, since it’s a so-called “3D” defect. In the case of a contact hole, for example, the hole might look great at the surface, but might not have cleared properly at the bottom, and this wouldn’t be apparent from a standard inspection. Better wetting helps this dramatically.
Less intuitive is how the BCPs react to such brushing. My assumption would have been that a material with affinity for, say, polystyrene (PS) – one of the components of the most common BCP, linked with PMMA (polymethyl methacrylate) – would cause the PS to position itself alongside that brushed surface, with the PMMA distancing itself from it. But one presentation seemed to indicate the opposite. I talked to the presenter, and he indicated that with a PS-affinity brush, it would actually be the PMMA that would position itself there. Doesn’t quite match my intuitive sense of “affinity,” but then again, this isn’t an area where I trust my intuition.
Also under investigation are different BCPs – in particular, so-called high-χ materials (that’s the Greek letter chi, pronounced “kai” in American English). χ, as far as I can tell, is a measure of the energy difference between the two blocks of the copolymer. The higher that difference, the more the two materials repel each other. Presumably that makes the self-assembly happen, oh, how to say… with a greater sense of purpose – less wishy-washiness.
But it can take a lot of time to experiment with various random combinations of materials. As an alternative, folks are investigating the mixing of other materials into the more common BCPs that have already been studied. This lets them tune the period, which, in some cases, can be predicted linearly with the weight proportion of the additive. It also allows for thicker layers of the BCP film, which helps the manufacturability. Indications were that the process window isn’t compromised by these additives.
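Where the period really does scale linearly with additive loading, the tuning model is a one-line interpolation. The coefficients below are placeholders, not measured values from any presentation:

```python
def tuned_period(base_period_nm, slope_nm_per_wt_pct, additive_wt_pct):
    """Linear model of BCP period vs. additive loading.

    base_period_nm:       natural period of the unmodified BCP
    slope_nm_per_wt_pct:  period shift per weight-percent of additive
    additive_wt_pct:      additive loading in weight percent

    All numbers here are illustrative placeholders.
    """
    return base_period_nm + slope_nm_per_wt_pct * additive_wt_pct

# e.g. a 28 nm natural period shifted by +0.4 nm per wt% additive
print(tuned_period(28.0, 0.4, 5.0))  # 30.0
```

In practice the slope would be fit from experiments on the well-characterized BCP plus additive, which is much cheaper than qualifying an entirely new polymer pair.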
The chemical mixtures also affect the processing time. Looked at simplistically, you lay down a uniform mixture of the BCP material and then bake, or anneal, it. During that bake, the molecules diffuse through each other to separate out. That rate of diffusion determines how long a bake is needed, which determines throughput. One paper had reduced a 30-minute bake to 2 minutes. Once the bake is complete and the temperature lowered, the resulting pattern is “frozen” in place.
Presumably, during the bake, you’ve got an initial surge of diffusion as the self-assembly proceeds, which would slow down as the process completes. Timing the bake is critical, since if it’s too short, there will still be lots of molecules that might not have found their final resting place. This would likely vary considerably from lot to lot, so the bake has to be long enough to get past that with plenty of margin for repeatability. Playing with the diffusivity of the materials helps to tune – and hopefully minimize – this bake time.
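A back-of-envelope way to see why diffusivity sets the bake time is the classic diffusion scaling t ≈ L²/D: the time for molecules to migrate a characteristic distance L grows as L squared and shrinks with the diffusivity D. The numbers below are placeholders, chosen only so the ratio matches the 30-minutes-to-2-minutes improvement mentioned above:

```python
def bake_time_estimate(length_nm, diffusivity_nm2_per_s):
    """Crude diffusion-time scaling, t ~ L^2 / D.

    A back-of-envelope argument, not a process model; both
    arguments are illustrative placeholders.
    """
    return length_nm ** 2 / diffusivity_nm2_per_s

# A ~15x boost in diffusivity cuts the bake from 30 min to 2 min
t_slow = bake_time_estimate(30.0, 0.5)  # 1800 s = 30 min
t_fast = bake_time_estimate(30.0, 7.5)  # 120 s  =  2 min
```

The same relation shows why "playing with the diffusivity of the materials" is the lever: for a fixed pattern period, the only way to shorten the anneal is to make the molecules move faster.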
It’s also important to note that much of the work has been done die-by-die. How uniform these processes will be over an entire 300-mm wafer is still an open question, and it’s the focus of further work.
As with other novel lithography techniques, resist and line-edge roughness are important; we’ll talk about those in a future post. In addition, DSA also has some interesting implications for EDA; stay tuned for more on that.
I’d refer you to further materials, but this show works differently in that proceedings aren’t available until weeks after the event. So all I could take away from it were my notes. Which I won’t refer you to since there’s no way you could read my writing.