Mar 24, 2015

Synopsys and Leading-Edge Litho

posted by Bryon Moyer

While wandering the halls of SPIE Advanced Litho, I had a conversation with Synopsys’s Tom Ferry about their focus for the leading edge of lithography.

He addressed several areas, many of which reflect progress on existing notions. Compact models are getting less… well, compact as compared to so-called “rigorous” models. Given the number of effects to be covered, Synopsys is moving to rigorous models to improve predictability.

They’re also making progress on DSA support as well as reducing mask write times. The topic that’s a bit different, however, is inverse lithography technology (ILT). While not new, it’s always been too compute-intensive (read “expensive”) for commercial use. That’s changing both as its value grows at aggressive nodes and as computing capabilities improve.

Some background: for years now, we’ve been adorning our masks with little features (called “assist features,” or AFs) that will never print. Which is good, because we don’t want them on the actual wafer. What they do is monkey with the light coming through other features of the mask to make the printed versions of those other features sharper or less distorted by the craziness that light undergoes at these tiny dimensions.

This is the domain of optical proximity correction, or OPC. The approach has typically been empirical: sophisticated heuristics, based on what we know works and doesn’t, are built into algorithms that scour a mask for opportunities to improve fidelity.

But there’s a notion that has been floating around for a while and has, until now, remained academic. The process of exposing a wafer converts the mask pattern to a wafer pattern. You can think of this as a mathematical function – I’ll call it E (for Exposure). If the mask pattern is M and the resulting wafer pattern is W, then W = E(M).

The idea is to start with the ideal W and work backwards to find the corresponding ideal M – which amounts to finding the inverse function of E (call it E⁻¹). So then M = E⁻¹(W). It’s kind of like pre-equalization.
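
To make that concrete, here’s a toy sketch in Python – my own illustration, not Synopsys’s model – where E is a Gaussian blur standing in for the optics followed by a smooth threshold standing in for the resist. Feed the ideal pattern in directly as the mask, and the printed result comes out softened, which is exactly why you’d want to pre-distort M:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def exposure(mask, blur_sigma=2.0, threshold=0.5, steepness=25.0):
        """Toy forward model E: optics as a Gaussian blur, resist as a
        smooth threshold. Returns W = E(M) on a pixel grid."""
        aerial_image = gaussian_filter(mask.astype(float), blur_sigma)
        # The sigmoid stands in for the resist's develop/no-develop behavior.
        return 1.0 / (1.0 + np.exp(-steepness * (aerial_image - threshold)))

    # A crude "ideal" layout: one narrow line on a 64x64 pixel grid.
    target = np.zeros((64, 64))
    target[:, 30:34] = 1.0

    # Using the target itself as the mask shows why correction is needed:
    # the printed edges come out soft relative to the ideal pattern.
    printed = exposure(target)
    print("worst-case pixel error with an uncorrected mask:",
          round(float(np.abs(printed - target).max()), 3))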

There are a couple of practical problems, however. There are some parts of the E function (which includes exposure dose, focus, photoresist, development, washing, etc., etc.) that may be well understood from a practical standpoint, but not to the point of being able to derive a mathematical function for them.

But here’s the other thing: according to a 2006 paper from Luminescent Technologies (a company since acquired by – I’ll bet you already guessed it – Synopsys, in 2012), there is no single unique inverse of E. From a practical standpoint, that’s saying that there are different ways to decorate a mask with these unprintable features and essentially get the same result. And some solutions may be easier or harder to manufacture than others.

So that sort of nukes the idea of a single E⁻¹. Instead, you can look at the difference between a given wafer pattern W and the ideal desired wafer pattern – let’s call it W+. They turn this into an optimization problem where they minimize (W+ − W) while also taking into account various other costs – like ease of manufacturing.

Critically, this is a “pixel-based” solver approach rather than an edge- or feature-based approach. This apparently widens the possible solution space, allowing for results that might not be at all obvious or intuitive – even to an experienced litho dude.
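
Sticking with the toy model above (again, my own sketch, not Synopsys’s solver), a pixel-based formulation treats every mask pixel as a free variable and runs gradient descent on the printed-versus-ideal error, plus a small penalty that nudges pixels toward 0 or 1 as a stand-in for manufacturability costs:

    import numpy as np
    from scipy.ndimage import gaussian_filter

    SIGMA, THRESH, K = 2.0, 0.5, 25.0      # same toy optics/resist as above

    def exposure(mask):
        return 1.0 / (1.0 + np.exp(-K * (gaussian_filter(mask, SIGMA) - THRESH)))

    def ilt_optimize(target, steps=300, lr=0.2, reg=0.01):
        """Pixel-based ILT sketch: minimize |E(M) - W+|^2 plus a term
        that favors near-binary masks, by plain gradient descent."""
        mask = target.astype(float).copy()        # start from the ideal layout
        for _ in range(steps):
            printed = exposure(mask)
            err = printed - target                # (W - W+), pixel by pixel
            # Chain rule back through the sigmoid and the (symmetric) blur;
            # this is only approximate near the array boundary.
            grad = gaussian_filter(2.0 * err * K * printed * (1.0 - printed), SIGMA)
            grad += reg * (4.0 - 8.0 * mask)      # gradient of the 4*m*(1-m) penalty
            mask = np.clip(mask - lr * grad, 0.0, 1.0)
        return mask

    target = np.zeros((64, 64))
    target[:, 30:34] = 1.0
    corrected = ilt_optimize(target)
    print("error, uncorrected mask:", round(float(np.abs(exposure(target) - target).max()), 3))
    print("error, ILT mask:        ", round(float(np.abs(exposure(corrected) - target).max()), 3))

Real ILT has to fold in dose and focus variation, resist chemistry, mask rules, and much more, but the shape of the problem – optimize every pixel against a forward model – is the same.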

In addition, it turns out the problem can be fractured so that distributed computing can make the solution tenable. That’s not to say, however, that it’s easy – it’s still a bear to do. So they’re targeting only the most challenging geometries with this, integrating it into a general OPC flow, where the old methods are still used where possible.
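
As a generic illustration of the fracturing idea (not Synopsys’s actual infrastructure), the sketch below cuts a layout into tiles with an overlapping “halo” – so that neighboring features still influence each tile’s solution – hands the tiles to a pool of worker processes, and stitches the interiors back together. The per-tile solver here is just a placeholder; in a real flow it would be the expensive ILT step.

    import numpy as np
    from concurrent.futures import ProcessPoolExecutor

    TILE, HALO = 32, 8                     # tile size and overlap, in pixels

    def solve_tile(job):
        """Placeholder for the expensive per-tile ILT solve; it just returns
        its input so the distribution plumbing can be tested on its own."""
        tile, row, col = job
        return row, col, tile

    def fractured_solve(layout):
        """Cut the layout into overlapping tiles, solve them in parallel,
        and keep only the interior of each solved tile when stitching."""
        h, w = layout.shape
        jobs = []
        for r in range(0, h, TILE):
            for c in range(0, w, TILE):
                r0, c0 = max(r - HALO, 0), max(c - HALO, 0)
                r1, c1 = min(r + TILE + HALO, h), min(c + TILE + HALO, w)
                jobs.append((layout[r0:r1, c0:c1], r, c))
        out = np.zeros_like(layout)
        with ProcessPoolExecutor() as pool:
            for r, c, solved in pool.map(solve_tile, jobs):
                rr = 0 if r == 0 else HALO       # skip the halo rows/columns
                cc = 0 if c == 0 else HALO
                out[r:r + TILE, c:c + TILE] = solved[rr:rr + TILE, cc:cc + TILE]
        return out

    if __name__ == "__main__":
        layout = np.random.rand(128, 128)
        assert np.array_equal(fractured_solve(layout), layout)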

Synopsys’s approach is to

  • Use ILT before OPC for optimizing rule-based assist features (RBAF)
  • Use ILT at the cell level where OPC has convergence problems
  • Use ILT to address any hotspots after OPC.

[Figure: ILT]

You’ll see this at the 10-nm and below nodes; Synopsys has this close to being ready for prime time. (I know, you might think that 10 nm is a long way from prime time, but the folks developing these nodes need to be able to create masks…)

Mar 23, 2015

Multicore Microcontrollers for IoT and Audio

posted by Dick Selwood

XMOS has, from its base in Bristol, England, been quietly building itself into a new force in the embedded market. The company has been shipping its xCORE multicore microcontrollers to a wide range of companies around the world, and has built a particularly strong position in audio. Last summer the company announced that it had raised £26.2 million from Robert Bosch Venture Capital, Huawei Technologies, and Xilinx.

Now it is announcing a new generation of xCORE, the xCORE-200, and a product specifically for the high-resolution audio market, xCORE-Audio.

The xCORE-200 is targeted at the Internet of Things, with Gigabit Ethernet joining USB 2.0 and high-performance general I/O, improved performance (2000 MIPS in the launch device), and increased memory. The launch also brings an upgraded tool suite and improved libraries, and code written for earlier xCORE products will run on the xCORE-200. Deterministic real-time operation makes the family suitable for a wide range of data acquisition, networking, HMI, and other applications.

The success XMOS has had in audio provided the impetus for two families optimised for this field. The xCORE-Audio/Hi-Res is for consumer audio and video, including stereo high-resolution headphone amplifiers, while the xCORE-Audio/Live addresses prosumer and professional audio, such as DJ kits, mixing, and conferencing. Both work with a range of operating systems and support a variety of audio formats and interfaces, including USB Type-C.

DSP support in the xCORE-Audio/Hi-Res is aimed at applications like surround sound and karaoke, while that in the xCORE-Audio/Live is aimed at audio mixing and post-processing pipelines. And the company points out that pricing starts at less than $2.00 in high volume.

An in-depth report on this will follow.

Mar 19, 2015

Microsemi Moves GNSS Indoors

posted by Bryon Moyer

Much of the cellular build-out in areas that already have coverage is happening through small cells. It’s like we’ve gotten the broad brush strokes in place; now we’re fine-tuning coverage and capacity here and there as needed.

And much of this is happening in buildings – malls, office buildings, and other areas where large numbers of people concentrate.

Which creates a problem: these cells rely on accurate timing from GPS (or GNSS, generically). And, as we’ve seen in our discussions of indoor navigation, GPS isn’t a thing indoors. At least, not for your average receiver.

So what happens is, well, exactly what you’d expect: you put an antenna on the building to receive the GPS signal. That involves getting power up there and then distributing the received signal via coax.

That might not seem like much of a burden for those of you accustomed to setting up a TV satellite dish for your home. But, apparently, this is a bigger deal with big buildings. Running those bulky, shielded wires around isn’t trivial. And, apparently, the operator may even have to rent the space on the roof where the antenna goes. Oi, everyone with their hand out!

So Microsemi has come up with an alternative. They call it an integrated GNSS master – IGM. It will provide the master timing signal for the small cells installed in the building. It’s designed to be installed indoors.

“But there is no GPS signal indoors,” you might reasonably protest. Well, apparently there is – it’s just not a strong signal. (OK, I’m sure you can find places where the signal is pretty much gone. So… yeah, the Panic Room is probably not a good place to mount this. Although… read on…) How do they capture this signal?

First, they have a very sensitive receiver. They also take advantage of assisted GNSS (A-GNSS), which covers a broad range of alternative ways of getting GNSS information: some of it is sent over Ethernet; some is pre-calculated and delivered ahead of time; etc. Combined with whatever live GPS signal the receiver can detect – through what we might call “signal fusion,” by analogy with sensor fusion – this assistance allows the IGM to function indoors. It also improves the time-to-first-fix.
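
Just to illustrate the flavor of that fusion – this is emphatically not Microsemi’s algorithm, and the numbers below are made up – here’s the textbook way to combine noisy estimates of the same time offset, weighting each by its confidence:

    def fuse_time_estimates(estimates):
        """Inverse-variance fusion of independent estimates of one quantity.
        estimates: list of (offset_seconds, variance_seconds_squared)."""
        weights = [1.0 / var for _, var in estimates]
        fused = sum(w * t for (t, _), w in zip(estimates, weights)) / sum(weights)
        return fused, 1.0 / sum(weights)

    # Hypothetical inputs: a noisy indoor GNSS time fix plus coarser
    # network-delivered assistance. The fused estimate is tighter than either.
    live_gnss  = (42.0e-9, (100e-9) ** 2)   # 42 ns offset, ~100 ns sigma
    assistance = (10.0e-9, (200e-9) ** 2)   # 10 ns offset, ~200 ns sigma
    offset, var = fuse_time_estimates([live_gnss, assistance])
    print(f"fused offset {offset * 1e9:.1f} ns, sigma {var ** 0.5 * 1e9:.1f} ns")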

“But you still have to route power and signals,” you might continue to protest. Well, yes and no. There’s no clunky coax: it’s Ethernet. And the unit leverages Power over Ethernet (PoE). So once you’ve plugged the Ethernet cable in, you’re good to go. Much easier to wire; no conduit or high voltages to muck about with.

[Figure: Microsemi Integrated GNSS Master (IGM) diagram]

(Image courtesy Microsemi)

Thinking ahead, could this be leveraged for indoor navigation? That’s not Microsemi’s immediate plan, but they say that, in principle, it could.

You can read more in their announcement.
