Nov 11, 2013

Going Expensive to Reduce Interposer Cost

posted by Bryon Moyer

Imec has been working on 2.5D IC issues with a particular focus on optimizing costs and, especially, yields. Yield can take what might have been straightforward-looking cost numbers and make things not so clear.

In their work on interposers, Eric Beyne took a look at three different ways of implementing the interposer routing for a wide-I/O memory to find out which had the best cost outlook. These puppies have lots of connections – like, 1200 per chip. The idea was to connect two such interfaces, each with four channels of 128 I/Os. Each channel had 6 rows of 50 microbumps; microbump pitch was 40 µm along a row and 50 µm along a column. The two chips simply needed to talk to each other on the interposer.
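
Just to put those numbers together (my arithmetic, using only the figures quoted above), the bump count and the rough footprint of each channel's bump field fall out directly:

    # Quick tally of the wide-I/O microbump array (my arithmetic, not additional Imec data).
    ROWS_PER_CHANNEL = 6
    BUMPS_PER_ROW = 50
    CHANNELS_PER_CHIP = 4         # four channels of 128 I/Os each
    ROW_PITCH_UM = 40             # bump-to-bump pitch along a row
    COLUMN_PITCH_UM = 50          # row-to-row pitch along a column

    bumps_per_channel = ROWS_PER_CHANNEL * BUMPS_PER_ROW        # 300
    bumps_per_chip = bumps_per_channel * CHANNELS_PER_CHIP      # 1200, as quoted

    # Center-to-center extent of one channel's bump field.
    width_um = (BUMPS_PER_ROW - 1) * ROW_PITCH_UM               # 1960 µm, about 2 mm
    height_um = (ROWS_PER_CHANNEL - 1) * COLUMN_PITCH_UM        # 250 µm

    print(bumps_per_chip, width_um, height_um)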

The cheapest, most traditional approach is to use PCB (or PWB) technology. An aggressive version would have 20-µm pitch and 15-µm vias. This approach resulted in an 8-layer board; you can see the layout below – lots of routing all over the place. Wire lengths were, on average, 180% of the die spacing.

 

[Image: Laminate.png]

 

Next was a semi-additive copper process – more aggressive dimensions and more expensive. Line pitch was 10 µm; vias were 7 µm. The tighter routing allowed connectivity with only 4 layers, and the average wire length was 166% of the die spacing. You can see the slightly less colorful result below.

 

[Image: RDL.png]

 

Finally, they took an expensive approach: damascene metal lines, moving from the PCB fab to the silicon fab. But this got them down to 2-µm pitch with 1-µm vias, and that was enough to run the wires straight across on 2 layers with no extra routing. In other words, wire lengths were equal to the die spacing. You can see this in the following picture.

 

[Image: Damascene.png]

 

So what happens to the overall cost? The last one is nice, but expensive to build. And here is where yield comes in. Because the “most expensive” option uses only two layers, it has the best yield. And that yield more than compensates for the pricier processing, making it the cheapest option overall.
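
The shape of that tradeoff is easy to sketch. If you assume a per-layer processing cost and a per-layer yield for each option (the numbers below are made up for illustration; they’re not Imec’s), then the cost of a good interposer is the total processing cost divided by the compound yield, and fewer layers can beat cheaper layers:

    # Illustrative cost-vs-yield sketch. The per-layer costs and yields are made up;
    # only the structure (compound yield falls with layer count) reflects the argument above.
    def cost_per_good_part(cost_per_layer, yield_per_layer, layers):
        total_cost = cost_per_layer * layers
        compound_yield = yield_per_layer ** layers   # assume independent defects per layer
        return total_cost / compound_yield

    options = {
        "laminate, 8 layers":      cost_per_good_part(1.0, 0.90, 8),
        "semi-additive, 4 layers": cost_per_good_part(1.6, 0.93, 4),
        "damascene, 2 layers":     cost_per_good_part(3.0, 0.97, 2),
    }
    for name, cost in sorted(options.items(), key=lambda kv: kv[1]):
        print(f"{name}: {cost:.1f} (relative cost per good interposer)")

With those made-up inputs, the two-layer damascene option comes out cheapest, which is the same qualitative ranking Imec describes.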

 

They didn’t give out specific cost numbers (they typically reserve those for their participants), but the net result is that they believe the damascene approach to be the most effective.


Images courtesy Imec.

Nov 05, 2013

A Touch Technology Update

posted by Bryon Moyer

This year’s Interactive Technology Summit took the place of what had been the Touch Gesture Motion conference for the last two years. It was expanded to four days instead of two, and it moved from last year’s Austin retreat (a posh golf resort a $30 cab ride from anything except golf) to downtown San Jose. The content also took on displays in general, a topic to which the last day was dedicated.

So, with a broader brief, it braved the vagaries of being located where all the engineers are. In other words, where engineers can easily attend, but where, on their way to the conference, they can quickly stop off at the office to take care of one little thing, and then this call comes in and then that email arrives and… well… maybe they won’t make it after all. The upshot: attendance did seem a bit sparse. (And I’m speculating on the reasons.)

I had to pick and choose what I checked out, given that the TSensors conference was happening in parallel with it (I don’t seem to make a good parallel processor). My focus has generally been on the “smart” aspects of this technology, namely touch, gestures, and motion. There wasn’t much new in the motion category; we’ll look at touch today and then update gestures shortly.

The last two years seemed to be all about the zillion different ways of recording touches. This year there was less of that, but Ricoh’s John Barrus dug deeper into the whole pen/stylus situation. For so long, it seems, we’ve been wowed by touch technology that enables more or less one thing: point and click. (OK, and swipe and pinch.) We’ve been all thumbs as we mash our meaty digits into various types of touch-sensitive material (mostly ITO).

The problem is, our fingers are too big to do anything delicate (well, speaking for myself anyway, as exemplified in my abortive attempt to play a word scramble game on the back of an airplane seat). And they cover up what we’re trying to touch. Which is largely why I’ve resisted the wholesale switch from things like keyboards and mice to everything-index-finger. (Yeah, some teenagers think they can two-finger type quickly… perhaps they can, for two fingers, but I’ll blow them away any day with a real keyboard, which is important when what you do is write for a living…)

So I was intrigued to see this presentation, which took a look at a wide variety of stylus and pen approaches, both for personal appliances as well as large-format items like whiteboards. Two clear applications are for note-taking and form-filling. I needed a new laptop recently, and I researched whether stylus/touchscreen technology was fast and fine enough for me to get rid of my notebooks and take notes onscreen. (I don’t like to type in meetings – it’s noisy, and I somehow feel it’s rude.) The conclusion I came to was that no, it’s not ready for this yet. So it remains an open opportunity.

He also noted that tablets with forms on them were easily filled out in medical offices by people that had no computer experience; just give them the tablet and a stylus, and it was a very comfortable experience. (Now that’s intuitive.) (And if you wonder why this works for forms but not for notes, well, you haven’t seen my handwriting…)

The technologies he listed as available are:

  • Projected capacitance (“pro-cap”) combined with electromagnetic resonance for a passive pen (i.e., no powered electronics in the pen)
    • The pen needs no battery
    • Good palm rejection – ignores the surface when the pen is in use
    • Good resolution
    • Pressure-sensitive
    • But higher cost
    • Samsung has tablets using this (Galaxy Note)
  • Pro-cap with an active pen
    • Can be done in a large format (Perceptive Pixel has done up to 82”)
    • Similar benefits to the prior one
    • But again, higher cost plus the pen needs a battery
  • “Force-sensitive resistance”
    • Grid of force-sensitive resistors
    • Pressure-sensitive
    • Multi-touch
    • Scales up well
    • But it’s not completely transparent.
    • Tactonic Technologies uses this.
  • There’s an interesting “time-of-flight” approach where the pen sends a simultaneous IR blip and ultrasonic chirp; the delays are used to triangulate the pen position (there’s a rough sketch of the math after this list)
    • The cost of this is lower
    • Multiple pens can be tracked independently (say, for whiteboards with more than one person)
    • But it’s not really a touch technology; it’s pen-only
    • The Luidia eBeam Edge and Mimio Teach use this
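
To make the time-of-flight idea concrete, here’s a rough sketch of the math (mine, not any vendor’s actual algorithm): the IR blip marks the emission time, each ultrasonic delay at a receiver converts to a distance, and the pen sits at the intersection of the two range circles.

    import math

    SPEED_OF_SOUND_M_S = 343.0   # approximate, at room temperature

    def pen_position(delay1_s, delay2_s, receiver_spacing_m):
        """Locate the pen from ultrasonic delays at two receivers.

        The IR blip is treated as instantaneous, so it timestamps the chirp's
        emission. Receivers sit at (0, 0) and (receiver_spacing_m, 0); the pen
        is assumed to be on the positive-y side of that baseline.
        """
        r1 = delay1_s * SPEED_OF_SOUND_M_S
        r2 = delay2_s * SPEED_OF_SOUND_M_S
        d = receiver_spacing_m
        x = (r1**2 - r2**2 + d**2) / (2 * d)     # intersect the two range circles
        y = math.sqrt(max(r1**2 - x**2, 0.0))
        return x, y

    # Example: delays of 0.92 ms and 1.05 ms with receivers 30 cm apart put the
    # pen roughly 10 cm along and 30 cm out from the first receiver.
    print(pen_position(0.92e-3, 1.05e-3, 0.30))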

Then there are a bunch of light-based approaches, some of which we’ve seen before. But, unlike the screens that carry light through the glass, most of these project and detect light above the surface. One simple approach is to shine light across the surface and detect the shadows cast by a finger or pen; Smart Technologies and Baanto use this approach.

Other large-format (i.e., whiteboard) installations rely on a camera and some kind of light source.

  • In some cases, the source is the pen itself, which emits an IR signal (Epson BrightLink).
  • In another, a projector casts a high-speed structured pattern onto the surface that’s sensed by the pen (TI’s DLP technology in Dell’s S320wi).
  • Another version sends an IR “light curtain” down over the surface; a camera measures reflections as fingers (or whatever) break that curtain (Smart Technologies LightRaise).
  • There’s even a whiteboard with a finely-printed pattern and a pen with a camera that detects the position and communicates it via Bluetooth (PolyVision Eno).

The general conclusion was that the various technology options work pretty well; it’s mostly cost that needs to be solved.

There are also some issues with collaboration for whiteboard and video conferencing applications, but we’ll cover those later.

Oct 31, 2013

Simpler MEMS Models for ASIC Designers

posted by Bryon Moyer

Some time back, we took a look at the library of mechanical elements in Coventor’s MEMS+ tool for building MEMS device models. In the “be careful what you wish for” category, making it easier to connect elements into models meant that engineers started connecting more elements into models, and the models got bigger.

Big models can stress a tool out, resulting in slow results and resource starvation.

Well, they’ve just released version 4 of MEMS+, which, right out of the gate, addresses those concerns, enabling quicker handling of more complex models.

But there’s a much more subtle way that they’ve addressed the needs of ASIC designers. Each MEMS element will need an accompanying ASIC to clean up the signals and abstract away a lot of the mechanicalness of the element so that electrical types – or, more likely, digital types – can understand the sensor outputs in their own language.

And, of course, you’re going to want to get started on that ASIC design as soon as possible. But the whole purpose of the ASIC is to turn messy sense element behavior into clean outputs, and in order to do that, you need to know exactly what messy signals you’re going to start with. And you don’t want to wait until the device is finished to do that; you want to model the behavior ahead of time.

The thing is, mechanical folks use finite element analysis and other such schemes for simulation; the ASIC designers will be using Verilog-A. MEMS+ is integrated into Cadence’s Virtuoso tool, so Virtuoso users can actually do their modeling using MEMS+ via a proprietary scheme. But Verilog-A can be used anywhere, and not everyone uses Virtuoso.

What that’s meant in the past is that the MEMS designers have had to hand-craft early Verilog-A models for the ASIC guys to get started on. Those models are tedious to create, and in their effort to keep the task reasonable, things would get left out of the model. And sometimes those left-out things would matter. Which meant that you wouldn’t find out about them until silicon came out, and you’d have to take another turn at the ASIC.

The next step was that MEMS+ could create a full Verilog-A model automatically. This would include all of the non-linearities and such, but it was a huge model, with thousands of “degrees of freedom” (i.e., knobs and variables) and would realistically take far too long to simulate.

So with this release, MEMS+ will let you create a simplified model by selecting specific behaviors and ranges to focus on. These could reflect particular modes or non-linearities of interest. MEMS+ can then fix the other parts, reducing the degrees of freedom from thousands to tens. Which results in a dramatic speedup – like 100X.
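
Coventor hasn’t spelled out exactly how MEMS+ does the reduction, but the general flavor of collapsing a big mechanical model onto a handful of retained modes looks something like this generic modal-truncation sketch (not Coventor’s algorithm):

    import numpy as np
    from scipy.linalg import eigh

    def reduce_model(M, K, n_modes):
        """Project full mass (M) and stiffness (K) matrices onto the lowest n_modes modes.

        Generic modal truncation: a model with thousands of degrees of freedom
        shrinks to tens, which is where the big simulation speedup comes from.
        """
        eigvals, eigvecs = eigh(K, M)        # generalized eigenproblem K v = w^2 M v
        basis = eigvecs[:, :n_modes]         # keep the lowest-frequency mode shapes
        M_r = basis.T @ M @ basis
        K_r = basis.T @ K @ basis
        return M_r, K_r, np.sqrt(eigvals[:n_modes])   # reduced matrices, mode frequencies (rad/s)

    # Toy example: a chain of 1000 unit masses and springs reduced to 10 modes.
    n = 1000
    M = np.eye(n)
    K = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    M_r, K_r, omegas = reduce_model(M, K, n_modes=10)
    print(M_r.shape, K_r.shape)              # (10, 10) (10, 10)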

This approach can be used on a wide range of sensors – as long as the movement of the element is only a fraction of the surrounding air gap. There is, however, one behavior that this model doesn’t support; it affects some sensors and applies to essentially all actuators: it’s called “pull-in.”

The idea is that, when you apply an electrostatic field that pulls on a mechanical element, the element will resist thanks to the mechanical restoring force – essentially, it’s a spring pulling back against the field. But at some point, the field overwhelms the restoring force, and the behavior is no longer linear – the element gets “pulled in” to close the gap.
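
For the textbook version of this (an ideal parallel-plate actuator with a linear spring, which is my illustration, not anything specific to MEMS+), the snap happens once the plate has moved a third of the way across the gap, and the pull-in voltage has a simple closed form:

    import math

    EPS0 = 8.854e-12   # permittivity of free space, F/m

    def pull_in_voltage(k, gap, area):
        """Pull-in voltage of an ideal parallel-plate actuator with a linear spring.

        The electrostatic force eps0*A*V^2 / (2*(g - x)^2) can only balance the
        spring force k*x up to x = g/3; beyond that the plate snaps in.
        V_pi = sqrt(8*k*g^3 / (27*eps0*A)).
        """
        return math.sqrt(8 * k * gap**3 / (27 * EPS0 * area))

    # Hypothetical numbers: 1 N/m spring, 2 µm gap, 100 µm x 100 µm plate -> about 5 V.
    print(pull_in_voltage(k=1.0, gap=2e-6, area=(100e-6)**2))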

I sort of picture it this way (if you’re squeamish, you might skip this bit): picture standing some distance in front of an operating airplane jet engine, facing away from it. With earplugs. Good ones. You feel the pull behind you, but you can lean forward and stand your ground like a boss. Feeling brave, you step back a bit. The pull gets stronger, but you man up and show the universe who’s in charge: you are. Yeah, baby. You repeat this, working harder and harder against the engine’s suction, until suddenly, “whoosh.” Um… yeah. Say no more. Non-linear to say the least.

That discontinuity is pull-in. And it’s not included in these simplified models. It’s probably not a good thing to have in a sensor (although you’d want to know if it’s going to be an issue); it’s actually a useful feature for actuators since it gives you a good, positive contact.

One bit of good news with these simplified models: they run independently of MEMS+. So, unlike the Virtuoso-integrated approach, which requires MEMS+ in the background, you don’t need a MEMS+ license to use the simplified model. Obviously a MEMS guy needs a license to create the model, but the ASIC designer doesn’t need a separate license to run it.

You can find out more about MEMS+ 4 in their release.
