Aug 26, 2014

A New 3D

posted by Bryon Moyer

3D has been tossed about quite a bit over the last few years. We can ignore the 3D TV craze that came and went like an evanescent avatar. But the two IC manifestations have been 3D transistors (i.e., FinFETs) and 3D package integration – stacking chips.

The latter is a more-than-Moore technology that combines multiple chips, each built on the process best suited to it, and lets you leverage high-volume off-the-shelf dice like memories instead of designing them from scratch.

But what if you want to scale like circuits vertically – that is, circuits that aren’t available off the shelf and that all require the same process? Either you build them laterally on a single chip or you build multiple chips and stack them.

Well, Leti is working on another option: monolithic 3D integration. What this amounts to is building a standard chip and then growing a new layer of silicon (or something) above it and building more circuits. Sounds pretty straightforward in concept, but it’s easier to visualize than it is to accomplish. They presented their status at the recent Semicon West gathering.

M3D_red.png

Image courtesy Leti

The biggest concern that always arises with these sorts of ideas is thermal. For the bottom layer, you build your transistors, implant your dopants, and then “activate” them using heat to get them moving to where they’re supposed to be. After that, you want them to stay there. They’ll keep moving if you keep the heat on, so once they’re set, you don’t want any more heat.

There are also apparently worries about the contact salicide stability in the presence of extra heat.

And where might the extra heat come from?

Well, when you build the next layer of transistors, you need to dope them and activate them again. If your bottom transistors are already where you want them, the extra activation will screw them up. Do you try to under-activate the bottom ones, hoping that the second activation will bring them in line?

That’s not the approach Leti is taking. They’re experimenting with a “crème brûlée” technique: use a broiler for the second-layer activation. That is, heat from the top so that only the top layer gets activated, in a short enough time that the heat doesn’t diffuse down and mess up the lower transistors.
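To get a feel for why a short, top-side anneal helps, here’s a rough back-of-envelope (mine, not Leti’s): dopant motion scales with the diffusion length √(Dt), and the diffusivity D follows an Arrhenius law. The D0 and Ea values below are textbook figures for boron in silicon, used purely for illustration.

```python
import math

# Back-of-envelope dopant diffusion during an anneal (illustrative only).
# Arrhenius diffusivity: D = D0 * exp(-Ea / (kB * T)); diffusion length ~ sqrt(D * t).
# D0 and Ea are textbook values for boron in silicon -- assumptions, not Leti's numbers.
KB = 8.617e-5   # Boltzmann constant, eV/K
D0 = 0.76       # pre-exponential factor, cm^2/s (assumed)
EA = 3.46       # activation energy, eV (assumed)

def diffusion_length_nm(temp_c, time_s):
    """Characteristic diffusion length sqrt(D*t), returned in nanometers."""
    temp_k = temp_c + 273.15
    d = D0 * math.exp(-EA / (KB * temp_k))  # cm^2/s
    return math.sqrt(d * time_s) * 1e7      # cm -> nm

# A long furnace-style anneal vs. a millisecond top-side flash at the same temperature:
print(diffusion_length_nm(1000, 1800))  # ~50 nm: plenty to smear an advanced-node profile
print(diffusion_length_nm(1000, 1e-3))  # ~0.04 nm: essentially no dopant motion
```

The exact numbers aren’t the point; the √t dependence is what makes a quick blast from above so much gentler on the layer underneath.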

Compatibility with existing processes is another consideration. You have to be able to connect the upper and lower transistors, and there’s no dedicated interconnect for that at present. Rather than define a new interconnect layer, they’re leveraging the local interconnect (LI) for that piece.

Finally, a big question: how to build and arrange the transistors and CMOS pairs – and other elements like NEMS devices that might want to ride along on the same chip? They’re playing with three different configurations.

The first is “CMOS over CMOS.” In other words, each layer (top and bottom) gets both N- and P-type transistors. They list FinFET over FinFET, Trigate/nanowire over Trigate/nanowire (all SOI), or FDSOI over FDSOI. But they also have a drawing showing an FDSOI transistor over a FinFET. Their claim is that two layers of 14-nm technology provide the scaling of a single layer of 10-nm technology.
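That claim passes a crude sanity check if you assume area scales with the square of the node’s linear dimension – a rough proxy at best, since node names no longer track feature sizes that cleanly:

```python
# Crude footprint comparison: two stacked "14-nm" layers vs. one "10-nm" layer.
# Assumes relative area goes as (node name)^2, which is only a rough proxy.
area_per_layer_14 = 14 ** 2   # relative area of a given circuit at 14 nm
area_10 = 10 ** 2             # relative area of the same circuit at 10 nm

footprint_two_14_layers = area_per_layer_14 / 2  # stacking two layers halves the footprint
print(footprint_two_14_layers, area_10)          # 98.0 vs. 100 -- roughly a wash
```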

The second option is to optimize the transistors by having N and P types on different layers. So, whereas the first option has CMOS pairs built laterally, they’re built vertically in this second option. This allows them to use different materials on the two layers. They’ve already tried germanium (Ge) for P over silicon for N. And they’ve leveraged different crystal orientations, with silicon [110] for P over silicon [100] for N. Next up they’ll try InGaAs for N over Ge for P.

The third option involves integrating NEMS over CMOS. We looked at their M&NEMS program last year (and that work continues).

They’ve already done some FPGA work just to see what kinds of improvements they could get. They used two stacked FDSOI layers and two levels of tungsten LI. Area improved by 55% (not surprising), but performance also improved by 23% and power by 12%. Win, win, win. Apparently going local matters.

We’ll update as we see new results.

Aug 25, 2014

Old-School Analog Outputs

posted by Bryon Moyer

Today we looked at the role of Freescale’s new FXLN83xxQ accelerometer for analyzing vibrations. But one feature of the accelerometer had me cocking an eyebrow: analog outputs.

We’ve covered a lot of sensors here before, and in the vast majority of cases, a sensor consists of a MEMS (or other) sensing element, an ASIC to clean up and digitize the signal, and then a series of registers where all the relevant data gets placed.

An outside entity, like a sensor hub, can then read those registers over a bus connection – typically I2C or SPI. What could be simpler?
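As a point of reference, the digital flow typically looks something like the sketch below (here in Python with the smbus2 library on a Linux-class hub). The bus number, device address, and register offsets are placeholders, not the register map of any particular part.

```python
from smbus2 import SMBus

# Typical digital-sensor flow: read the result registers over I2C.
# Bus number, device address, and register offsets are placeholders,
# not the actual map of any specific accelerometer.
I2C_BUS = 1
ACCEL_ADDR = 0x1D       # hypothetical 7-bit device address
REG_OUT_X_MSB = 0x01    # hypothetical start of the X/Y/Z output registers

def read_xyz_counts():
    """Read six output bytes and assemble signed 16-bit X/Y/Z samples."""
    with SMBus(I2C_BUS) as bus:
        raw = bus.read_i2c_block_data(ACCEL_ADDR, REG_OUT_X_MSB, 6)

    def to_signed16(msb, lsb):
        val = (msb << 8) | lsb
        return val - 0x10000 if val & 0x8000 else val

    return tuple(to_signed16(raw[i], raw[i + 1]) for i in (0, 2, 4))

print(read_xyz_counts())
```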

Well, I guess an analog output could be simpler: you eliminate all of that messy digital stuff. But it seems to me that running an analog signal halfway across town to get it to the analog inputs of a microcontroller (aka MCU, or whatever hub is used) would risk seriously degrading the analog value in a way that wouldn’t happen with a digital signal.

XL_schematic_red.jpg

Image courtesy Freescale.

I asked Freescale about this, and they justify it based on the wide variety of digital interfaces in use, in particular in industrial settings. Heck, they say that even CAN bus is leaving the confines of vehicles and moving into other applications.

Freescale makes lots of microcontrollers. That variety of MCUs partly reflects the diversity of interfaces they may talk to: rather than offering one large device with all possible interfaces, they offer different devices. And yes, they’re assuming (or at least hoping) that you’ll be using their MCU.

So the idea goes thusly: first off, you simply don’t run the analog signals halfway across town. In these applications, an MCU is likely to be right nearby. (If not, then you move it so that it is.) The MCU you choose will then reflect whatever bus you’re using, and that’s where you go digital. They prefer this, obviously, to having to offer a bunch of different versions of the sensor to suit the various digital protocols.

There’s one other convenient thing about digital registers, however: they’re good at storing values while the rest of the system goes to sleep for a while to reduce power. Well, apparently these analog outputs can manage the same trick: the internal electronics shut down between samples, but the output is held. This decouples the rate at which the MCU samples the analog outputs from the rate at which the sensor samples the system, and it allows current as low as 200 µA when running.
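On the MCU side, the analog flow then boils down to an ADC read plus a linear conversion, roughly as sketched below. The zero-g offset and sensitivity are placeholder round numbers, not FXLN83xxQ datasheet values, and the raw count would come from whatever ADC driver your MCU provides.

```python
# MCU-side handling of an analog accelerometer output: sample the ADC,
# then apply the linear transfer function. Offset and sensitivity are
# placeholder round numbers, not datasheet values for any specific part.
ZERO_G_VOLTS = 0.75          # hypothetical output voltage at 0 g
SENSITIVITY_V_PER_G = 0.23   # hypothetical sensitivity, volts per g

def counts_to_g(adc_counts, vref=3.3, resolution_bits=12):
    """Convert a raw ADC reading of the accelerometer output into g."""
    volts = adc_counts * vref / (2 ** resolution_bits - 1)
    return (volts - ZERO_G_VOLTS) / SENSITIVITY_V_PER_G

# e.g., a mid-scale reading from a 12-bit ADC:
print(counts_to_g(2048))
```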

That’s how they see it; if you see it differently, then your comments are encouraged below.

Aug 21, 2014

A HEMT Cool-Down

posted by Bryon Moyer

Heat has got to be one of the most annoying side-effects of doing useful electrical work. The more work we do, the more things heat up, changing the characteristics of the circuitry and, if we’re not careful, leading to early end-of-life or outright failure.

Heat is part of why we’ve gone to multicore instead of simply ratcheting up microprocessor clock frequencies forever. Greater dissipation is one reason we end up with power transistors that are larger than they need to be for electrical reasons. And when 3D ICs were first trotted out as an idea some years back, one of the immediate questions was how heat would be removed from the center of the stack.

We do lots of things to mitigate heat: elaborate cooling systems, heat spreaders in packages, and modified silicon designs to reduce thermal density. All of which add cost in one way or another.

Well, for one application, a different solution has been proposed. Gallium nitride (GaN) is a wide-bandgap material used for high-electron-mobility transistors (HEMTs) in high-power RF applications – radar, cellular base station radios, satellite radios, and the like. The GaN typically sits over a silicon substrate, with a transition layer to ease stresses due to mismatches in the crystal lattice spacing of the two materials.

These circuits have localized hot spots that have to be carefully managed (with heat flux that Element Six says rivals that of the sun). Metal is typically used to wick away heat, and we all know that copper is a good conductor of heat, topping out at about 400 W/mK. But we have looked at one material that is a far better heat conductor than copper: diamond. Diamond can conduct heat in the range of 1000-2000 W/mK.

Unlike copper, which uses electrons to conduct the heat away, diamond does so through vibrations of the crystal lattice – so-called phonons (quasiparticles used to describe crystal vibrations and how they propagate). So higher-quality crystals will spread heat better than high-defect crystals or polycrystalline depositions.
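A quick Fourier’s-law comparison shows what those conductivity numbers buy you. The spreader geometry and temperature drop below are made-up round figures, just to show how the heat flow scales directly with k.

```python
# Steady-state 1-D conduction (Fourier's law): Q = k * A * dT / L
# Conductivities are the figures quoted above; the geometry and temperature
# drop are made-up round numbers for illustration.
def heat_flow_watts(k_w_per_mk, area_m2, delta_t_k, thickness_m):
    return k_w_per_mk * area_m2 * delta_t_k / thickness_m

AREA = (1e-3) ** 2     # 1 mm x 1 mm spreader footprint (assumed)
DT = 10.0              # 10 K drop across the spreader (assumed)
THICKNESS = 100e-6     # 100 um thick spreader (assumed)

for name, k in [("copper", 400.0), ("diamond, low end", 1000.0), ("diamond, high end", 2000.0)]:
    print(name, heat_flow_watts(k, AREA, DT, THICKNESS), "W")
# copper ~40 W, diamond ~100-200 W for the same slab and temperature drop
```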

Element Six does sell diamond heat spreaders that can be included under standard GaN/silicon or GaN/SiC (silicon carbide) circuits, and they’ll help, but they place the diamond material some hundreds of microns away from the transistor gate, where the heat originates.

A better solution, they say, is to have a transistor consisting of GaN on a diamond substrate rather than a silicon substrate. The standard transition layer between silicon and GaN is also a barrier to a conductive path from gate to substrate, so they’ve eliminated that as well, replacing it with their own “secret sauce” of a transition layer.

By doing this, the diamond now sits only about 1 micron from the transistor gate, roughly tripling the heat dissipation.

GaN_on_Diamond_-_combined_2.png

 

Upper image courtesy Element Six; graph credit Professor Martin Kuball, Bristol University

Their actual production process leverages GaN/Si layers already in production. They put a handle wafer on top, flip them over, remove the silicon substrate and the transition layer, and then add their own transition layer and grow a polycrystalline diamond substrate. That substrate is strong, but it’s not thick enough for fab handling, so they temporarily affix another diamond wafer, which is eventually removed and re-used up to 10 times. (They’re working on a cheaper handle wafer solution for this last bit.)

GaN on Diamond allowed TriQuint and Raytheon to achieve a three-fold improvement in power density compared to GaN/SiC, allowing them to meet a challenge set by DARPA.

You can read more about the Raytheon achievement in their announcement.
