There’s been a quiet development brewing that you could file under “W” for “What’s old is new.” Non-volatile memory (NVM) is seeing renewed attention as Logic NVM, but with a twist – gone is the requirement for a boutique process. There’s enough activity here to warrant a one-day self-titled convention specifically dedicated to developments and usage (and particularly, quality and reliability). But I’m getting ahead of myself. To understand what’s new, we must understand what’s old.
There have essentially been three fundamental non-volatile technologies that have sold anything substantial. The oldest, metal fuses, has long since disappeared from mainstream (if not all) usage. The two that remain are floating-gate and anti-fuse. Floating gates are disconnected pieces of conductor; a controlled amount of charge is transferred to or from them, typically by either hot-electron injection or Fowler-Nordheim tunneling; anti-fuses are created by rupturing oxide, either between two layers of metal or between metal and poly or silicon.
Both have been delicate technologies, taking a lot of tuning to ensure that they’re reliable. But for a variety of reasons – primarily oxide thickness and specific anti-fuse construction – these have both required changes to your standard garden-variety logic process. It’s not easy to do; lots of characterization and reliability testing is required, and because you’re on a custom process, you don’t get the benefit of lots of high-volume material, either for generating qualification samples or for driving down cost. Worse yet, you’re always behind the curve – you can’t get onto the fastest, most aggressive process nodes.
The Logic NVM development has focused on implementing non-volatile memory on standard logic processes. They’re doing things the processes weren’t necessarily designed to do, so it may take a bit longer to get the process qualified after it’s ready to roll for logic, but it doesn’t actually require a change to the process. This allows a future roadmap that, for practical purposes, is as aggressive as that available for logic.
You’ve got one chance
There are two basic approaches to Logic NVM: one-time programmable (OTP) and multiple-times programmable (MTP). There’s actually something in between, called few-times programmable (FTP); that’s going to be the topic of a follow-up article. Here we’ll focus on straight MTP and OTP.
There’s really no relationship between the two technologies; it’s not like you do something to OTP to make it MTP. So there’s not a logical place to start; therefore we’ll arbitrarily start with OTP. OTP uses an oxide rupture process – essentially an anti-fuse – to create a connection where there wasn’t one before. This is done by placing a strong field across the oxide, strong enough to cause breakdown and essentially “spike” the rupture location to make it conductive. Once that’s done, there’s no going back; hence one-time programmability.
There are two main companies pursuing this route, Kilopass and Sidense. If you’ve been paying attention to FPGA Journal, you may recognize Kilopass as the source of the configuration cells for SiliconBlue’s technology. Both companies sell IP for use in SoC designs, which is why using a standard logic process is critical. They have different structures: Kilopass refers to their structure as an XPM cell; Sidense has what they call a 1-T or split-gate cell. Both rely on breaking down the gate oxide of a transistor to create a connection from gate to channel. Kilopass uses a two-transistor (2-T) cell, one with a thicker gate oxide for selecting and one with a thinner oxide for programming. Sidense sort of slides the two together so that it looks somewhat like you have one transistor with a gate oxide that goes from thick to thin in the middle.
They each have something to say about why they’re better. Or why the other is worse. Of course, a 1-T cell would presumably be smaller and allow greater density. Sidense also claims that the place in the transistor where they break down is very predictable, giving a very tight fuse resistivity distribution as compared to the 2-T cell. They say that the 2-T cell may break down into unpredictable parts of the channel, where doping levels may vary (e.g., is it in the “Halo” implant intended to lower leakage? In the lightly-doped drain portion (LDD)?), creating a wider distribution – including so-called “tail bits,” the bits at the tail of the distribution that may have to be programmed more than once to get them up to snuff. On the other hand, Kilopass claims that their cell is in all ways standard for the process, whereas the split-gate cell requires the violation of some DRCs to work, potentially making a fab more chary about building it and raising reliability questions.
It’s not clear which of these marketing messages will win, or whether any of the issues they cover will even have anything to do with who wins.
The applications for this technology vary by density. At the low end, trimming, security keys, and configuration are typical; at the high end, data store and code store are more the norm.
That was so much fun, I want to do it again and again
For those applications requiring the ability to read and rewrite, MTP technology is used. This relies on Fowler-Nordheim (FN) tunneling, reflecting an E2PROM (and half of FLASH) heritage. In the old days, the tunnel oxide was a carefully constructed thin oxide as compared to the gate oxide; while today’s FLASH tunnel oxide hasn’t gotten any thicker, the gate oxide has shrunk far below it. But the requirement for the tunnel oxide is part of what makes FLASH processing boutiquey. With Logic NVM MTP technology, the I/O oxide – the thickest available oxide, needed to accommodate 3.3-V and 2.5-V I/O signals – is used. You need as thick an oxide as possible to ensure that the voltages required to program and erase are well outside the bounds of the voltages that exist in normal operation. That helps to ensure that the charge placed on the floating gate stays there.
Of course, one could legitimately ask the question, if I can do NVM on a logic process, why would I ever use traditional E2 or Flash? Well, nothing is free, of course. In a traditional process used for cells that are programmed by FN tunneling, the floating gate is polysilicon sandwiched between the silicon on one side and another poly layer above it (separated by oxides, of course). That upper poly provides a capacitor for use in programming. The coupling is key, and that coupling is determined (for a given insulator) by the thickness and area of the capacitor. In order to keep the area as small as possible, the oxide thickness is tightly controlled as a part of the process.
Well, this upper poly layer is eliminated in the Logic NVM incarnation. Instead, the cell is “unfolded” – the job of that upper poly is actually handled by a diffusion region in silicon. Instead of stacking the capacitor on top of the floating gate, the capacitor is placed next to the cell; the floating gate poly is run sideways to overlay this diffusion. So just unstacking makes the cell bigger.
But it doesn’t stop there. While the coupling between the two poly layers in the stacked version is set by the oxide thickness, which can be controlled for the specialized process, in the Logic NVM version, the oxide thickness is simply whatever the process provides – if you were to have a separate step to control that thickness, it would no longer be a standard logic process. So you’ve got what you’ve got. As a result, the capacitive coupling can only be controlled by the area of the capacitor – and, as luck would have it, it has to be big. The result is a cell that is more or less 20 times larger than a standard E2 cell.
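To make the area argument concrete, here’s a rough, illustrative sketch of the coupling-ratio arithmetic. With the oxide thickness fixed by the logic process, the only knob left is the control-capacitor area; all of the numbers below are hypothetical, chosen only to show the trend, not taken from any vendor’s cell.

```python
# Why the unfolded Logic NVM cell grows: with oxide thickness fixed by the
# process, coupling ratio can only be raised by enlarging the capacitor area.

EPS0 = 8.854e-12          # vacuum permittivity, F/m
EPS_OX = 3.9 * EPS0       # SiO2 permittivity

def cap(area_um2, t_ox_nm):
    """Parallel-plate capacitance in farads for area in um^2, oxide in nm."""
    return EPS_OX * (area_um2 * 1e-12) / (t_ox_nm * 1e-9)

def coupling_ratio(ctrl_area_um2, gate_area_um2, t_ox_nm):
    """Fraction of the programming voltage coupled onto the floating gate."""
    c_ctrl = cap(ctrl_area_um2, t_ox_nm)
    c_gate = cap(gate_area_um2, t_ox_nm)
    return c_ctrl / (c_ctrl + c_gate)

# With the same oxide on both sides, the ratio depends only on the area ratio:
# to get ~0.9 coupling, the control capacitor must be ~9x the tunnel-side area.
for area_ratio in (1, 4, 9):
    print(area_ratio, round(coupling_ratio(area_ratio * 0.01, 0.01, 7.0), 2))
```

Since the oxide is the same on both sides of the floating gate, the thickness cancels out of the ratio entirely; area is genuinely the only lever left, which is the heart of the size penalty.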
What this all means is that the cost savings from going to a Logic NVM process for MTP really accrue only for smaller memory sizes, where the larger cell size is more than made up for by the cheaper process. If a large memory is needed – say, on the order of megabits – the size of the Logic NVM cell starts to become a liability, and the cost of going to a full-on E2 cell starts to look more attractive. This is also a reason why you probably won’t see efforts to do multi-level Logic NVM cells. The sophisticated circuitry required to sense multiple bits on a single cell adds area, and you amortize that area over the number of bits. So if you have a large memory, you’ve saved a ton by going multi-level. But for smaller memories, it’s just not worth it (especially when you consider the not-inconsiderable R&D required to get multi-level cells working).
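The crossover logic can be sketched as back-of-the-envelope arithmetic: a fixed cost adder for the boutique process versus a ~20x per-bit area penalty for the logic-process cell. Every cost figure below is an invented placeholder purely for illustration; only the shape of the trade-off reflects the article’s point.

```python
# Toy crossover model: Logic NVM avoids the boutique-process cost adder but
# pays ~20x the area per bit. All numbers are invented for illustration.

CELL_AREA_E2 = 1.0        # relative area of a standard E2 cell
CELL_AREA_LOGIC = 20.0    # relative area of a logic-process MTP cell (~20x)
AREA_COST = 1.0           # cost per unit of cell area (arbitrary units)
FLASH_PROCESS_ADDER = 5e5 # fixed cost of the specialized process (assumed)

def cost_logic_nvm(bits):
    return bits * CELL_AREA_LOGIC * AREA_COST

def cost_embedded_e2(bits):
    return FLASH_PROCESS_ADDER + bits * CELL_AREA_E2 * AREA_COST

# Below this bit count the logic-process cell wins on cost; above it, E2 does.
crossover = FLASH_PROCESS_ADDER / ((CELL_AREA_LOGIC - CELL_AREA_E2) * AREA_COST)
print(f"crossover at ~{crossover:,.0f} bits")
```

With these made-up constants the crossover lands in the tens-of-kilobits range; the real-world point depends on actual wafer and mask costs, but the structure of the argument is the same.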
Virage and Impinj use this technology. And – breaking news – Virage and Impinj are now one – kind of. Virage has just purchased Impinj’s NVM technology (Impinj will continue with their RFID business). Both erstwhile-separate companies use a differential cell to increase data retention. What that means is that, instead of charging up one cell and then measuring that charge against some fixed standard to see if it’s programmed, they program two cells – one with a 1 and one with a 0, and compare them against each other. Done one way is programmed; reversing the sides is erased. If the charge leaks away from one or the other side, then the “compare” point drifts with it – that is, the center point keeps shifting, making it self-referencing and lengthening the life of the cell.
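The self-referencing idea above can be shown with a toy model: each bit is stored as a pair of floating gates programmed to opposite states, and the read compares the two sides against each other rather than against a fixed reference. The charge units and leak rate here are made up for illustration; this is not any vendor’s actual sense scheme.

```python
# Toy model of a differential (self-referencing) NVM cell.

def write_bit(value):
    """Return a (left, right) charge pair encoding the bit."""
    return (1.0, 0.0) if value else (0.0, 1.0)

def leak(pair, fraction):
    """Both sides lose the same fraction of their charge over time."""
    return tuple(q * (1.0 - fraction) for q in pair)

def read_differential(pair):
    """Compare the two sides against each other; no absolute reference."""
    return pair[0] > pair[1]

def read_fixed(pair, threshold=0.5):
    """Single-ended read against a fixed threshold, for contrast."""
    return pair[0] > threshold

cell = write_bit(True)
aged = leak(cell, 0.6)          # 60% of the charge has leaked away
print(read_differential(aged))  # still True: the comparison point drifted too
print(read_fixed(aged))         # False: 0.4 < 0.5, the fixed-reference read fails
```

The point of the sketch: as charge leaks from both sides, the differential read keeps working long after a fixed-threshold read would have flipped, which is exactly the retention benefit the differential cell is after.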
Naturally this kind of technology works best in applications needing more or less unlimited rewrites – data storage at the high end, things like rewritable encryption keys on the low end. Of course any of the OTP applications can use MTP as well; that becomes more of a cost and technology question. The only lurking issue for future iterations of this technology is one of scaling: below oxides of about 50 Å, electrons can simply escape from the floating gate, even with no bias applied. So the future beyond 45 nm isn’t clear if 3.3-V and 2.5-V oxides are dropped. But this is a topic for another article.
There’s also a company called eMemory doing a different variation. (And this is confusing: all of the other companies mentioned have pretty obvious website URLs. But eMemory’s is ememory.com.tw. If you do ememory.com, without the .tw, you end up on Denali’s website. Doh!) They use hot-electron injection for programming, like the old UV-EPROMs. So they market themselves as MTP and OTP. In normal packaging, they’re OTP. If you want to reprogram, you need to be able to expose them to ultraviolet light to erase them.
How do I know they’ll work, and keep working?
MTP memories are the easiest to test because all the cells can be explicitly tested and then erased. Dealing with OTP, on the other hand, is tougher – it’s like trying to test matches by lighting them to see if they work. Obviously you can’t test the actual anti-fuses themselves. But just as in the old metal fuse days, there are ways of getting close. Such structures as test rows, test columns, redundancy and repair are used to minimize any programming problems and then to reduce the impact of any that exist. What that means is that pretty much the only thing that doesn’t get tested is the actual anti-fuse; all the circuitry intended to program it does get proven before being shipped.
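The redundancy-and-repair idea can be sketched as a simple address remap: logical rows that fail test get steered to spare physical rows. The class and field names here are illustrative, not any vendor’s actual repair scheme.

```python
# Minimal sketch of row redundancy/repair for an OTP array: rows that fail
# test are remapped to spare rows, so reads and writes never touch them.

class OTPArray:
    def __init__(self, rows, spares):
        self.rows = rows
        self.remap = {}                 # failed logical row -> spare physical row
        self.free_spares = list(range(rows, rows + spares))

    def repair(self, failed_row):
        """Remap a row that failed test to the next free spare; False if none left."""
        if not self.free_spares:
            return False
        self.remap[failed_row] = self.free_spares.pop(0)
        return True

    def resolve(self, row):
        """Translate a logical row address to the physical row actually used."""
        return self.remap.get(row, row)

arr = OTPArray(rows=1024, spares=4)
arr.repair(17)                  # row 17 failed test: steer it to a spare
print(arr.resolve(17))          # -> 1024, the first spare row
print(arr.resolve(18))          # -> 18, unchanged
```

The same structure covers column repair; the essential point from the text is that everything around the anti-fuse – decoders, drivers, remap logic – can be exercised and proven before shipment, even though the fuses themselves can’t be blown.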
Does that mean that there’s some risk that an anti-fuse might fail? Honestly? Of course, it has to. Question is, is the risk small enough to ignore? And clearly that’s what these guys have to prove in order to demonstrate that they are a reliable source of memory technology. Welcome to the life of an OTP salesguy.
As to reliability, let’s face it: all of these technologies are intended to remain in their programmed state for a long time. They all go for 10-year retention or longer – 20 for OTP – and 100,000 rewrites is typical. It is somewhat useful to compare reliability concerns. The biggest risk for MTP is loss of charge from the floating gate. The differential cell is supposed to help with that. In the old days, OTP technologies battled various migration and regrowth issues (“growback” was once a VERY dirty word). Today the vendors indicate that there are no specific mechanisms that threaten reliability; that once ruptured, the connection made is extremely stable.
Given that Logic NVM has found its way into extremely fussy types of systems like automotive and datacom, where high reliability and uptime are critical, it appears that some very hard-to-please people have decided that this stuff is stable. That said, the bulk of the sessions at the Logic NVM Conference were about reliability, so it’s clearly something that’s getting a lot of attention.
In a couple weeks we’ll look at few-time programmable memory and a couple of recent announcements of some creative new ideas.