A 1T (or 2T) SRAM Bit Cell

Significant Change to SRAM Economics?

There are few semiconductor circuits as constant as the SRAM cell. Until now.

SRAM is important because it’s our fastest-performance memory tier – and it can take up a goodly chunk of your SoC area. But it’s an expensive beast (or else we’d use even more). For as long as I can recall, the basic reference circuit of the SRAM bit has stayed the same. It costs six transistors: a couple of back-to-back inverters, cross-coupled so that each one’s output feeds the other’s input to hold the state, plus two access transistors.



(Image source: Wikipedia)

There have been numerous tweaks and optimizations on this cell, but it has remained doggedly consistent for years. Now, however, for the first time I’ve seen, a proposal is on the table, announced at IEDM by a startup called Zeno, for a new SRAM cell. One that’s completely different from the one we know and love. And it requires only one (or two) transistors.

Seriously. Let’s not bother with in-between points of, oh, 5 or 3 transistors; naw, let’s get all quantum and jump all the way from 6 down to 1.

And, in case you’re wondering, it’s not a DRAM in disguise; it is truly static, with a non-destructive read and no refresh requirements.

Yup… I see those eyes narrowed in suspicion. If you could do it with one transistor, why are we just now finding out about it? What bizarre silicon production tricks and metamaterials are being brought to bear here?

Well, to hear Zeno tell it, there is nothing like that in their new cell; this is completely compatible with existing CMOS production. It requires one extra non-critical litho step and one extra implant. The CMOS compatibility is demonstrated by prototype chips that were built on a shuttle wafer, alongside other CMOS circuits.

So what the heck is going on here? They’re calling it Bi-SRAM because there’s an intrinsic bistable bipolar transistor in the structure they build. They take a basic MOSFET and then implant a buried N-well underneath. The buried N-well, the P-well, and the source/drain regions make up the NPN bipolar transistors.


(Image components courtesy IEDM)

Schematically, I prefer to draw it as follows. (It’s upside down from how they show it, but it’s easier for me to picture with the bipolar transistor collector-side up.)


And, when it comes down to it, the idea is pretty simple, and it hinges on that floating base. You can think of it as a normally closed container for carriers. In one state, the base is filled with carriers. This turns the bipolar transistors on, opening a current path through the buried layer (which is connected to VDD – or should we call it VCC?). In the other state, the floating base is depleted of carriers, turning the bipolar devices off. You read the state through the MOSFET. If there’s lots of current, then some of it is coming through the collector.


That’s the underlying state mechanism, but you also need to be able to write that state – that is, either fill the floating base region with carriers or drain the carriers away. They do this by leveraging the capacitance between the FET gate and the floating P-well.

They didn’t go into detail on the algorithm, so this is my super-simplified summary of my understanding of their high-level description. They program one state by jerking up – er – raising the word line with a carefully controlled rise time. This couples to the floating base, punching the base up and turning on the bipolar transistors. The state becomes self-sustaining: carriers flow in from the collector and are now locked in the floating base.

To write the opposite state, you pull the source line down to a negative voltage to drain the carriers out of the base, snapping the bipolar transistors off. Yeah, you caught that, didn’t you… This does require a negative voltage pump. There’s roughly 6-8% more support circuitry for the cell, so you don’t get a full 5/6 size reduction net net. Only 4/5. Hope you can live with that.
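To make the mechanics easier to picture, here’s a toy behavioral model of the write/read sequence as I’ve described it. This is purely illustrative pseudophysics – the class and method names are my own invention, not anything from Zeno – but it captures the essential point: state lives in the floating base, writes move charge in or out, and reads just sense current without disturbing anything.

```python
# Toy behavioral model of the Bi-SRAM cell as described above.
# Illustrative only -- names and structure are my own, not Zeno's.

class BiSramCell:
    def __init__(self):
        # The floating base (P-well) starts depleted: bipolars OFF.
        self.base_charged = False

    def write_one(self):
        # A controlled word-line ramp capacitively couples charge into the
        # floating base, turning the bipolars on; collector current from the
        # buried layer then sustains the state.
        self.base_charged = True

    def write_zero(self):
        # A negative source-line pulse drains carriers out of the base,
        # snapping the bipolars off.
        self.base_charged = False

    def read(self):
        # Non-destructive: the MOSFET current is higher when the bipolar
        # path through the buried layer is conducting.
        return 1 if self.base_charged else 0


cell = BiSramCell()
cell.write_one()
print(cell.read())   # 1: bipolars on, extra collector current
cell.write_zero()
print(cell.read())   # 0: base depleted, bipolars off
```

Note that, unlike a DRAM model, there is no refresh step anywhere in the sequence: once written, the state sustains itself.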


The performance of the cell as shown is roughly comparable to that of a normal 6T cell; perhaps a bit slower. For high-performance applications, they have a higher-speed version that adds an extra select transistor. The 1T version exhibits leakage current through the channel on unselected cells, and this makes for a slower read. The select transistor cuts off the leakage, increasing the current’s dynamic range, which makes it easier – and faster – to read.


While this doubles the number of transistors, they can mitigate the effects through some clever layout; they refer to this as a 1.7T cell since it’s only 1.7 times the size of the basic 1T cell. But, as a bonus, it’s 40% faster than a 6T cell.

So, depending on which device you choose, it’s always smaller than a 6T (divide the area by 5 or by 2.5), but it may be slightly slower or 40% faster. Implemented in a 28-nm node, the 1T cell has an area of 0.025 µm²; the faster cell is 0.044 µm².
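As a quick sanity check on those figures, the ratios hang together; the 6T area of 0.127 µm² comes from Zeno’s own comment in the discussion below.

```python
# Sanity-checking the quoted 28-nm cell areas.
AREA_6T = 0.127  # um^2, conventional 6T cell (per Zeno's comment below)
AREA_1T = 0.025  # um^2, Zeno 1T cell
AREA_2T = 0.044  # um^2, higher-speed cell with the extra select transistor

print(round(AREA_6T / AREA_1T, 1))   # -> 5.1: the "divide by 5"
print(round(AREA_6T / AREA_2T, 1))   # -> 2.9: the "divide by 2.5" region
print(round(AREA_2T / AREA_1T, 2))   # -> 1.76: hence the "1.7T" name
```

The raw cell-area ratio for the faster cell comes out nearer 2.9 than 2.5; I’d guess the quoted 2.5 nets out overhead such as the extra support circuitry mentioned earlier.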

Static power, meanwhile, drops by a factor of 5 (for either cell). Dynamic power is roughly the same as that of a 6T; maybe a little lower. For a battery-sipping IoT edge node, most of the time is spent sitting around snoozing – er – static, so one-fifth the power on a big (although now smaller) chunk of silicon could be exciting.

Now… I know what you’re thinking: maybe this works for the 28-nm node. But what about future nodes? Is there anything about this that might break a node or two down the road? And what about FinFET? They say that their approach scales with new nodes and that they can create a similar bistable mechanism for FinFETs. They see no obvious sunset clause.

A final question that I naively asked: is this only for embedded applications, or could it be used for stand-alone SRAMs as well?

And the answer there is, “Who still uses stand-alone SRAM?” OK, yeah, there is still a good business for a few players, and this technology could be put to use there. But that’s not the growing sector of the market in the way that embedded SRAM is. And so their focus will remain on the embedded side.

As you can well imagine, this is a licensing play, so, having taken the wraps off, these guys are now busily talking to potential licensees. Probably no surprise that this might turn some heads. If all works as promised, this could be one of those rare impulse functions acting on the market. You might be able to tell your grandkids one day that you were there when SRAMs went from six transistors to one… Pretty dramatic stuff.

More info:

Zeno Semiconductor

16 thoughts on “A 1T (or 2T) SRAM Bit Cell”

  1. Isn’t the real high-volume use of static RAM in the paging tables and caches of processors? Maybe the control memory for FPGAs?

    Curious if there are any SEU benefits over 6T designs.

  2. If the essence is the charge in the base region, then why is this cell better than a DRAM cell?
    And why are two NPN transistors needed, rather than a single one?

    In my opinion, the negative voltage on the gate terminal acts as a quasi current source that feeds the bases of the transistors. A positive gate voltage latches the two-transistor network into a stable state in which the base current of one of the transistors is minimized.

  3. Thank you for the comments.

    With regards to SEU, we haven’t conducted SEU tests at this point. However, we expect SEU benefits over 6T-SRAM due to the smaller cross-section area for SEU.

    With regards to DRAM comparison, the cell is static, so no refresh operation is needed. Read operation is also non-destructive.

    The static behavior is governed by the NPN bipolar formed by the buried N-well, P-well, and source/drain regions. There are two bipolars because the MOS has both a source and a drain, but one is sufficient to achieve bistability.

    We’ll update our website (www.zenosemi.com) with more details about the bi-stability mechanism. Please check back shortly.


  4. In terms of functionality, you’re correct that it can be thought of as 3T. However, with regards to the area footprint, it is 1T (the cell size at 28 nm is 0.025 µm², compared with a 6T size of 0.127 µm²). The schematic cell drawing in this article includes a well tap, which is shared among many memory cells.

  5. You are depending on the floating P-well to store the charges. Depending on the amount of charge stored, it could still leak out through the gate as well as through the source/drain and buried-layer junctions. If the buried layer is powered up to VDD, it could leak charges into the floating P-well and convert an unprogrammed cell into a programmed cell. We see this phenomenon in partially depleted SOI transistors. In essence, this is a DRAM equivalent, but probably with a smaller footprint of 4F² instead of the 6F² of most DRAMs.

    Deep N-well historically has more pipes (defects due to the higher implant energy), and that will exacerbate the leakage path into the floating P-well. It will be very dependent on the anneal process, while regular CMOS is much less sensitive to defects in the buried layer.

