
Ongoing Sensor Work

Reports from ISSCC

Wow. It’s been six years. Where has the time gone?

Six years ago, I attended a session on sensors at the ISSCC conference. I decided to do a series of articles on the various sensors that were presented. This was my intro to the world of sensors, and, since then, we have embraced sensors and MEMS as they have evolved into the bigger-picture internet-of-things (IoT) theme.

The sensors were newish and somewhat obscure at the time. MEMS then blew up huge, although, as with all favorite sons and daughters, there’s always the risk that a newborn will steal the attention. A year later, MEMS Industry Group (now MEMS and Sensors Industry Group) director Karen Lightman felt that MEMS had been pushed out of the spotlight by the notion of 450-mm wafers. But, in retrospect, she won: it’s 2017, and MEMS is still important. 450-mm wafers? Not so much.

So here we are, and ISSCC for 2017 happened a couple of months ago. Yes, they’re covering sensors, but these days it’s less from a “here’s how to do it” standpoint and more from a “here’s how to do it better” angle. Ah, maturity. (And it’s a coincidence that we publish this just days after covering a new player in the MEMS switch arena…)

So what follows is a summary of the sensor news I saw this year at ISSCC. I’m not going to dive into tons of details; you can find lots more in the proceedings.

Temperature sensors

Temp sensors were well represented, and for a number of purposes. A team from Delft and Ulm Universities, along with Broadcom, worked to improve a temp sensor that would be used for circuit compensation (paper 9.1). The challenge was to provide high resolution (reducing jitter) while keeping power low – requirements that don’t normally go together.

This involved, first, selecting the resistors that would be used to sense temperature. They needed something with a large thermal coefficient, low noise, and as unaffected as possible by voltage and stress. Diffusion resistors fail the last test; poly resistors respond to stress; and metal resistors simply aren’t resistive enough. That left silicided poly resistors, whose properties weren’t yet well understood – so part of the purpose of the work was to suss that out.

Meanwhile, getting power down while suppressing noise involved a Wien bridge sensor, a ΔΣ modulator, and more choppers than a Veg-O-Matic commercial. They compared n-poly and p-poly resistors, ultimately finding the p-poly version superior. Their conclusion was that the silicided poly resistors had proven their mettle.

The figure of merit used here, energy per conversion times the square of the resolution (lower is better), bested all alternatives except a MEMS implementation – and that was only barely. The other alternatives ranged from 0.65 to 13 (the worst one was also MEMS); this work achieved 0.13. The better MEMS approach achieved 0.12 – with the caveat that it is not compatible with CMOS processes, making it a challenging candidate for compensating circuits.
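For the curious, that figure of merit is straightforward to compute. Here’s a quick Python sketch – with made-up energy and resolution numbers, not values from any of these papers:

```python
# Resolution figure of merit for temperature sensors:
# energy per conversion times the square of the resolution; lower is better.
# The numbers below are illustrative, not taken from the ISSCC papers.

def resolution_fom(energy_per_conversion_j, resolution_k):
    """FoM in J*K^2 (often quoted in pJ*K^2)."""
    return energy_per_conversion_j * resolution_k ** 2

# A hypothetical sensor spending 10 nJ per conversion at 4 mK resolution:
fom = resolution_fom(10e-9, 4e-3)  # J*K^2
```

Note that halving the resolution figure at the same energy improves (lowers) the FoM by 4x, which is why high resolution at low power is such a tough combination.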

A team from the University of Michigan (paper 9.2) focused on temperature sensors for use in IoT – noting that most such systems have a real-time clock available for use as a reference. (They also included a self-contained RC generator for systems lacking that clock.) Achieving high resolution typically means precision ADCs – which draw too much power. Their goal was to develop an alternative.

Their approach was to build a subthreshold oscillator, whose frequency varies exponentially with temperature, although they refined the relationship with a more complex model (requiring two calibration points). They also suppressed such oscillators’ dependency on voltage by using “native” NMOS transistors (actually, two of them cascaded) as headers.
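To see how a two-point trim works with an exponential frequency-temperature relationship, here’s a minimal sketch – using a simple ln(f) = a + b/T model rather than the paper’s more complex one, with invented numbers:

```python
import math

# Two-point calibration of an oscillator whose frequency depends (roughly)
# exponentially on absolute temperature: ln(f) = a + b/T.
# This simple model and the numbers below are illustrative; the paper's
# refined model is more complex.

def calibrate(t1_k, f1_hz, t2_k, f2_hz):
    """Solve ln(f) = a + b/T from two (temperature, frequency) points."""
    b = (math.log(f1_hz) - math.log(f2_hz)) / (1.0 / t1_k - 1.0 / t2_k)
    a = math.log(f1_hz) - b / t1_k
    return a, b

def temperature_from_freq(f_hz, a, b):
    """Invert the model to read temperature from measured frequency."""
    return b / (math.log(f_hz) - a)

a, b = calibrate(300.0, 1.0e3, 350.0, 5.0e3)   # two trim points
t_est = temperature_from_freq(2.0e3, a, b)      # somewhere in between
```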

Compared with the technologies against which they measured results, their circuit was far smaller than all alternatives but one (although adding the RC reference in the absence of a system clock makes it the largest circuit by far, ballooning it from 8865 to 220,000 µm²). Their energy per conversion was by far the lowest, and their voltage sensitivity was also the best – in some cases by a couple of orders of magnitude. Their figure of merit was 3.2 as compared to alternatives ranging from 14 to over 300.

The last of our temperature sensor papers (paper 9.3, from Delft University) focused on bipolar-junction-transistor (BJT)-based temp sensors – and, in particular, the challenges in calibrating them. They are normally calibrated at the wafer level, but that requires hours for a one-point trim. Then they’re packaged – a step that can affect the calibration, but they’re not re-calibrated in the package.

Their ultimate solution involved adding a heater to the die itself. The heater could be pulsed on and settle in around a half second. By contrast, heating the entire package requires a long soak, making it slow. With the quick, local internal heating, they were able to do a two-point calibration to ±0.3 °C in 0.5 s.

Touch sensors

Our next story comes from Korea (paper 9.6), involving a team from Hanyang and Chung-Ang Universities, Leading UI, and MiraeTNS. They tackled a stylus for measuring pressure and tilt on a wide variety of screen sizes. Pressure was measured via a force sensor; tilt was measured via a gyroscope, both embedded in the stylus.

Their basic approach involved a new analog front end (AFE) that used what they referred to as a “multiple frequency driving method.” Essentially, the IC would sample the environment and find the low-noise frequency regions. There would typically be multiple such regions along the spectrum. They’d then focus the excitation frequencies in those low-noise bands to get a cleaner response.
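The band-selection step is conceptually simple; a toy version, with invented noise-floor numbers, might look like:

```python
# Sketch of the "multiple frequency driving" idea: sample the noise floor
# across candidate excitation frequencies, then drive only in the quietest
# bands. The noise values below are made up for illustration.

def pick_quiet_bands(noise_by_freq, n_bands):
    """Return the n_bands frequencies with the lowest measured noise."""
    ranked = sorted(noise_by_freq, key=noise_by_freq.get)
    return sorted(ranked[:n_bands])

noise_scan = {100e3: 0.9, 150e3: 0.2, 200e3: 0.7, 250e3: 0.1, 300e3: 0.5}
drive_freqs = pick_quiet_bands(noise_scan, 2)  # the two quietest bands
```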

Of the works they compared against, theirs was the only one to handle stylus tilt. They demonstrated on a 65-inch screen – more than four times the closest competitor, while achieving a 3.9-kHz frame rate (as opposed to hundreds of Hz for the alternatives) – at the cost of some power.

Note that the screen can also respond to finger touches, not just the stylus. In fact, the SNR was higher for a finger (61 dB) than for the stylus (50 dB).

Elsewhere in Korea, meanwhile, a team from Yonsei University and TRAIS worked to improve the SNR for styli on larger touchscreens (paper 9.7). Their approach was not to do the conventional charge-to-voltage conversion before processing, but rather to stay in the charge domain for much longer. The resulting smaller voltage swings make for lower-power operation. They developed a host of circuits to do this, and I won’t bore you with those details because, if I did, you’d see immediately that my grasp of subtle analog and DSP techniques is… shall we say… brittle. You’re welcome.

That said, their circuit was smaller than the others by a factor of at least 5; their power was less than the closest alternative by a factor of more than 2 (and by as much as 80); and their two figures of merit (one measures nJ/node, the other nJ/step) were at least 3 or 4 times better than the alternatives.


Gyroscopes

If you want a higher-performance gyro, it helps to use a closed-loop design. But if the gyro needs to go into a battery-powered device like a phone, that approach will use too much power. A team from the University of Freiburg and Hahn-Schickard in Germany (paper 9.4) looked for a way to reduce that power and provide an effective closed-loop gyro. Using a continuous-time ΔΣ modulator lowered the power.

Part of the problem is that you can operate in so-called split mode, where the actuation frequency is different from the sense frequency, giving good bandwidth but low gain. Matching the frequencies provides better gain at the expense of bandwidth. This is the tradeoff they were attacking. They had a feedback loop, but it included a loop filter that had a strong dependence on process, voltage, and temperature (PVT).

So their work involved tuning the drive frequency feedback to match the sense frequency, with a 9-bit current DAC for coarse control and a noise-observation frequency-tuning block for fine control. In the end, their power was less than that of several alternatives; and, compared with one alternative that drew slightly less power, they had one-fifth the bias instability (at 0.5 °/hr). The other lower-power alternative used off-chip frequency tuning.
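The coarse/fine split is a classic tuning pattern. Here’s a hypothetical sketch of the idea – the DAC step size, frequencies, and tuning loop below are invented, not the paper’s actual transfer functions:

```python
# Coarse/fine frequency matching: a coarse DAC code gets the drive frequency
# near the sense resonance; a fine loop closes the remaining gap.
# All numbers here are invented placeholders.

def coarse_tune(f_target_hz, f0_hz, lsb_hz, bits=9):
    """Pick the DAC code whose output frequency lands closest to the target."""
    code = min(range(2 ** bits),
               key=lambda c: abs(f0_hz + c * lsb_hz - f_target_hz))
    return code, f0_hz + code * lsb_hz

def fine_tune(f_hz, f_target_hz, step_hz, max_steps=1000):
    """Step toward the target until within one fine step."""
    for _ in range(max_steps):
        if abs(f_target_hz - f_hz) <= step_hz:
            break
        f_hz += step_hz if f_target_hz > f_hz else -step_hz
    return f_hz

code, f_coarse = coarse_tune(20_003.0, 19_000.0, 10.0)  # coarse lands nearby
f_matched = fine_tune(f_coarse, 20_003.0, 0.5)           # fine closes the gap
```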


Microphones

Yes, audio is still cool. Here we have an attempt to raise the signal-to-noise ratio (SNR) of a digital mic above the incumbent 65-dB level. (Yes, InvenSense has a mic with 74-dB SNR, but it’s an analog mic.) This was taken on by a team at Infineon (paper 9.5). The big change was to move from the traditional membrane-plus-backplate structure to one with two backplates – one on either side of the membrane. They then measured the differential signal, canceling even-order harmonics.

Once all the circuits settled, they reduced total harmonic distortion and achieved an SNR of 67 dB(A) (meaning frequencies are weighted according to how the human ear perceives them).
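It’s easy to convince yourself numerically that a differential readout kills the even-order terms. A quick sketch, with arbitrary nonlinearity coefficients:

```python
# Why a dual-backplate differential readout cancels even-order harmonics:
# if each backplate has the same nonlinearity y = x + a2*x^2 + a3*x^3 but
# sees the membrane displacement with opposite sign, the even-order (a2)
# term drops out of the difference. Coefficients are arbitrary.

def backplate(x, a2=0.1, a3=0.05):
    return x + a2 * x**2 + a3 * x**3

def differential(x):
    return backplate(x) - backplate(-x)  # = 2*x + 2*a3*x**3: no a2 term

x = 0.3
d = differential(x)
```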


Pressure sensors

This project, by a team from Delft University (paper 9.8), focused on the Wheatstone bridge making up the core of a read-out IC for a pressure sensor used for precision mechanical positioning control. They cited a couple of prior-art examples: one had power that was too high; the other was better, but featured spikes as an artifact of a chopping circuit.

They created a deadband around the spikes, blanking them and keeping them from interrupting the chopping clock. Their efforts achieved a noise efficiency factor of 5 – half that of the alternatives or less – and a power efficiency factor of 44 – one-fifth that of the closest alternative.
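As a sketch of the blanking idea – with invented timing, not the paper’s circuit – samples landing within a deadband of a chopping edge simply get dropped:

```python
# Blanking a deadband around chopper transitions: samples whose timestamps
# fall within +/- deadband of a chopping edge are dropped, so the spikes
# never reach the downstream clock/filter. Timing values are illustrative.

def blank_spikes(samples, chop_period, deadband):
    """Drop (t, value) samples that land within `deadband` of a chop edge."""
    kept = []
    for t, v in samples:
        phase = t % chop_period
        near_edge = phase < deadband or phase > chop_period - deadband
        if not near_edge:
            kept.append((t, v))
    return kept

# Spikes (value ~9) cluster near the chopping edges at t = 1.0, 2.0, ...
data = [(0.1, 1.0), (0.98, 9.0), (1.5, 1.1), (2.02, 9.5), (2.6, 1.0)]
clean = blank_spikes(data, chop_period=1.0, deadband=0.05)
```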


Position sensing

The next story should be near and dear to the chip industry, taken on by a team from Delft University and Catena Microelectronics (paper 9.9). It relates to wafer manipulation, sensing sub-nm displacement for more accurate positioning and overlay. The alternatives available are either big and bulky (linear encoders and interferometers) or they require electrical contact (capacitive). They wanted to leverage eddy currents as a way to detect position without making any contact.

The skin effect has limited the resolution and sensitivity of eddy current sensors, but the team found that using a higher frequency (above 100 MHz) solved this. Meanwhile, another problem was that the typical gap across which position is being measured is hundreds of microns – or even millimeters – from which you’re trying to get a noise-free reading of sub-nm positions. They reduced this “stand-off” distance to 105 µm and, with the aid of lots of circuits, achieved a resolution of 0.6 nm as compared to 65 nm and higher for prior art.
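The skin-depth math shows why the higher frequency helps: depth shrinks as 1/sqrt(f). A quick check, using textbook copper values (not anything from the paper):

```python
import math

# Skin depth: delta = sqrt(rho / (pi * f * mu)). Higher excitation frequency
# confines eddy currents to a thinner layer. Copper values at room temperature.

MU0 = 4e-7 * math.pi   # vacuum permeability, H/m
RHO_CU = 1.68e-8       # copper resistivity, ohm*m

def skin_depth_m(freq_hz, rho=RHO_CU, mu=MU0):
    return math.sqrt(rho / (math.pi * freq_hz * mu))

d_1mhz = skin_depth_m(1e6)      # roughly 65 um
d_100mhz = skin_depth_m(100e6)  # roughly 6.5 um
```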

Mass, substance detection

We wind things up with some ideas for detecting substances. This next one is actually more about data capture than the specifics of the sensor, but it relates to large arrays of sensors, which can be one approach to material detection.

When you have such a large array, you need to offload their signals somehow to get a result. That can mean a lot of wires connecting the array to the CMOS electronics – which are likely to be on a different chip if the sensor array isn’t something that can easily cohabitate with CMOS. So a team from Princeton (paper 15.1) looked at a novel way of reducing the width of that result bus (so to speak).

Since this had to be on the non-CMOS part of the system, they looked to thin-film transistors (TFTs) – low performance, but good enough for the task. Direct addressing of sensors would still use a lot of wires, so, instead, they took a page from communications: frequency hopping. Each sensor was connected to a digitally controlled oscillator (DCO); the control came from a bus on the CMOS side. Those lines were connected to the DCOs in unique combinations, so that each sensor (or at least each one along a particular line) would modulate a unique sequence of frequencies.

The idea is that you can have a very large number of different sequences, and, from those, you can use some subset that are different enough to be easily discriminated. For instance, a 3-bit hopping code gets you static binary access to 8 sensors, but, by using sequences, they could access 18 sensors.

I asked how they came up with 18, and it turns out that it’s not the result of some obvious calculation; they modeled the system using Matlab, and, from that, they found the answer to be 18. So each line is then driven as a sum of its DCOs, and, on the CMOS side, they can then pick off the frequencies according to the sequence, whose return will reflect the sensor outputs. This approach scales quickly; using 5 bits gets you access to around 350 sensors.
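The combinatorial advantage is easy to see in miniature. The paper’s usable-sequence count (18 for 3 bits) came out of Matlab modeling of discriminability, so the toy count below won’t match it – it just shows how sequences beat static codes:

```python
from itertools import permutations

# Sequence-based addressing in miniature: with k frequency slots, static
# assignment distinguishes only k sensors, but unique *sequences* of slots
# multiply the address space. (The paper's "18 sensors from a 3-bit code"
# came from Matlab modeling of which sequences stay discriminable; this toy
# count doesn't reproduce that.)

def unique_sequences(freq_slots, length):
    """All hop sequences of the given length with no repeated slot."""
    return list(permutations(freq_slots, length))

slots = [0, 1, 2]
static_ids = len(slots)                     # 3 sensors, statically addressed
seq_ids = len(unique_sequences(slots, 2))   # 6 sensors with length-2 sequences
```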

While the Princeton team tested this concept against an array of pressure sensors, we now move to that common application of such sensor arrays – mass spectrometry. We’ve covered that before in a MEMS context, but the next paper, from CEA-Leti-Minatec (paper 15.6), creates a NEMS version. Instead of a vibrating cantilever or bridge at the micro scale, it’s a “nano-gauge” – a thin wire anchored at both ends that then exhibits standard vibration modes.

We saw the concept of a nano-gauge several years ago, with CEA-Leti’s MnNEMS combination MEMS/NEMS platform, where micro-scale elements were sensed by nano-gauges. Here, as with MEMS versions, the resonance of the gauge is exploited. If a target molecule lands on the gauge, it changes the resonance, and that frequency shift can be detected. In particular, they looked at two vibration modes for detection. The way they detect the frequency is to monitor changes in resistance along the gauge as it flexes; the alternating tensile and compressive strain modulate the resistance.
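The underlying relationship is the standard resonant mass-sensing one: for a small added mass Δm on an effective resonator mass m, the fractional frequency shift is about -Δm/(2m). A sketch with invented device numbers:

```python
# First-order resonant mass sensing: df/f = -dm/(2m) for a small adsorbed
# mass dm on an effective resonator mass m. Device numbers below are
# invented illustrations, not CEA-Leti values.

def freq_shift_hz(f0_hz, added_mass_kg, resonator_mass_kg):
    """Approximate resonance shift from a small added mass."""
    return -f0_hz * added_mass_kg / (2.0 * resonator_mass_kg)

# A 1-zeptogram (1e-24 kg) molecule on a 1-femtogram (1e-18 kg) gauge
# resonating at 50 MHz:
df = freq_shift_hz(50e6, 1e-24, 1e-18)  # about -25 Hz
```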

The bulk of the paper details how they got this sensing to work. Less clear was how this could operate in a real-world context. One of the challenges of an array like this is discriminating specific molecules – especially if they’re in a mix. If you let a gas into the sensor, then molecules will attach all around, perhaps more than one on a given nano-gauge. So you have to figure out, gauge-by-gauge, what its signature is and which are carrying which molecule – and deal with multiples. Then you have to somehow flush the thing out so you can use it again.

I asked about this, and they mentioned that they have a way to let in one molecule at a time. That certainly solves the multiple-molecule issue, but it’s not clear how it works on a random gas – say, one where you’re trying to see if it contains some contaminant at ppm levels. If you don’t know the makeup of the gas (in other words, if you’re not specifically testing the sensor by feeding it a known stream of molecules), then can you pick up such a weak signal? How you purge the system after a test also remains unclear (unless I missed something obvious).

Finally, we go to a paper that was looking for a very specific molecule – one that should make us all very happy: dopamine. This was tackled by a team from New York University (paper 15.7). It’s important for the study of diseases like Parkinson’s, and it has to take into account that these sensors are detecting levels deep within live brains.

One standard test is called Fast Scan Cyclic Voltammetry (FSCV), wherein a reference electrode receives a triangular pulse with respect to a “working” electrode; the response is measured in the latter. During this test, there are apparently two responses that result: a low-level one that indicates the amount of dopamine and a higher-level background signal that doesn’t. That creates the challenge of isolating the weaker signal.
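The standard processing move in FSCV – not necessarily what this paper’s circuits do, but it frames the problem – is to record a background sweep and subtract it, leaving the small dopamine-dependent component:

```python
# Isolating the small dopamine-dependent signal in FSCV: record a background
# sweep with no analyte, then subtract it from each measurement sweep,
# leaving the faradaic (dopamine) component. Waveforms here are synthetic.

def background_subtract(sweep, background):
    """Pointwise subtraction of the large non-faradaic background current."""
    return [s - b for s, b in zip(sweep, background)]

background = [10.0, 12.0, 11.0, 10.5]  # large capacitive background
sweep      = [10.2, 12.6, 11.1, 10.5]  # background plus a small dopamine peak
signal = background_subtract(sweep, background)
```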

In order to reduce noise, they wanted to get the processing circuitry as close as possible to the sensors. Because they’re detecting electrons created through redox reactions between the dopamine and carbon, they used graphene attached above the readout ASIC. This is done after the ASIC is complete; the graphene is transferred, followed by further processing to pattern and connect the graphene locations to the contacts in the ASIC.

A significant benefit of this is that they have multiple sensing sites on a single chip (with multiple chips in the probe), making precise detection of dopamine concentration easier. I’m also assuming that, because this measures a process involving whatever molecules are nearby, the whole purging concept after a measurement doesn’t apply here (as opposed to the prior paper, where a molecule must adhere or adsorb onto the sensor – and from which it will subsequently need to be dislodged).


And there you go. Still lots of work ongoing in sensor-land. It might be incremental work, but that’s what takes interesting concepts and gradually makes them usable in a wider range of applications.


More info:

ISSCC proceedings. (Not available online; you’ll need to contact IEEE or a friend if you don’t already have them.)

