posted by Bryon Moyer
Not long ago I noted the sudden appearance of various reference designs and platforms and kits intended to take some of the friction out of the process of adopting sensors, especially for the non-sensor-savvy.
Well, it wasn’t an isolated phenomenon: they keep coming.
Since then, I’ve noted the following:
- This one isn’t strictly a sensor kit, but it fits into the whole IoT picture: NFC. ST announced a “discovery kit” that “…contains everything engineers need to start adding NFC connectivity to any kind of electrical device…” It contains the tag, microcontroller, antenna, screen, joystick, and connectors. A premium version includes Bluetooth with audio out and a headset.
- InvenSense announced a wearable platform, which contains “…all of the key functions of a health and fitness wearable device…” Those would comprise motion and pressure sensors, microcontroller, Bluetooth Low-Energy, and their Automatic Activity Recognition software, which provides “always on” functionality.
- Movea announced a sensor hub kit for mobile devices. It’s a “…complete software and hardware package on a Nexus phone…” running Android 4.4 (KitKat). Quoting from their announcement, it includes the following functions (with power indicated on ST’s STM32F401 microcontroller):
- Significant motion detection (<40 mW)
- Step counting (<100 mW)
- Activity monitoring and context awareness (<300 mW)
- Cadency, speed and distance when walking and running
- Energy expenditure
- Context detection for walking, running and in transport
- Extensive library supporting a wide range of sports at >95 percent
- Pedestrian Dead Reckoning (<1.8 mW)
- Step cadency, distance, heading, floor detection
And I assume these won’t be the last… I’ll update occasionally as these fly over the transom.
posted by Bryon Moyer
One of the immense challenges of aggressive-node design is coping with all of the variation, both in the silicon itself, given processing variability, and in the operating conditions. The approach has been to find ways to model the variation and create a design that is robust under all the various combinations. Not easy, since each chip comes out of the fab slightly different from its siblings.
And if you want to operate the chip over a wider range, you’ve just made the problem harder. In particular, for a circuit that will operate under a wide range of VDD values, it’s crazy hard to implement the design of a complex circuit and have that single design work under all conditions.
One of the ways of dealing with this has been dynamic voltage and frequency scaling (DVFS), but many approaches to this rely on a static mapping of temperatures, voltages, and frequencies such that, for a given temperature, there is a fixed setting that the circuit will move to.
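That static mapping can be pictured as a simple lookup table: temperature bands, each tied to one fixed voltage/frequency setting chosen at characterization time. A minimal sketch, with invented band boundaries and operating points:

```python
# Hypothetical static DVFS mapping: each temperature band gets one fixed
# (VDD, frequency) setting, decided once at characterization time.
# All numbers here are made up for illustration.

DVFS_TABLE = [
    # (max_temp_C, vdd_volts, freq_mhz)
    (40,  0.90, 1200),
    (70,  1.00, 1000),
    (105, 1.10,  800),
]

def static_operating_point(temp_c):
    """Return the fixed (VDD, fMAX) setting for a measured temperature."""
    for max_temp, vdd, freq in DVFS_TABLE:
        if temp_c <= max_temp:
            return vdd, freq
    raise ValueError("temperature outside characterized range")
```

The limitation the post goes on to describe is baked into this structure: the table is the same for every chip, so it has to be conservative enough to cover the worst-case die.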
ST and Leti used an even more dynamic approach, one that arguably stops trying to characterize its way around the problem and, instead, asks each circuit when it’s about to go out of bounds. They did this for a DSP built on their ultra-thin body buried-oxide fully-depleted silicon-on-insulator (UTBB FD-SOI) process (yeah, that’s a mouthful).
The idea here is that, rather than designing for the various corners, they designed for typical case and then built sensors on the die to indicate when they needed to adjust voltage or frequency. They used two basic approaches, which they called CODA (ClOning DAta paths) and TMFLT (TiMing FauLT).
With CODA, they picked 16 representative critical paths and literally replicated them in pairs. One was a forward path, called a “canary” path (presumably because it’s an early indicator, like the famous canary in the coal mine). The path was then replicated in reverse so that it could be looped to oscillate; they could then measure that frequency directly. They issue a warning when the clock frequency gets to 1/(clone delay), and they correlate this through the frequency measured using the loop oscillation mode. They found that five of the pairs could predict the actual circuit fMAX within 3-4% with a 1-V supply.
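The warning condition itself is simple arithmetic: the looped clone’s oscillation frequency implies the clone’s propagation delay, and the alarm fires once the clock frequency reaches 1/(clone delay). A hedged sketch, where the assumption that one ring oscillation covers two path traversals (forward plus reverse) is mine, not from the paper:

```python
# Hypothetical sketch of the CODA warning condition. The cloned critical
# path is looped into a ring oscillator; its measured frequency implies
# the clone's delay, and a warning fires when the clock frequency
# reaches 1 / (clone delay).

def clone_delay_from_ring(ring_freq_hz, traversals_per_cycle=2):
    # Assumption: one ring oscillation traverses the path forward and in
    # reverse, so per-traversal delay is the period over the traversal count.
    return 1.0 / (ring_freq_hz * traversals_per_cycle)

def coda_warning(clock_freq_hz, ring_freq_hz):
    """True when the clock frequency has reached 1 / (clone delay)."""
    return clock_freq_hz >= 1.0 / clone_delay_from_ring(ring_freq_hz)
```

With a clone oscillating at 250 MHz (so a 2-ns per-traversal delay under the assumption above), a 600-MHz clock would trip the warning while a 400-MHz clock would not.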
The TMFLT circuits are quite different. They instrument 128 critical paths (although they may or may not be the most critical paths) with sensors that warn when slack time has decreased to 160 ps. They refer to these as TMFLT-S (S for “sensor”). While these can be activated by some pre-determined test pattern, they may not be activated during actual use. In other words, when conditions get tough (e.g., temperature heating up), you can’t necessarily rely on one of those paths just happening to be active so that it can warn you that timing is getting dicey.
So they created one more feature, a “programmable replica path” that doesn’t use any of the logic per se, but instead relies on a stored signature to set the delay. This is called TMFLT-R (R for “ring”). The way this signature is created is to run the TMFLT-S paths through the test pattern at, say, power-up. Power, frequency, and back bias are swept, finding the optimal points, and then measuring the corresponding TMFLT-R values and storing these signatures. During operation, the active frequency, voltage, and back bias can be measured, and the appropriate signature is used to set the TMFLT-R timing. So now TMFLT-R is acting as a proxy for all the TMFLT-S circuits, which may or may not be activated. Sounds complicated, but, at least at this very moment, I’ve convinced myself that it makes sense. (My brain’s relaxation constant is pretty quick, so all bets may be off in an hour.)
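The calibrate-then-look-up flow can be sketched in a few lines. Everything here — function names, the dictionary-keyed lookup, the idea that the stored value directly programs the replica’s delay — is my illustrative reading of the description above, not the actual implementation:

```python
# Hypothetical sketch of the TMFLT-R signature flow: at power-up, sweep
# voltage, frequency, and back bias while the TMFLT-S sensors run a test
# pattern; record the replica-path setting measured at each operating
# point. At run time, the current operating point selects the stored
# signature that programs the replica's delay.

signatures = {}

def calibrate(operating_points, measure_replica_setting):
    """Sweep the operating points and store a signature for each one."""
    for vdd, freq, bias in operating_points:
        signatures[(vdd, freq, bias)] = measure_replica_setting(vdd, freq, bias)

def replica_delay_setting(vdd, freq, bias):
    """Look up the signature for the current (VDD, frequency, back bias)."""
    return signatures[(vdd, freq, bias)]
```

The design point worth noticing is the proxying: the replica never executes real logic, so it can always be “on,” standing in for the 128 instrumented paths whether or not any of them happens to be active.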
What’s interesting about these approaches is that they allow operating conditions to be dynamically altered not based on some static algorithm determined at, say, characterization time, but by measuring what’s really going on in each individual circuit at any given time and looking for true indicators that performance is in danger.
They achieved a voltage range of 397 mV to 1.3 V. That’s more than a 1:3 range (compare that to old-school 4.5-V min circuits: the upper VDD would be, like, 13.5 V – which sounds crazy). fMAX was 460 MHz at the bottom end of the range and 2.6 GHz at the top end.
They talk about it in their release, and for those of you with ISSCC proceedings, you can get even more info in paper 27.1.
posted by Bryon Moyer
In various places where people track and discuss progress in the world of interconnected things, there is a surprising amount of debate over the meanings of terms that might otherwise be taken for granted.
Most often, you see a debate over the “internet of things” (IoT) as compared to “machine to machine” (M2M). And, in fact, M2M technology has been around for a long time, so some of the tone can be annoyance: “Hey folks, we’ve been doing this for a long time, there’s nothing new, and it’s got a name already: M2M, not IoT. Quit hijacking and hyping our technology.”
Well, I’m going to join the fray here with my opinion, and you can flay me if you disagree. (Just be gentle.) I’m going to toss in one other phrase that I saw included in one of the debates: the seemingly innocuous “connected device” (it’s the innocuous ones that all too often end up being not quite so innocent).
Let’s start with that one. A “connected device,” in my eyes, is simply one that can access the Internet. I suppose it doesn’t have to be the Internet – it could be some private server or something else. But… probably the Internet. The thing is, the device isn’t really talking to any other device; it’s just providing you access to information that resides somewhere outside itself.
The other two terms deal with devices that go online to interact with other devices. This is where most of the debate is. Much of the technology used for the IoT could well be the same as that used for M2M, so there’s room for lots of overlap there.
I think that if the IoT were really only about things talking to things, then you could argue that it was more or less the same as M2M. But in its more typical use cases, the IoT tends to involve people more than M2M does. The IoT is more like person-to-cloud-to-machine. It’s the person and cloud that feel different to me.
Of course, M2M must, in the limit, involve people. But a more classic industrial implementation of M2M would seem to consist primarily of machines and a local or private server (or server farm – and, despite the fact that such farms have been around forever, you’ll even see them being rebranded as “private clouds”). A factory or other industrial process can hum along nicely, with the Grand Algorithm keeping things optimal, all under the watchful eye of a Homer Simpson (or a more suitably qualified person).
That feels very machine-centric to me, as opposed to the refrigerator that can detect when it’s out of something so that some company can send you an ad on your phone. The IoT model feels to me like it’s more human-centric (or should be).
- Connected device: just a device with access to outside information
- M2M: machine-centric network where the endpoints are mostly machines
- IoT: mixture of machines and public cloud and people doing things that serve the needs of people more than they serve the needs of machines.
OK… bash away. Heck, you’d wonder if it even matters, but it’s amazing how much energy people can devote to this. I’m gonna go put on my flak jacket now.