posted by Bryon Moyer
One of the immense challenges of aggressive-node design is coping with all of the variations both in the silicon, given processing variability, and in the operating conditions. The approach has been to find ways to model the variation and create a design that is robust under all the various combinations. Not easy, since each chip comes out of the fab slightly different from its siblings.
And if you want to operate the chip over a wider range, you’ve just made the problem harder. In particular, for a circuit that will operate under a wide range of VDD values, it’s crazy hard to design a complex circuit and have that single design work under all conditions.
One of the ways of dealing with this has been dynamic voltage and frequency scaling (DVFS), but many approaches to this rely on a static mapping of temperatures, voltages, and frequencies such that, for a given temperature, there is a fixed setting that the circuit will move to.
ST and Leti used an even more dynamic approach, and one that arguably stops trying to characterize its way around the problem and, instead, asks each circuit when it’s about to go out of bounds. They did this for a DSP built on their ultra-thin body buried-oxide fully-depleted silicon-on-insulator (UTBB FD-SOI) process (yeah, that’s a mouthful).
The idea here is that, rather than designing for the various corners, they designed for the typical case and then built sensors on the die to indicate when they needed to adjust voltage or frequency. They used two basic approaches, which they called CODA (ClOning DAta paths) and TMFLT (TiMing FauLT).
With CODA, they picked 16 representative critical paths and literally replicated them in pairs. One was a forward path, called a “canary” path (presumably because it’s an early indicator, like the famous canary in the coal mine). The path was then replicated in reverse so that it could be looped to oscillate; they could then measure that frequency directly. They issue a warning when the clock frequency gets to 1/(clone delay), and they correlate this through the frequency measured using the loop oscillation mode. They found that five of the pairs could predict the actual circuit fMAX within 3-4% with a 1-V supply.
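The CODA idea above can be sketched in a few lines. This is a minimal illustration, not ST/Leti’s implementation: the function names, the assumption that one ring lap covers the forward and reverse copies of the path (so a single-path delay is half the oscillation period), and the warning margin are all mine.

```python
# Hypothetical sketch of a CODA-style check: the cloned critical path is
# looped into a ring oscillator, so its delay can be inferred from the
# measured oscillation frequency, and a warning fires as the clock
# approaches 1/(clone delay). Names and margins here are assumptions.

def clone_delay_from_ring(ring_freq_hz: float, paths_per_lap: int = 2) -> float:
    """Assume one lap of the ring traverses the forward and reverse copies
    of the path, so one path's delay is half the oscillation period."""
    return 1.0 / (ring_freq_hz * paths_per_lap)

def coda_warning(clock_freq_hz: float, ring_freq_hz: float,
                 margin: float = 0.95) -> bool:
    """Warn when the clock frequency exceeds a margined fraction of the
    1/(clone delay) limit for this path."""
    max_safe_freq_hz = 1.0 / clone_delay_from_ring(ring_freq_hz)
    return clock_freq_hz > margin * max_safe_freq_hz
```

For example, a ring oscillating at 1 GHz implies a 500 ps clone delay, so the path’s limit is about 2 GHz, and with a 95% margin the warning fires above 1.9 GHz.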
The TMFLT circuits are quite different. They instrument 128 critical paths (although they may or may not be the most critical paths) with sensors that warn when slack time has decreased to 160 ps. They refer to these as TMFLT-S (S for “sensor”). While these can be activated by some pre-determined test pattern, they may not be activated during actual use. In other words, when conditions get tough (e.g., temperature heating up), you can’t necessarily rely on one of those paths just happening to be active so that it can warn you that timing is getting dicey.
So they created one more feature, a “programmable replica path” that doesn’t use any of the logic per se, but instead relies on a stored signature to set the delay. This is called TMFLT-R (R for “ring”). The way this signature is created is to run the TMFLT-S paths through the test pattern at, say, power-up. Power, frequency, and back bias are swept, finding the optimal points, and then measuring the corresponding TMFLT-R values and storing these signatures. During operation, the active frequency, voltage, and back bias can be measured, and the appropriate signature is used to set the TMFLT-R timing. So now TMFLT-R is acting as a proxy for all the TMFLT-S circuits, which may or may not be activated. Sounds complicated, but, at least at this very moment, I’ve convinced myself that it makes sense. (My brain’s relaxation constant is pretty quick, so all bets may be off in an hour.)
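The calibrate-then-look-up flow above can be boiled down to a small sketch. Again, this is an assumption-laden illustration, not the actual TMFLT-R circuit or firmware: the function names, the shape of the operating-point key, and the measurement callback are all hypothetical.

```python
# Hypothetical sketch of the TMFLT-R signature flow: at power-up, sweep
# operating points while the TMFLT-S paths are exercised by the test
# pattern, record the replica (TMFLT-R) value at each point, then at run
# time use the stored signature for the measured operating point to set
# the replica path's delay. All names here are assumptions.

SLACK_WARN_PS = 160  # TMFLT-S warning threshold quoted in the article

def calibrate(operating_points, measure_tmflt_r):
    """Sweep (voltage, frequency, back_bias) tuples and store the replica
    value measured at each one as that point's signature."""
    return {point: measure_tmflt_r(point) for point in operating_points}

def replica_setting(signatures, measured_point):
    """At run time, return the stored signature for the current measured
    operating point; this programs the replica path as a proxy for the
    TMFLT-S paths, which may or may not be active."""
    return signatures[measured_point]
```

The replica then stands in for the 128 instrumented paths whenever none of them happens to be toggling.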
What’s interesting about these approaches is that they allow operating conditions to be dynamically altered not based on some static algorithm that was established at, say, characterization time, but by measuring what’s really going on in each individual circuit at any given time and looking for true indicators that performance is in danger.
They achieved a voltage range of 397 mV to 1.3 V. That’s more than a 1:3 range (compare that to old-school 4.5-V min circuits: the upper VDD would be, like, 13.5 V – which sounds crazy). fMAX was 460 MHz at the bottom end of the range and 2.6 GHz at the top end.
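A quick sanity check of the arithmetic above (the numbers are from the article; the check itself is just mine):

```python
# Verify the quoted supply range and the old-school analogy.
v_min, v_max = 0.397, 1.3   # volts, from the article

assert v_max / v_min > 3     # indeed more than a 1:3 range
assert 4.5 * 3 == 13.5       # a 4.5-V-min part scaled 1:3 would top out at 13.5 V
```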
They talk about it in their release, and for those of you with ISSCC proceedings, you can get even more info in paper 27.1.
posted by Bryon Moyer
In various places where people track and discuss progress in the world of interconnected things, there is a surprising amount of debate over the meanings of terms that might otherwise be taken for granted.
Most often, you see a debate over the “internet of things” (IoT) as compared to “machine to machine” (M2M). And, in fact, M2M technology has been around for a long time, so some of the tone can be annoyance: “Hey folks, we’ve been doing this for a long time, there’s nothing new, and it’s got a name already: M2M, not IoT. Quit hijacking and hyping our technology.”
Well, I’m going to join the fray here with my opinion, and you can flay me if you disagree. (Just be gentle.) I’m going to toss in one other phrase that I saw included in one of the debates: the seemingly innocuous “connected device” (it’s the innocuous ones that all too often end up being not quite so innocent).
Let’s start with that one. A “connected device,” in my eyes, is simply one that can access the Internet. I suppose it doesn’t have to be the Internet – it could be some private server or something else. But… probably the Internet. The thing is, the device isn’t really talking to any other device; it’s just providing you access to information that resides somewhere outside itself.
The other two terms deal with devices that go online to interact with other devices. This is where most of the debate is. Much of the technology used for the IoT could well be the same as that used for M2M, so there’s room for lots of overlap there.
I think that if the IoT were really only about things talking to things, then you could argue that it was more or less the same as M2M. But in its more typical use cases, the IoT tends to involve people more than M2M does. The IoT is more like person-to-cloud-to-machine. It’s the person and cloud that feel different to me.
Of course, M2M must, in the limit, involve people. But a more classic industrial implementation of M2M would seem to consist primarily of machines and a local or private server (or server farm – and, despite the fact that such farms have been around forever, you’ll even see them being rebranded as “private clouds”). A factory or other industrial process can hum along nicely, with the Grand Algorithm keeping things optimal, all under the watchful eye of a Homer Simpson (or a more suitably qualified person).
That feels very machine-centric to me, as opposed to the refrigerator that can detect when it’s out of something so that some company can send you an ad on your phone. The IoT model feels to me like it’s more human-centric (or should be).
- Connected device: just a device with access to outside information
- M2M: machine-centric network where the endpoints are mostly machines
- IoT: mixture of machines and public cloud and people doing things that serve the needs of people more than they serve the needs of machines
OK… bash away. Heck, you’d wonder if it even matters, but it’s amazing how much energy people can devote to this. I’m gonna go put on my flak jacket now.
posted by Bryon Moyer
In one of the early presentations at the Interactive Technology Summit last fall, Sensor Platforms’ Kevin Shaw gave a compelling presentation that wove together the concepts of always-on technology, context, and the disappearance of the interface: it should all happen transparently.
He painted a compelling picture of intelligent, benevolent always-on electronic eyes that watch us and learn who we are, what we want, and, critically, anticipate our next moves, practically laying out our suits for us before we even realize that we need one for an upcoming engagement.
I’ve heard this sort of thing before, so it wasn’t completely new. I didn’t have to zero in on every word, and so I was able to pay attention to other things. Like the fact that I was getting more and more stressed out as this idyllic scene unfolded. It was incongruous, and I started paying attention to why I was feeling increasingly unsatisfied with this utopian vision.
And then it hit me: I’m an introvert. I’m not built for “always on.” At the risk of gross over-simplification, extroverts are always on and appear to enjoy being on. So much so that it’s hard even to call it “on” if there is no “off.” Or at least that’s how it looks to us introverts.
By contrast, we introverts need down time. We turn on when required, but we need an occasional retreat, a breather, where we can be ourselves rather than being the public personae that the world demands, which can be a lot of work.
And we have secrets. Not necessarily dark ones, but we reserve portions of ourselves for ourselves and perhaps for a few others. It’s a little private carve-out we allocate. And for some machine to attempt to plumb that space is an affront, a violation.
Part of it is a desire to maintain some control over some aspects of our lives. Not necessarily to the extent of being a control freak, but simply reserving decisions to ourselves that our computers might aspire to take over. Predictability is anathema.
I remember in high school, during those years when I wanted to be invisible, ordering food from the café. One day the woman said, “Of course… you order the same thing every day.” I hated that. First, she had noticed me – I wasn’t invisible. Second, I was predictable. It made me want to occasionally shake things up just to keep others guessing a bit. Maintain some mystery, perhaps.
Replace that woman with a computer, and, well, you have an always-on context-aware machine that notices that I order the same thing for lunch every day. That doesn’t make me happy. As I’ve noted before, I am somewhat skeptical that context systems can really acquire the nuance necessary to avoid blundering about. I have yet to be convinced that they will realize that all individuals are individual, that what “most people do” isn’t what you or I may want to do, and that “what I usually do” may not be what I want to do right now.
And there’s the suspicion that, all of the promised benefits aside, the real money and motivation are in one field: advertising. In getting information about me to help advertisers drive me to constant consumption. That kind of system doesn’t really need to be as accurate as the kind of predictive system painted by context folks; close is ok for that. So I’ve got all of these eyes watching me for… what, so I can receive better ads?
But even backing out of these details, I think a lot of us would simply like to temper the concept of “always on.” How about “often on”? Or, better yet, “on when I say so”? Don’t “make my decisions for me”; how about “suggesting something and I’ll decide whether that’s what I want”? I would welcome systems like that. Unfortunately, current trends are in the opposite direction.
In the end, when we introverts have had enough of all of this, we can shut it all down, get away into a private place or the desert or the mountains, just us, no one watching, no decisions, taking control just enough to relinquish control to Nature, the final arbiter, both the creator and, ultimately, the destroyer of all we presume to create.