Jan 05, 2015

Shootout at the FinFET Corral

posted by Bryon Moyer

It’s high noon at IEDM. Both Intel and IBM have “late-breaking news” with their 14-nm FinFET numbers. The giant room is filled to bursting capacity. I’m lucky enough to have some space along the side wall, far from the screen. So far, in fact, that much of what’s on the screen is completely illegible.

Oh, and did I mention photography is not allowed? So… you can’t see the information, you can’t record it even if you saw it… you could busily write what little you can see but then you’re not listening… Oh well, the paper is in the proceedings and I should be able to get the slides after the fact. Right?

Nope. IBM politely declined. Intel didn’t respond at all. (Good thing the proceedings have contact information…) So if the paper is the only record of what happened, then why bother with the presentation? Except for those in the center of the room…

Yeah, I was frustrated, since the presentations can give you a better sense of context and perspective than the paper alone – but only if you have a photographic memory. And I don't. (And it's getting less photographic with each passing day.) There were definitely points made in the presentations that are not in the paper… so I can't report them.

The whole deal here is Intel’s 14-nm bulk-silicon process vs. IBM’s 14-nm SOI process. And here’s the major takeaway: cost and performance have improved. Moore’s Law, reported as dead at the leading nodes, has taken a few more breaths. It’s just like the good old days, when area shrank enough to make up for increased costs and performance gained substantially.

I was going to compare some numbers here, but the reporting is too spotty to find figures that both companies provided in their papers. For instance, IBM reports a 35% performance improvement over 22 nm; as far as I can tell, Intel mentioned a performance improvement in the presentation but didn’t put it in the paper. (I assume that’s intentional.)

Some notable process points:

  • IBM
    • Has a dual-work-function process that allows optimizing both low- and high-VT devices without resorting to doping. No details provided on that process.
    • 15 layers of copper
    • Includes deep-trench embedded DRAM.
  • Intel
    • Uses sub-fin doping.
    • The fin is now much more rectangular than in their previous generation.
    • 13 interconnect layers
    • They use air-gapped interconnects: pockets of air between lines on select metal layers that reduce capacitance by 17%. They were not willing to discuss how they do the air-gapping, just that they do.
    • Their random variation for VT, which grew from node to node for many nodes, is almost down to where it was at the 90-nm node.

Selected images and data follow…

[Suggestion to IEDM: require that presentations be made available. They shouldn’t be presenting material if they don’t have the cojones to stand behind it after the presentation…]

Cross-sections:

IBM:

IBM_photo.png

Intel:

Intel_photo.png

Pitches:

IBM:

IBM_table.png

Intel:

Intel_table.png


Transistor performance:

IBM:

IBM_xstor.png

Intel:

Intel_xstor.png

All images courtesy IEDM.

Dec 30, 2014

Gestures Stalling?

posted by Bryon Moyer

The Touch Gesture Motion conference (TGM) covers various technologies related to up-and-coming human-machine interface approaches. And its middle name is “Gesture.” How we doin’ there?

Well, first off, some of the consistent names in gesture – regular faces in past years – were not present this year. That caught my eye. And then there was an interesting presentation providing evidence that consumers aren’t delighted with gesture technology. Another red flag.

So let’s look at some evidence and then go over some of the challenges that gesture technology may need to overcome.

I personally only have one piece of evidence, which, scientifically, would be considered not evidence, but an anecdote. I wrote about it before: answering a phone call overlapped with a hang-up gesture. Yeah, you can see where that went.

But there’s another source: a company called Argus Insights monitors… um… well, online social discussion. And they intuit from that how people are feeling. Note that this doesn’t really provide information on why folks are reacting the way they are; it simply provides the reaction.

They get this by mining the social media buzz surrounding various products. They check not only the amount of discussion, but they also characterize whether it’s positive or negative. For instance, they found that the Samsung Galaxy S3 started with a 0.75 “delight” rating, but the S4 had a rather rocky debut, starting as low as 0.25 and eventually crawling up to about 0.70 or so. Later, the S5 nailed it at around 0.85 or so out of the chute, declining to around 0.8.

Depending on how they mine this stuff, they extract information on different aspects of technology. I’m not privy to the details of how they do the extraction (if they were my algorithms, I certainly wouldn’t make them public), so I can’t swear as to the accuracy, but folks are listening.

And here’s what Argus says about gestures: consumers are not thrilled. The following chart shows consumer reaction to touchscreens, touchscreen responsiveness specifically, and gesture recognition – and the latter shows a pretty dramatic dropoff.

Gesture_falling.png

Graph courtesy Argus Insights

While this data doesn’t provide cause, other presentations and discussions from the conference can shed some light. In fact, it’s not hard to see why gestures might be a problem.

John Feland, cofounder and CEO of Argus Insights, related one incident where he was consulting with a system house, and they declared, “We should assemble a vocabulary of 35 gestures!” as a response to other systems having growing gesture vocabularies. As if the number of gestures defined success. As you might imagine, Mr. Feland advised against that.

Why? Because who wants to memorize 35 gestures? OK, perhaps it’s possible – if we, as a culture, standardize on gestures and start teaching kids at an early age, the way we teach keyboarding today. It becomes ingrained, and we carry it with us for the rest of our lives.

But that’s not what’s happening. Each system maker has its own vocabulary. Those vocabularies are enabled, typically, by separate companies specializing in gesture technology. Those providers each have different vocabularies. And those vocabularies sometimes relate to the technology used to see the gestures. Is it ultrasound? A camera? What camera technology?

So it’s not simply a matter of learning 35 gestures. In fact, let’s drop the issue of too many gestures; let’s assume there are, oh, eight. That’s not many – especially with symmetries (up/down/left/right are probably – hopefully – analogous). But if you have two tablets in the house and three phones and an entertainment system, each of which has eight gestures, and they’re all a different set of eight gestures, then you have to remember, for each system, which gestures do what. Kids, with their annoyingly plastic minds, can probably do that. Adults? Not so much. (OK, we could. But we’re old enough to have other things to do with our time and gray matter.)

Of course, the solution is to standardize on eight gestures to be implemented throughout the industry. Yeah, you can imagine how fun that discussion would be. In addition to picking the eight, you’d also want to be culturally sensitive, meaning a different eight for different cultures, meaning also defining which cultures get their own and where the boundaries will be. Great rollicking fun for the entire family to watch if UFC isn’t on at the moment.

And it’s not just the gestures themselves. There are also… what to call them… framing issues. How do you end one gesture and start another? One system might do gestures all in a single plane; in that case, pulling your hand back towards you could be interpreted as ending a gesture. But another system might use a pulling-towards-you gesture for zooming, with some other way of indicating that the gesture is complete.

My own observation is that gesture technology has largely been viewed as a cool thing to bolt onto systems. And let’s be clear on this: it is cool. At least I think it is. That simple cameras or other devices can watch our hands and sort out what we’re doing in complicated scenes and settings is really amazing.

But it also feels like we’ve added them to systems in an “Isn’t this cool??” manner instead of an “Isn’t this useful??” way. And consumers like cool for only so long, after which they get bored – unless it’s also useful. That would be consistent with higher satisfaction early on and then a drop-off.

Probably the biggest question ends up being, is it useful enough to generate revenues that will fund the further development and refinement of the technology? That value question has also not been unambiguously decided one way or the other.

So there are lots of data points here; they all suggest that there’s more to be done. I’ll leave it to the participants in this battle to decide the best fixes… or you can add your own thoughts below.

Dec 18, 2014

IoT Business Objects

posted by Bryon Moyer

We do this thing here where we occasionally take stock of the structure of the Internet of Things (IoT) to try to make sense of the various pieces that come together to work or compete with each other. And I usually try to generalize or abstract some of the mess into a broader structure that’s hopefully easier to parse (or at least offers an easier entry point).

We did that a while ago when looking briefly at Xively. Well, another opportunity came about when I was contacted by a company called Zebra regarding their IoT infrastructure offering, Zatar (not sure whether that comes from za’atar [the apostrophe representing a pharyngeal consonant that Latin script can’t really capture], which would give it a flavorful veneer). And my usual first question is, “Where does this fit in the high-level scheme of things?”

Zatar would appear to implement business objects, although they use a different vocabulary, referring to their abstractions of devices as “avatars.” So they would appear to play at a higher level than, say, Xively. As with any high-level entity, however, it’s built on a stack below it. One of the top-level supporting protocols they use is OMA’s Lightweight M2M protocol (LWM2M).

I did some brief digging into LWM2M, and I’m glad they have a whitepaper, because they don’t have a single protocol doc. Instead, they have a collection of chapters (dozens of them), all sorted in alphabetical order, so it’s really tough to tell which (if any) is the top-level document to start from. I may dig into this protocol more in the future.

But, at a high level, with Zatar and LWM2M, I’m refining how I think of the “business objects” layer. In general, this layer is where specific object semantics exist: thermostats vs. door locks vs. washing machines. Below it, only generic messages exist, with meaning that’s opaque to the protocol.

It appears that LWM2M enables the notion of an object without standardizing specific objects. So it lets you create an abstract entity and give it properties or interactions – essentially, an API – without saying what the specifics should be.

Zatar comes pre-equipped with a base avatar from which users can define their own specific ones. This is done without any explicit coding. By contrast, other folks (like Ayla Networks, from a while back) include pre-defined objects. So I’ve split the “business objects” concept into two layers: generic and specific. The generic layer simply enables the concept of a business object; the specific layer establishes the details of an object.

So, for instance, given a generic capability, three lighting companies could go and define three different models or objects representing lighting, each of which would adhere to the generic protocol. If someone wanted to standardize further – say office management folks got tired of having to figure out which lighting protocol various pieces of equipment followed – then someone could go further and standardize a single lighting protocol; this would be a specific standard.

It’s important to keep in mind, however, that LWM2M is a protocol standard, while Zatar is not; it’s a product that builds on that and other protocols.

Biz_object_drawing.png

The other thing that Zatar has is an enterprise focus. We’ve peeled apart a bit the notions of the consumer IoT vs. the industrial IoT, but the notion of yet a third specialized entity, the enterprise IoT, is something I haven’t quite come to grips with. Part of it is simply a matter of scale – large entities with lots of data that has to be shared globally. This bears further investigation; watch these pages over the next few months for more on that.

One other last point: saying that these products and standards simply implement business objects is a gross over-simplification. As you can see if you browse the OMA docs – or even just from the following figure from Zebra – there are many, many details and supporting services and applications that get wrapped up in this. For LWM2M, that includes lower-level concepts of interaction over various networking media and how, for instance, browsers should behave. For Zatar, there’s the cloud service and other applications. I’m almost afraid to try to abstract some of this underlying detail. We’ll see…

Zatar_figure.png

Meanwhile, you can learn more about the specifics of Zatar here; you can learn more about OMA’s LWM2M protocol here.
