Oct 24, 2014

If This Is a Conference, then It Must Be November

posted by Bryon Moyer

November would appear to be Conference Month.

Of course, no one has conferences in the summer. Oh, wait! Semicon West does! OK, well, most folks assume no one is home during summer, so they wait until fall. Can’t do December because of the holidays… September, well, everyone is getting back from summer… October? Yeah, a few sprinkled here and there. And then there’s November.

Conferences are a dime a dozen these days. Some are big single-company affairs (Intel Developer Forum and ARM TechCon, for example). Others are put on by organizations, some venerable, some newly sprouted, that have decided events can be lucrative.

So, given all the conferences, which ones to go to? Your interests and mine should be broadly aligned, since you’re looking for new technology to help with your work and I’m looking for interesting stories about new technology that will help you with your work. Given all of the overlapping conferences, I’ve been pretty choosy about which ones to attend. That’s not to say that the ones I’ve declined aren’t worthwhile; it’s more that I’m expecting the ones I’ve picked to be particularly worthwhile.

Here’s what I’m looking forward to over the next several weeks.

  • Touch-Gesture-Motion in Austin 10/29-10/30: this has been my go-to event for, well, touch, gesture, and motion interface technologies. Put on by IHS, it normally ends up being a good two-day overview of what’s happening in those industries. It’s why most of my touch and gesture pieces come out in the December/January timeframe.
  • ICCAD in San Jose 11/2-11/6: this IEEE/ACM-sponsored EDA conference seems to have picked up steam over the last couple of years. It seems to be the second venue where EDA folks focus some announcement attention.
  • The MEMS Executive Congress in Scottsdale 11/5-11/7: This is the annual who’s-who confab of the MEMS industry, put on by the MEMS Industry Group. While there are MEMS- or sensor-related shows sprinkled throughout the year, this one takes a higher-level view, and it also becomes a focal point for announcements. Amelia and Kevin will also be there.
  • TSensors in San Diego 11/12-11/13: Organized by MEMS veteran Dr. Janusz Bryzek, this is the follow-on to last year’s initial TSensors meeting. There have been others since then in different parts of the world; I believe this to be the flagship event, where we’ll get the latest on efforts to bolster the sensor market.
  • IDTechEx in Santa Clara 11/19-11/20: IDTechEx is actually the organizing entity, but it’s easier to say that than to name all of the collocated conferences happening during those two days. My focus will be on Energy Harvesting and the Internet of Things, but there will also be sessions on wearable electronics, printed electronics, supercaps, graphene, and 3D printing.
  • IEDM in San Francisco 12/15-12/17: This IEEE conference is the go-to event for transistors and other basic devices. No tradeshow this: it’s serious technology, not for the faint of heart. If you think you really know your stuff, then come here for an instant humbling. Apparently this year will feature, among other things, a face-off between Intel (FinFET on bulk) and IBM (FinFET on SOI). I can hardly wait!

If you’re there and notice me lurking in the shadows, don’t hesitate to say hello.

Oct 23, 2014

Elliptic Labs’ 3rd Gesture Dimension

posted by Bryon Moyer

Some time back we briefly introduced Elliptic Labs’ ultrasound-based gesture technology. They’ve added a new… layer to it, shall we say, so we’ll dig in a bit deeper here.

This technology is partially predicated on the fact that Knowles microphones, which are currently dominant, can sense part of the ultrasonic range. That means you don’t necessarily need a separate microphone to include an ultrasound gesture system (good for the BOM). But you do need to add ultrasound transmitters, which emit the ranging signal. They do their signal processing on a DSP hub, not on the application processor (AP) – important, since this is an always-on technology.
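
To make the ranging idea concrete, here’s a minimal sketch of pulse-echo time-of-flight estimation, the basic operation underneath ultrasound gesture sensing. It’s illustrative only: the function, sample rate, and processing are my assumptions, not Elliptic Labs’ actual algorithm, which runs on the DSP hub.

    import numpy as np

    SPEED_OF_SOUND = 343.0   # m/s in air at room temperature
    SAMPLE_RATE = 192_000    # Hz; an assumed ultrasound-capable capture rate

    def estimate_distance(tx_pulse, mic_samples):
        """Estimate distance to the nearest reflector by pulse-echo time of flight."""
        # Cross-correlate the known transmitted pulse against the mic capture
        corr = np.correlate(mic_samples, tx_pulse, mode="valid")
        echo_index = int(np.argmax(np.abs(corr)))   # sample offset of the strongest echo
        round_trip_s = echo_index / SAMPLE_RATE     # emission-to-return delay in seconds
        return SPEED_OF_SOUND * round_trip_s / 2.0  # halve for the one-way distance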

With that in place, they’ve had more or less a standard gesture technology, just based on a different physical phenomenon. They see particular advantage for operation in low light (where a camera may be blind), full sun (which can also blind a camera), and where power is an issue: they claim to use 1/100th the power of a camera-based gesture system. So… wearables, anything always-on. As long as you don’t need the resolution of a camera (which, apparently, they don’t for the way they do gestures), this competes with light-based approaches.


What they’ve just announced is the addition of a 3rd dimension: what they’re calling multi-layer interaction (MLI). It’s not just the gesture you perform, but how far away from the screen you perform it, or the angle you’re at relative to the screen.

For instance, as your hand approaches from far away, at some point the device wakes up. Come in further and it goes to the calendar; further still takes you to messages; and finally on to email. Of course, Elliptic Labs doesn’t define the semantics of the gestures and positions; an equipment maker or application writer would do that.
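
Reduced to code, the layering might look something like the following sketch. The thresholds and action names are invented for illustration; as just noted, the actual semantics would come from the equipment maker or app writer.

    # Hypothetical distance bands (meters) mapped to UI actions; outermost first.
    LAYERS = [
        (0.60, "wake"),      # hand first detected far out: wake the device
        (0.40, "calendar"),  # closer: show the calendar
        (0.25, "messages"),  # closer still: show messages
        (0.10, "email"),     # nearly at the screen: open email
    ]

    def action_for_distance(distance_m):
        """Return the action for the innermost layer the hand has crossed."""
        action = None
        for threshold, name in LAYERS:
            if distance_m <= threshold:
                action = name  # inner layers override outer ones
        return action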

And it strikes me that, while this adds – literally – a new dimension to the interface, the semantic architecture will be critical so that users don’t have to mentally map out the 3D space in front of their screen to remember where to go for what. There will have to be a natural progression so that it will be “obvious.” For example, if you’ve gotten to the point of email, then perhaps it will show the list of emails; you can raise and lower your hand to scroll, and then go in deeper to open a selected email. Such a progression would be intuitive (although I use that word advisedly).
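
The scrolling step in that progression could be as simple as a linear mapping from hand height to a position in the list, something like this sketch (the height range is, again, an invented placeholder):

    def scroll_index(hand_height_m, list_length, low=0.10, high=0.40):
        """Map vertical hand position to an index into the email list."""
        frac = (hand_height_m - low) / max(high - low, 1e-6)
        frac = min(max(frac, 0.0), 1.0)                 # clamp to the sensing range
        return round((1.0 - frac) * (list_length - 1))  # hand raised high = top of list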

A bad design might force a user to memorize that 1 ft out at 30 degrees left means email, at 30 degrees right means calendar, 90 degrees (straight out) at 2 ft opens Excel, and… and… A random assignment of what’s where, one that simply has to be memorized, would be an unfortunate design. (And, like all gesture technologies, care has to be taken to avoid major oopses…)

Note that they don’t specifically detect a hand (as opposed to some other object). It’s whatever’s out there that it registers. You could be holding your coffee cup; it would work. You could be using your toes or a baseball bat; it would work.

You can also turn it off with a simple gesture so that, for example, if you’re on your phone gesticulating wildly, you don’t inadvertently do something regrettable in the heat of phone passion. Or in case you simply find it annoying.

You can find out more in their announcement.

 

(Image courtesy Elliptic Labs)

Oct 21, 2014

What Does ConnectOne’s “G2” Mean?

posted by Bryon Moyer

ConnectOne makes WiFi modules. And they recently announced a “G2” version. Being new to the details of these modules, I got a bit confused by the number of products bearing the “G2” label as well as the modes available – were they all available in one module, or were different modules for different modes? A conversation with GM and Sales VP Erez Lev helped put things in order.

As it turns out, you might say that ConnectOne sells one WiFi module in multiple form factors. Among the different modules I saw, it was the form factor – pins vs. board-to-board vs. SMT; internal vs. external antenna – that differed, not the functionality.

There are multiple modes that these modules can take on, and these are set up using software commands that can be executed in real time. So this isn’t just a design-time configuration; it can be changed after deployment in the field. (A rough sketch of what that might look like follows the list below.)

The modes available are:

  • Embedded router
  • Embedded access point
  • LAN to WiFi bridge
  • Serial to LAN/WiFi bridge
  • Full internet controller
  • PPP emulator
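
Here’s the promised sketch of runtime mode switching from a host processor. The command strings are placeholders of my own, not ConnectOne’s actual command syntax; the module’s programming guide defines the real commands.

    import serial  # pyserial

    # Placeholder command strings -- ConnectOne's real command set differs.
    MODE_COMMANDS = {
        "router":        b"SET MODE ROUTER\r\n",
        "access_point":  b"SET MODE AP\r\n",
        "lan_bridge":    b"SET MODE LAN_BRIDGE\r\n",
        "serial_bridge": b"SET MODE SERIAL_BRIDGE\r\n",
        "internet_ctrl": b"SET MODE INET\r\n",
        "ppp":           b"SET MODE PPP\r\n",
    }

    def switch_mode(port, mode):
        """Send a mode-switch command to the module over its serial channel."""
        with serial.Serial(port, baudrate=115200, timeout=2) as link:
            link.write(MODE_COMMANDS[mode])
            return link.readline()  # module's acknowledgement line, if any

    # e.g.: switch_mode("/dev/ttyUSB0", "access_point")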

But what about this “G2” thing? Their first-generation modules were based on Marvell’s 8686 chip. And that chip has been end-of-lifed. Or, perhaps better said, it’s been 86ed. So in deciding where to go next, they settled on a Broadcom baseband chip – something they said gave Broadcom a boost in an area they’re trying to penetrate.

(G2N2 module, top and bottom views)

But the challenge was in making this change transparent to users: existing software has to invoke the new chip just as it did the old one, and that took a fair bit of work. They say they were successful, however, so that upgrading from the older to the newer version takes no effort; it just plugs in.
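
The general pattern at work here, keeping the host-facing API fixed while swapping the radio underneath, might look like this sketch. The class and method names are mine, purely for illustration; ConnectOne’s actual firmware structure isn’t public.

    class WifiRadio:
        """Chip-specific backend; host software never touches this directly."""
        def connect(self, ssid, key):
            raise NotImplementedError

    class Marvell8686Radio(WifiRadio):   # the G1 backend, now end-of-lifed
        def connect(self, ssid, key):
            ...  # chip-specific bring-up and association

    class BroadcomRadio(WifiRadio):      # the G2 backend, adds 802.11n
        def connect(self, ssid, key):
            ...  # different chip, same contract

    class Module:
        """Host-facing API: identical across G1 and G2."""
        def __init__(self, radio):
            self._radio = radio

        def join_network(self, ssid, key):
            return self._radio.connect(ssid, key)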

So “G2” reflects this move to the Broadcom chip as their 2nd-generation module family. From a feature standpoint, the big thing it gets them is 802.11n support. But they also have a number of unexposed features in their controller. Next year they’ll be announcing a “G3” version, with higher performance and… well, he didn’t share all of what’s coming. But G3 will have all of the same pinouts, form factors, APIs, etc. for a seamless upgrade from G2 (or G1, for that matter).

You can get more detail in their announcement.

 

(Image courtesy ConnectOne)
