Jul 09, 2015

Motion for User Interfaces

posted by Bryon Moyer

We’ve looked before at ways of controlling machines with just your hands in the air, like you just don’t care. No touchy-feely, no mouse. Just jazz hands.

So at first, when I saw a demo of what we’re going to talk about today, I thought, “OK… this looks kinda like what I was seeing demonstrated a couple years ago by companies like eyeSight and PointGrab.” And yet it also had a flavor of what I’d seen with Movea and Hillcrest, except that their technologies involved remote controls doing what just hands were doing in this case.

But what I was seeing wasn’t either of those technologies at work. Making it yet more confusing, this isn’t about a particular sensing technique – optical, touch, whatever. And yet it is about motion and location. While the announced technology may be brand new, you would probably have to use it to sense the difference. I was watching the demo on a screen, so I frankly had to ask a lot of questions to figure out why this wasn’t just another gesture-recognition announcement a few years after all the other ones.

I’m talking about Quantum Interface’s new interface called “Qi*.” It’s a way of taking location information and using changes in that location to model motion – and, in particular, to predict where that motion is going and then turn that prediction into information that a user interface can use. The result is, they say, smoother and faster navigation through user interfaces of any kind. Because of the prediction, you don’t have to “complete” motions as much; a little move in a direction will get you where you want to go faster than if you had to, say, track your hand all the way in front of you.
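To make the idea concrete (and to be clear, this is my own illustration, not Quantum Interface’s actual algorithm – every name in it is made up), here’s a minimal Python sketch: take timestamped location samples from whatever sensor you have, estimate a motion vector, extrapolate it a fraction of a second forward, and pick the on-screen target that the motion appears to be heading toward.

```python
import math

def predict_target(samples, targets, lookahead=0.3):
    """Guess which UI target the user is moving toward.

    samples   -- list of (timestamp, x, y) location samples, oldest first
    targets   -- dict of target name -> (x, y) screen position
    lookahead -- how far ahead (in seconds) to extrapolate the motion
    """
    if len(samples) < 2:
        return None

    (t0, x0, y0), (t1, x1, y1) = samples[-2], samples[-1]
    dt = t1 - t0
    if dt <= 0:
        return None

    # Velocity from the two most recent samples (a real system would filter/smooth).
    vx, vy = (x1 - x0) / dt, (y1 - y0) / dt

    # Extrapolate the position a little into the future.
    px, py = x1 + vx * lookahead, y1 + vy * lookahead

    # The target closest to the predicted position is the likely destination.
    return min(targets,
               key=lambda name: math.hypot(targets[name][0] - px,
                                           targets[name][1] - py))
```

With a menu laid out as, say, `{"Music": (100, 40), "Nav": (300, 40)}`, a short flick toward “Nav” is enough for the prediction to land on it before the hand actually gets there – which is the sense in which a small move beats fully tracking your hand.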

This location-only input doesn’t involve any gestures. It’s not about specifically identifying a gesture – whether a static hand shape or a motion pattern that a user has to learn. It’s simply about, say, moving your hand or putting a finger on a surface and letting a well-constructed interface make the next movement obvious. Under the hood, the motion is turned into commands: that is specifically the part Qi does do.

It’s often about navigating menus; you move toward a menu that pops open, and then you settle on (or towards) an item and push your finger towards the screen and it takes you to a next-level menu, and so forth. All more quickly and smoothly than older approaches.

But here’s another subtle part: this is a mid-layer piece of technology. It lives above hardware – it will take location information from any system that can provide it, whether touch or optical (gesture or eye tracking or…) or whatever. It improves with multiple location sensors providing inputs.
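To picture what “lives above hardware” means in practice, here’s a hypothetical sketch of such a middle layer (again my own construction, not Qi’s API): any sensor that can produce a position implements one small interface, and the motion/prediction layer consumes the samples without caring where they came from.

```python
from abc import ABC, abstractmethod

class LocationSource(ABC):
    """Anything that can report a position: touch panel, hand tracker, eye tracker..."""

    @abstractmethod
    def read(self):
        """Return the latest (timestamp, x, y) sample, or None if nothing new."""

class TouchSource(LocationSource):
    def __init__(self, panel):
        self.panel = panel                  # hypothetical touch-panel driver

    def read(self):
        return self.panel.last_contact()

class HandTrackerSource(LocationSource):
    def __init__(self, tracker):
        self.tracker = tracker              # hypothetical optical hand tracker

    def read(self):
        return self.tracker.hand_position()

def collect(sources):
    """Gather one sample from each available sensor; more sources give the
    prediction layer more to work with."""
    samples = (source.read() for source in sources)
    return [s for s in samples if s is not None]
```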

It’s also not built into any specific user interface (UI): designers of interfaces can tap the information that Qi provides to drive the interface. Quantum Interface has a fair bit of experience using Qi to build UIs, so they do work with their partners in that realm, but that’s about using Qi; it isn’t Qi itself.

This middleness also makes it system-agnostic: you can create a consistent interface for different app platforms – say, phone, watch, and tablet – and tweak only for the details and resources available on that platform. Somewhat like skinning.

Not sure if I’ve said more about what Qi isn’t than what it is, but both are important since the nuances of what’s new are, well, nuanced. You can find more in their announcement.

 

 

*Regrettably, even given China’s large electronics footprint, where they would pronounce that “chee,” and given the wireless power technology Qi, pronounced “chee,” this is not pronounced “chee”: according to the press release, it’s pronounced like its initials, QI (“cue eye”), even though they opted to make the I lower case…

 

Image courtesy Quantum Interface

Jul 07, 2015

Auto-Updating Autos

posted by Bryon Moyer

So you’re merrily toodling around rugged mountain roads – the kind with a 3000-foot cliff up on one side and a 3000-foot cliff down the other side. No room for error. Which is why you’re toodling rather than drifting around those curves. A few inches of margin on each side of the car, and little visibility of other traffic – much of which feels emasculated by toodling. Just gotta keep moving until things straighten out.

And then you see it: a message comes up on your console saying, “Operating system has been updated. Restart required in 10…9…8…”

Thankfully, that’s unlikely to happen. The old PC model of, “Any work you’re doing can’t be as important as updating and restarting RIGHT NOW so stop whining about the unsaved files we trashed” has hopefully been a learning experience for newer devices (even if it’s still the norm on PCs).

But cars are increasingly made of software. According to Movimento, a self-driving vehicle can have 500 million lines of code – 100 times the amount of code in a Boeing 777. And some of it matters a lot – the parts that keep you on the road and in one piece – and other parts less so (the parts that keep the kids happy in the back seat, although some might argue that one without the other is useless).

And, as with all things soft, there will be updates. Heck, that’s part of the reason for using software: so that you can update. But how do you manage updating dozens of electronic control units (ECUs – the mini-embedded systems in the car) implemented by any of many subcontractors, possibly employing any of a number of components that will affect the update?

Movimento has just announced a platform for managing updates, allowing multiple ECUs to be updated individually or at the same time. Sound simple? It’s not as simple as you might think; for example, the platform has to handle things like analytics and discovery to figure out which of a variety of possible components is actually installed in a given car – and what restart requirements, if any, apply to different updates.

That last bit forms something of a spectrum. On one end, they see possible constant little updates to algorithms happening silently as you drive. An experimental example they cite is that, if you hit rain, that information spurs an update in noise cancellation algorithms – in your car and in the cars behind you that have not yet hit the rain. Other updates might require a couple minutes of downtime.

At the extreme end are updates to safety-critical software. Such updates must typically be made by wired connection at the dealership. So they would not be part of an over-the-air update scheme.
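As a sketch of the kind of bookkeeping such a platform has to do (the names and categories here are mine, not Movimento’s), each update package could record which component variant discovery found in this particular car and where it falls on that restart spectrum, so the manager only applies what the vehicle’s current state allows:

```python
from dataclasses import dataclass
from enum import Enum

class Restart(Enum):
    NONE = "silent"      # can be applied quietly while driving
    BRIEF = "brief"      # needs a couple of minutes of downtime, parked
    DEALER = "dealer"    # safety-critical: wired update at the dealership only

@dataclass
class EcuUpdate:
    ecu_id: str          # which controller the package targets
    component: str       # the variant discovery found in *this* car
    version: str
    restart: Restart

def applicable_now(updates, vehicle_moving):
    """Filter pending updates down to those the car can take right now."""
    ok = []
    for u in updates:
        if u.restart is Restart.DEALER:
            continue                 # never pushed over the air
        if u.restart is Restart.BRIEF and vehicle_moving:
            continue                 # wait until the car is parked
        ok.append(u)
    return ok
```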

Their current focus is the head unit/center stack and the instrument cluster. Supported wireless options, decided at auto design time, can include cellular, Bluetooth, and satellite. The wireless gateway connects to the internal CANbus.

They use a dual-bank approach that keeps a prior version handy for rollback in case there’s an issue with the update; depending on what’s stored, that could mean going back to the immediately previous version or to the original factory version.
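The dual-bank idea itself is a well-worn embedded pattern: write the new image into the inactive bank, verify it, switch over, and leave the old bank untouched so you can switch back. Here’s a minimal sketch of that generic A/B logic (my own illustration, not Movimento’s implementation):

```python
import hashlib

class DualBankFlash:
    """Two firmware slots: one active, one spare. New images always go into
    the spare slot, so the last known-good image survives for rollback."""

    def __init__(self, factory_image):
        self.banks = {"A": factory_image, "B": None}
        self.active = "A"

    def spare(self):
        return "B" if self.active == "A" else "A"

    def install(self, image, expected_sha256):
        # Verify the image before touching anything, then write to the spare bank.
        if hashlib.sha256(image).hexdigest() != expected_sha256:
            raise ValueError("image failed verification; keeping current bank")
        self.banks[self.spare()] = image
        self.active = self.spare()   # switch only after a good, verified write

    def rollback(self):
        # If the new image misbehaves, the previous bank is still there.
        if self.banks[self.spare()] is not None:
            self.active = self.spare()
```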

It’s interesting to hear their vision of where intelligent vehicles may lead. Despite the focus a few elections back on an “ownership society,” cars may also evolve into yet another thing we don’t own, but use as a service. To the point where we might have a profile somewhere online that lets us hop into any random vehicle, download the profile, and be whisked away (with the appropriate financial inducements, of course).

That has interesting implications for the carjacking business. And horror movies would no longer feature the car-that-doesn’t-start-when-most-needed. Instead, terrified teenagers would pile into the car and see their lives passing before their eyes as the car complains that it can’t find a connection or that the profile format is corrupted or that their monthly car-as-a-service payment is past due.

IoT-style analytics streaming out to the cloud, meanwhile, are likely to focus more on system health than on user behavior – even though user behavior seems to be one of the earliest use models, a la insurance companies. I guess that’s part of the nuance here: the automakers don’t care what you’re personally doing. Other companies, however, could be a different story.

You can read more in their announcement.

 

(Image courtesy Movimento.)

Jul 06, 2015

Goodbye Robert Dewar, Gary Smith

posted by Dick Selwood

 

In the last few days we have heard of the death of two major players.

The first is Robert Dewar, one of the towering figures of software in every sense of the word. As well as being an outstanding computer scientist, involved in language design and compiler design – particularly the GNAT compiler for Ada – he was also a businessman, founding AdaCore, and an expert on the way copyright and patents affect software. He was a great evangelist for FLOSS – free/libre open-source software. I wrote about his views five years ago in License to bill, and I can still remember the hour-long phone call as though it were yesterday.

The second is Gary Smith, a fount of knowledge on EDA. After Dataquest (where he was Managing Vice President and Chief Analyst of the Electronic Design Automation Service, Design & Engineering Cluster) pulled out of covering EDA, he started Gary Smith EDA as a consulting and analysis company and built it into the first port of call for data and trends on EDA and its changes. He was an open and friendly person; I remember one DAC in San Diego where he spent a long taxi ride reminiscing about being in the Navy at the time of Vietnam.

The world is now a poorer and emptier place.
