
Context Matters

But Determining Context is Hard

Computing engines have always been considered pretty dumb. Even when they’re in our smartphones. The classic articulation of this is that they do exactly what we tell them to do; the problem is that this may not be exactly what we want them to do.

A simple phone example is the infuriating obedience with which a phone flips to landscape orientation when it goes sideways – even when we’re lying on our sides trying to read the thing in portrait mode. If the phone could deal with the fact that orientation isn’t really with respect to the ground, but with respect to our head (head orientation being part of the viewing context), then it could make more intelligent decisions. But, as it is, we say “flip when the phone is sideways,” and it obeys unthinkingly.
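To make that concrete, here’s a minimal Python sketch of that decision. Everything in it is hypothetical – the function, the idea of a head-roll estimate (say, from the front camera), and the 45-degree threshold – since no real phone exposes inputs this cleanly:

```python
# Hypothetical sketch: decide rotation from the angle between the device
# and the viewer's head, not between the device and the ground.

def choose_orientation(device_roll_deg, head_roll_deg):
    """Return 'portrait' or 'landscape' relative to the viewer's head."""
    relative = (device_roll_deg - head_roll_deg) % 360
    if relative > 180:
        relative -= 360  # normalize to (-180, 180]
    return "landscape" if abs(relative) > 45 else "portrait"

# Lying on your side: device and head are both rolled ~90 degrees, so the
# relative tilt is ~0 and the screen stays in portrait.
print(choose_orientation(90, 90))  # -> portrait
print(choose_orientation(90, 0))   # -> landscape (head upright, phone sideways)
```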

An older attempt to deal with context – and a popular poster child for getting context wrong – was (or is?) Microsoft’s Clippy. You know, the annoying little dude that watches your actions, decides he knows what you’re doing and what you want, gets it completely wrong, and then winks with a self-satisfied smile while you try to take a soldering gun to his smug little wiry butt.

But context is back in vogue; Sensor Platforms gave a presentation at the Sensors Expo pre-symposium put on by the MEMS Industry Group. And I have to say, I’ve had a devil of a time deciding how context is fundamentally different from other, similar technologies – especially since Sensor Platforms is proposing including a context hub in low-power systems, so it’s not an academic discussion. (More on the hub thing in a minute.)

Context is more than sensors and data

At first blush, determining context can sound like an elaborate form of sensor fusion if you define sensors very broadly. I mean, really, all you’re doing is bringing all kinds of observational data together. So, for instance, if a front-facing camera sees that you’re on your side, then the phone can use that information to decide not to flip the screen orientation. That’s just fusing data from another sensor to make a more sophisticated decision. Or is that context?

We’re also not restricted to data generated by the device we’re using; we can get data from other devices – like fitness-monitoring gizmos – or other data in the cloud. But this starts to sound like what is already being done with respect to location: merging data from sensors and maps and GPS in one monstrous algorithm.

That’s often referred to as “data fusion,” as distinct from sensor fusion, since much of the data doesn’t come from a sensor; instead it might come from the cloud or an embedded map. So sensor and data fusion are already out there; why isn’t that the same as context?

The difference, at least as I see it, is that context is about deriving meaning from a restricted set of data based on other data – and it drives action. Sensor and data fusion are simply about finding a better answer to a specific factual question like, “Where am I?” By combining data sources, a better answer can be found. Context takes that one step further, potentially using that location information as part of your context. For instance, if your phone knew that you were inside a movie theater, then it might not ring.

Data fusion can play into this as well: if your phone could also get access to the movie schedule, then it might ring if the movie hadn’t started yet, but suppress the ring during the show.

Tying this to my context definition, the incoming call is the main event or “restricted data” of interest for this example, and the phone would normally salute and ring, period. By looking for “meaning,” it conditions that action on other inputs like your location and the movie schedule, answering the question, “Does the person want to know right now that there’s an incoming call?”
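A toy sketch of that conditioning might look like the following; the location label, the showtime window, and the should_ring function are all invented for illustration:

```python
from datetime import datetime

# Invented example: condition the 'ring' action on location context
# plus fused schedule data.

def should_ring(location, showtime_start, showtime_end, now):
    """Answer: does the person want to know right now about this call?"""
    if location != "movie_theater":
        return True  # normal context: just salute and ring
    # In the theater, fused schedule data refines the decision.
    in_show = showtime_start <= now <= showtime_end
    return not in_show  # ring only if the movie isn't playing yet

now = datetime(2013, 7, 1, 19, 30)
start, end = datetime(2013, 7, 1, 19, 15), datetime(2013, 7, 1, 21, 0)
print(should_ring("movie_theater", start, end, now))  # -> False: suppress
print(should_ring("office", start, end, now))         # -> True: ring
```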

Fundamentally, this is more about taking the right action than just answering a data question. It’s about getting away from having your computing engine do what you told it to do and, instead, having it do what you really want it to do. Sounds most excellent – except that figuring out what you really want is really tricky.

Context is messy

Big companies are spending lots of dollars on algorithms that sort through information to figure something out. Heck, just today Google reorganized my Gmail inbox, adding categories (ones that it came up with, of course, not something that I can control). And how it decided to do the sorting is completely beyond me; as far as I could tell, a monkey could have done as effective a job by tossing random emails in different boxes. Apparently Google is happy to look like they’re doing something useful when, in fact, they’re not. Clippy redux. (Good thing I rarely use Gmail… so I really don’t care.)

Other companies seem to be waiting until they actually get it right to release it. Which is to say, there’s not a lot of real context being done yet (to my knowledge). Part of that is simply because it is so hard. Our brains can take what we see and hear and feel and remember, and they instantly create a picture of what’s going on (although even this isn’t infallible). Computers can’t work quite that way; they need to have values given to a wide range of attributes in order to define context.

And it seems to me that this will be a bit of a jungle for a while. The problem is that the number of attributes to consider for any given problem, and the values they are allowed to take on, are not well defined. Let’s say your tablet is trying to decide how bright to make the screen. It should take the ambient light into account – and probably already does. But should it consider whether you’re inside or outside? Whether you’re in bed with someone asleep next to you who might wake up with a bright light? Whether you’re in an airplane where someone next to you might be snooping? Does this mean that we need attributes for ambient light level, inside/outside, possible light disturbance, and tight seating? Someone else might want to take into account whether you’re in a theater – which is a completely different attribute. I’m sure there are many other such attributes that could be included.
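A minimal sketch of how those rules pile up, with every attribute name and threshold invented for illustration, might look like this:

```python
# Invented attributes: there is no standard set, which is the point.
# Each rule consults a different, loosely defined piece of context.

def screen_brightness(ctx):
    """Map a grab-bag of context attributes to a brightness level (0-1)."""
    level = min(1.0, ctx.get("ambient_lux", 200) / 1000)  # baseline from light sensor
    if ctx.get("outdoors"):
        level = max(level, 0.8)   # fight sunlight
    if ctx.get("sleeper_nearby"):
        level = min(level, 0.1)   # don't wake anyone
    if ctx.get("tight_seating"):
        level = min(level, 0.4)   # discourage shoulder-surfing
    if ctx.get("in_theater"):     # someone else's favorite attribute
        level = min(level, 0.05)
    return level

print(screen_brightness({"ambient_lux": 50, "sleeper_nearby": True}))  # -> 0.05
```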

Companies that compete based on the quality of their context will obviously have to have the best algorithms. But, at least early on, they may consider widely differing sets of inputs. They may each have different values that can be ascribed to the different variables. With sensor fusion, you have a fixed number of sensors providing well-defined data, and the differentiation comes from the algorithms. With context, everything is up in the air. You can always think of some other variable to include.

Context is lots of (public?) data about you

The other new wrinkle that this adds to our phones and computers is the fact that they need to be constantly aware of what’s happening. Context isn’t simply about what’s happening at this snapshot in time; it may also require some accumulated history. Which means that it needs to be constantly gathering data so that some future decision can draw from that history.
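A minimal sketch of that kind of bounded history-keeping – with an arbitrary window size and a made-up observation format – could look like this:

```python
from collections import deque

# Invented sketch: retain a rolling window of observations so a later
# context decision can consult the recent past.

class ContextHistory:
    def __init__(self, max_samples=1000):
        self.samples = deque(maxlen=max_samples)  # old data falls off the back

    def record(self, timestamp, observation):
        self.samples.append((timestamp, observation))

    def recent(self, since):
        """Return observations newer than 'since' for some future decision."""
        return [obs for t, obs in self.samples if t >= since]

history = ContextHistory(max_samples=3)
for t in range(5):
    history.record(t, f"reading-{t}")
print(history.recent(since=2))  # -> ['reading-2', 'reading-3', 'reading-4']
```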

All that constant gathering raises a number of questions. One obvious one is power: sensors must constantly be on, just in case, as pointed out in the presentation. It might be possible to define a hierarchy of always-on sensors if we’re lucky enough that low-power sensors can do most of the watching, turning on other sensors when they think something interesting is happening.

But this is also where the context hub concept comes from, and it’s completely analogous to the sensor hub notion. The point is to provide a lower-power way of calculating context all the time, alerting or waking a higher-power processor only when there’s something of interest that it needs.
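A rough sketch of that pattern, with simulated sensor reads and an invented threshold standing in for any real hub, might look like this:

```python
import random
import time

# Invented sketch: a cheap always-on loop watches a low-power sensor and
# wakes the expensive application processor only on interesting events.

LOW_POWER_THRESHOLD = 1.5  # g's of acceleration worth a closer look

def read_accelerometer():
    return random.uniform(0.0, 2.0)  # stand-in for a real low-power sensor

def wake_main_processor(reading):
    print(f"waking app processor: motion event {reading:.2f} g")

def context_hub_loop(iterations=10):
    for _ in range(iterations):
        reading = read_accelerometer()    # the only work done most of the time
        if reading > LOW_POWER_THRESHOLD:
            wake_main_processor(reading)  # the rare, expensive path
        time.sleep(0.01)                  # duty-cycled polling

context_hub_loop()
```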

But this also means accumulating large quantities of data on the off chance that some future context decision might need it. Should that be stored locally? In the cloud? How long should it be kept? Should it be reduced for a smaller storage footprint based on the context algorithms known to be on the phone? How easy is that to change if a phone upgrade changes the context algorithms?

The storage decision also ties into another consideration: privacy. Who gets to see and control how this all works? If you save local storage by sending data to the cloud, then many current cloud models force you to share that data (or they make it really hard for you to turn off access). Will people be willing to pay for cloud storage that Google or Microsoft can’t rummage through? Can someone tap the communications? Does using the cloud mean that Uncle Sam will now get to watch what you’re doing?

Even if you keep the data local on your phone, does that make it private? In reality, it’s not your phone: it’s your carrier’s phone, and they’re nice enough to let you use it. But they get access to everything on that phone. As might others: ever install the Facebook app? Did you read what they get access to? Like, everything? All your contacts and calendar and everything else? (Yeah… I discontinued the installation at that point.)

Of course, you could well argue that giving Facebook access to all that information will allow them to make better decisions about what you want while using Facebook. It’s just far more likely that they’ll be using it to sell to others and to target advertising. Is there a tradeoff there? Less-random ads in exchange for… a better experience? (So far, all the focus is on the ads… the other part of the bargain, the part where we benefit, hasn’t materialized yet…)

Even the simple notion that you’re being watched all the time by your devices and sensors can be unsettling. Yeah, yeah, if we’re not doing anything wrong, then we have nothing to hide or worry about. If you really believe that, then you won’t mind me following your every step with a video camera constantly running. (I’d love to propose that people follow Homeland Security leaders with video cameras using the same logic they use to justify monitoring us…)

So… where am I going with all of this? I guess I can sum it up by saying that context sounds good – if:

  • It gets it right
  • It works for more than a few well-behaved problems
  • It’s general enough to accept updates
  • It doesn’t affect the power/performance of the device
  • I get some control over how it works
  • I get complete control over who gets to see the data

In reality, it’s going to take some convincing before I’m won over. I’ll be looking for some specific early examples to see how they work; if and when that happens, my goal is to follow up with that detail; we can then decide whether those criteria are being met.

In the meantime, I’m kind of okay with my computer and phone being relatively dumb. Never thought I’d say that…
