
Context Matters

But Determining Context is Hard

Computing engines have always been considered pretty dumb – even when they’re in our smartphones. The classic articulation of this is that they do exactly what we tell them to do; the problem is that this may not be exactly what we want them to do.

A simple phone example is the infuriating obedience with which a phone flips to landscape orientation the moment it goes sideways – even when we’re lying on our sides trying to read the thing in portrait mode. If the phone were able to deal with the fact that orientation isn’t really with respect to the ground, but with respect to our head (head orientation being part of the viewing context), then it could make more intelligent decisions. But, as it is, we say “flip when the phone is sideways,” and it obeys unthinkingly.
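
Just to make the dumb rule concrete, here’s a minimal sketch in Python – the numbers and function names are invented, not any real phone API – of what “flip when the phone is sideways” amounts to:

```python
# Hypothetical illustration: the naive rule a phone follows today.
# "Sideways" is judged purely from the accelerometer's gravity vector;
# the viewer's head orientation never enters the decision.

def screen_orientation(gravity_x: float, gravity_y: float) -> str:
    """Return 'landscape' or 'portrait' from the gravity vector alone."""
    # If gravity pulls mostly along the phone's x axis, the phone is "sideways."
    return "landscape" if abs(gravity_x) > abs(gravity_y) else "portrait"

# Lying on your side, holding the phone in portrait relative to your face:
# gravity still runs along the phone's x axis, so the rule flips the screen anyway.
print(screen_orientation(gravity_x=9.7, gravity_y=0.4))  # -> 'landscape'
```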

An older attempt to deal with context – and a popular poster child for getting context wrong – was (or is?) Microsoft’s Clippy. You know, the annoying little dude that watches your actions, decides he knows what you’re doing and what you want, gets it completely wrong, and then winks with a self-satisfied smile while you try to take a soldering gun to his smug little wiry butt.

But context is back in vogue; Sensor Platforms gave a presentation on it at the Sensors Expo pre-symposium put on by the MEMS Industry Group. And I have to say, I’ve had a devil of a time deciding how context is fundamentally different from other similar technologies – especially since Sensor Platforms is proposing including a context hub in low-power systems, so it’s not an academic discussion. (More on the hub thing in a minute.)

Context is more than sensors and data

At first blush, determining context can sound like an elaborate form of sensor fusion, if you define sensors very broadly. I mean, really, all you’re doing is bringing all kinds of observational data together. So, for instance, if a front-facing camera sees that you’re on your side, then the phone can use that information to decide not to flip the screen orientation. That’s just fusing data from another sensor to make a more sophisticated decision. Or is that context?
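
If you’re willing to stub out the camera part, a fused version might look something like this rough sketch – the head-roll estimate is assumed to come from some face detector I’m not showing, and all the thresholds are invented:

```python
import math
from typing import Optional

# Hypothetical fusion of two inputs: the accelerometer's gravity vector and an
# estimate of head roll from the front-facing camera. The point is that
# orientation gets decided relative to the viewer, not relative to the ground.

def fused_orientation(gravity_x: float, gravity_y: float,
                      head_roll_deg: Optional[float]) -> str:
    # Phone roll relative to gravity, in degrees (0 = upright portrait).
    phone_roll = math.degrees(math.atan2(gravity_x, gravity_y))
    if head_roll_deg is None:
        # No face in view: fall back to the naive gravity-only rule.
        return "landscape" if abs(phone_roll) > 45 else "portrait"
    # Compare the phone to the viewer's head instead of to the ground.
    relative_roll = phone_roll - head_roll_deg
    return "landscape" if abs(relative_roll) > 45 else "portrait"

# Lying on your side: phone and head are both rolled about 90 degrees, so the
# roll relative to the viewer is small and the screen stays in portrait.
print(fused_orientation(gravity_x=9.7, gravity_y=0.4, head_roll_deg=88.0))  # portrait
print(fused_orientation(gravity_x=9.7, gravity_y=0.4, head_roll_deg=None))  # landscape
```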

We’re also not restricted to data generated by the device we’re using; we can get data from other devices – like fitness-monitoring gizmos – or other data in the cloud. But this starts to sound like what is already being done with respect to location: merging data from sensors and maps and GPS in one monstrous algorithm.

That’s often referred to as “data fusion,” as distinct from sensor fusion, since much of the data doesn’t come from a sensor; instead it might come from the cloud or an embedded map. So sensor and data fusion are already out there; why isn’t that the same as context?
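As a toy version of that kind of data fusion – the venue list here standing in for a map or cloud lookup, with all names and coordinates made up – a noisy GPS fix might simply be snapped to the nearest known venue:

```python
import math

# Hypothetical data fusion: a noisy GPS fix (sensor data) is refined using a
# list of known venues (non-sensor data that might come from a map or the cloud).

VENUES = {
    "coffee shop": (37.7751, -122.4196),
    "movie theater": (37.7745, -122.4189),
    "office": (37.7760, -122.4210),
}

def snap_to_venue(lat: float, lon: float, max_m: float = 50.0):
    """Return the nearest known venue if it's within max_m meters, else None."""
    best_name, best_dist = None, float("inf")
    for name, (vlat, vlon) in VENUES.items():
        # Small-distance approximation: treat latitude/longitude as locally flat.
        d = math.hypot((lat - vlat) * 111_000,
                       (lon - vlon) * 111_000 * math.cos(math.radians(lat)))
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= max_m else None

print(snap_to_venue(37.7746, -122.4190))  # -> 'movie theater' (for this made-up fix)
```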

The difference, at least as I see it, is that context is about deriving meaning from a restricted set of data based on other data – and it drives action. Sensor and data fusion are about simply finding a better answer to a specific factual question like, “Where am I?” By combining data sources, a better answer can be found. Context takes that one step further, potentially using that location information as part of your context. For instance, if your phone knew that you were inside a movie theater, then it might not ring.

Data fusion can play into this as well: if your phone could also get access to the movie schedule, then it might still ring if it knew that the movie wasn’t playing yet, but suppress the ring during the movie.

Tying this to my context definition, the incoming call is the main event or “restricted data” of interest for this example, and the phone would normally salute and ring, period. By looking for “meaning,” it conditions that action on other inputs like your location and the movie schedule, answering the question, “Does the person want to know right now that there’s an incoming call?”
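
Spelled out as a sketch – with an invented showtime feed and a venue check standing in for the real location and cloud data – that decision might look like:

```python
from datetime import datetime, time

# Hypothetical context rule for an incoming call. The venue would come from
# some location/context layer; the schedule would come from the cloud. Both
# are stubbed here with made-up values.

MOVIE_SHOWTIMES = [(time(19, 30), time(21, 45))]  # invented schedule for the theater

def movie_in_progress(now: datetime) -> bool:
    return any(start <= now.time() <= end for start, end in MOVIE_SHOWTIMES)

def should_ring(venue: str, now: datetime) -> bool:
    """Answer the question: does the person want to know right now that there's a call?"""
    if venue == "movie theater" and movie_in_progress(now):
        return False          # suppress the ring during the show
    return True               # otherwise salute and ring, as usual

print(should_ring("movie theater", datetime(2013, 7, 1, 20, 15)))  # -> False
print(should_ring("movie theater", datetime(2013, 7, 1, 18, 0)))   # -> True
```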

Fundamentally, this is more about taking the right action than just answering a data question. It’s about getting away from having your computing engine do what you told it to do and, instead, having it do what you really want it to do. Sounds most excellent – except that figuring out what you really want is really tricky.

Context is messy

Big companies are spending lots of dollars on algorithms that sort through information to figure something out. Heck, just today Google reorganized my Gmail inbox, adding categories (ones that it came up with, of course, not something that I can control). And how it decided to do the sorting is completely beyond me; as far as I could tell, a monkey could have done as effective a job by tossing random emails into different boxes. Apparently Google is happy to look like they’re doing something useful when, in fact, they’re not. Clippy redux. (Good thing I rarely use Gmail… so I really don’t care.)

Other companies seem to be waiting until they actually get it right to release it. Which is to say, there’s not a lot of real context being done yet (to my knowledge). Part of that is simply because it is so hard. Our brains can take what we see and hear and feel and remember, and they instantly create a picture of what’s going on (although even this isn’t infallible). Computers can’t work quite that way; they need to have values given to a wide range of attributes in order to define context.

And it seems to me like that will make this a bit of a jungle for a while. The problem is that the number of attributes to consider for any given problem and the values they are allowed to take on are not well defined. Let’s say your tablet is trying to decide how bright to make the screen. It should take the ambient light into account – and probably already does. But should it consider whether you’re inside or outside? Whether you’re in bed with someone asleep next to you that might wake up with a bright light? Whether you’re in an airplane where someone next to you might be snooping? Does this mean that we need attributes for ambient light level, inside/outside, possible light disturbance, and tight seating? Someone else might want to take into account whether you’re in a theater – which is a completely different attribute. I’m sure there are many other such attributes that could be included.
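
Just to show how fast the attribute list sprawls, here’s one possible brightness decision – every attribute, threshold, and weight in it is invented, and someone else would surely pick a different set:

```python
from dataclasses import dataclass

# Hypothetical context attributes for a screen-brightness decision. The point
# isn't the particular rules; it's that the set of attributes is open-ended --
# another designer would add "in a theater," "battery level," and so on.

@dataclass
class BrightnessContext:
    ambient_lux: float          # measured light level
    outdoors: bool              # inside/outside
    sleeper_nearby: bool        # someone asleep next to you?
    tight_seating: bool         # airplane seat with a possible shoulder-surfer

def screen_brightness(ctx: BrightnessContext) -> float:
    """Return a brightness level between 0.0 and 1.0."""
    # Start from ambient light alone -- roughly what phones already do.
    level = min(1.0, ctx.ambient_lux / 10_000)
    if ctx.outdoors:
        level = max(level, 0.8)       # fight sunlight
    if ctx.sleeper_nearby:
        level = min(level, 0.1)       # don't wake anyone
    if ctx.tight_seating:
        level = min(level, 0.4)       # make snooping a little harder
    return level

print(screen_brightness(BrightnessContext(ambient_lux=50, outdoors=False,
                                          sleeper_nearby=True, tight_seating=False)))
```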

Companies that compete based on the quality of their context will obviously have to have the best algorithms. But, at least early on, they may consider widely differing sets of inputs. They may each have different values that can be ascribed to the different variables. With sensor fusion, you have a fixed number of sensors providing well-defined data, and the differentiation comes from the algorithms. With context, everything is up in the air. You can always think of some other variable to include.

Context is lots of (public?) data about you

The other new wrinkle that this adds to our phones and computers is the fact that they need to be constantly aware of what’s happening. Context isn’t simply about what’s happening at this snapshot in time; it may also require some accumulated history. Which means that it needs to be constantly gathering data so that some future decision can draw from that history.

That raises a number of questions. One obvious one is power: sensors must constantly be on, just in case, as pointed out in the presentation. It might be possible to define a hierarchy of always-on sensors if we’re lucky enough that low-power sensors can do most of the watching, turning on other sensors when they think something interesting is happening.

But this is also where the context hub concept comes from, and it’s completely analogous to the sensor hub notion. The point is to provide a lower-power way of calculating context all the time, alerting or waking a higher-power processor only when there’s something of interest that it needs.
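
A rough sketch of that tiering – invented sensor stand-ins and thresholds; a real hub would do this in dedicated low-power hardware, not Python – might look like:

```python
import random

# Hypothetical two-tier arrangement: a low-power tier watches a cheap sensor
# all the time and only wakes the application processor (or turns on the
# expensive sensors) when something looks interesting.

def read_accelerometer() -> float:
    """Stand-in for a cheap, always-on sensor: returns motion magnitude in g."""
    return random.uniform(0.0, 2.0)

def wake_main_processor(reason: str) -> None:
    """Stand-in for the expensive step: waking the app processor, GPS, camera, etc."""
    print(f"waking main processor: {reason}")

def low_power_watch(samples: int, motion_threshold: float = 1.2) -> None:
    # This loop is the part that would live on the context (or sensor) hub.
    for _ in range(samples):
        if read_accelerometer() > motion_threshold:
            wake_main_processor("significant motion detected")
            return
    # Nothing interesting happened; the main processor stayed asleep the whole time.

low_power_watch(samples=100)
```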

But this also means accumulating large quantities of data on the off chance that some future context decision might need it. Should that be stored locally? In the cloud? How long should it be kept? Should it be reduced for a smaller storage footprint based on the context algorithms known to be on the phone? How easy is that to change if a phone upgrade changes the context algorithms?
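
One way to picture the “reduce it for a smaller footprint” option is as a retention policy: keep recent samples raw, and collapse older ones into summaries. Here’s a sketch with entirely made-up numbers:

```python
from collections import deque
from statistics import mean

# Hypothetical retention scheme: recent history is kept raw; anything older is
# reduced to per-chunk averages. If a future phone upgrade brings context
# algorithms that needed the raw samples, that detail is already gone -- which
# is exactly the question raised above.

CHUNK = 100                                  # samples per summarized chunk
RAW_CHUNKS = 10                              # how many recent chunks stay raw

raw_history: deque = deque(maxlen=CHUNK * RAW_CHUNKS)   # rolling raw window
summaries: list[float] = []                  # reduced, long-term record
_pending: list[float] = []                   # samples waiting to be summarized

def record(sample: float) -> None:
    raw_history.append(sample)               # raw detail, kept only for a while
    _pending.append(sample)
    if len(_pending) == CHUNK:               # reduce each full chunk to one number
        summaries.append(mean(_pending))
        _pending.clear()

for i in range(5000):
    record(float(i % 50))                    # made-up sensor stream
print(len(raw_history), len(summaries))      # -> 1000 raw samples, 50 summaries
```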

The storage decision also ties into another consideration: privacy. Who gets to see and control how this all works? If you save local storage by sending data to the cloud, then many current cloud models force you to share that data (or they make it really hard for you to turn off access). Will people be willing to pay for cloud storage that Google or Microsoft can’t rummage through? Can someone tap the communications? Does using the cloud mean that Uncle Sam will now get to watch what you’re doing?

Even if you keep the data local on your phone, does that make it private? In reality, it’s not your phone: it’s your carrier’s phone, and they’re nice enough to let you use it. But they get access to everything on that phone. As might others: ever install the Facebook app? Did you read what they get access to? Like, everything? All your contacts and calendar and everything else? (Yeah… I discontinued the installation at that point.)

Of course, you could well argue that giving Facebook access to all that information will allow them to make better decisions about what you want while using Facebook. It’s just far more likely that they’ll be using it to sell to others and to target advertising. Is there a tradeoff there? Less-random ads in exchange for… a better experience? (So far, all the focus is on the ads… the other part of the bargain, the part where we benefit, hasn’t materialized yet…)

Even the simple notion that you’re being watched all the time by your devices and sensors can be unsettling. Yeah, yeah, if we’re not doing anything wrong, then we have nothing to hide or worry about. If you really believe that, then you won’t mind me following your every step with a video camera constantly running. (I’d love to propose that people follow Homeland Security leaders with video cameras using the same logic they use to justify monitoring us…)

So… where am I going with all of this? I guess I can sum it up by saying that context sounds good – if:

  • It gets it right
  • It works for more than a few well-behaved problems
  • It’s general enough to accept updates
  • It doesn’t affect the power/performance of the device
  • I get some control over how it works
  • I get complete control over who gets to see the data

In reality, it’s going to take some convincing before I buy in. I’ll be looking for some specific early examples to see how they work; if and when that happens, my goal is to follow up with that detail, and we can then decide whether those criteria are being met.

In the meantime, I’m kind of okay with my computer and phone being relatively dumb. Never thought I’d say that…
