
Separating You from Your Phone

In high-school physics class, we did an experiment. It’s so crude by today’s standards that I feel like something of a fossil as I recall it, but here goes. We had a ticker-tape kind of thing that would make a mark on a paper tape as you pulled the tape through. It marked at a constant frequency, so if you pulled the tape faster, the dots were farther apart. So dot spacing became a measure of speed.

The experiment consisted of two parts. In the first, we held the tape and walked a distance, swinging our arms like normal. In the second, we walked the same distance at the same speed, but holding our arms still.

In the first case, the dots tell a tale of acceleration and deceleration, repeated over and over as our arms moved forward and then backward. The second case showed no such variation; speed was consistent. But the trick was, if you averaged the speeds on the first one, you ended up with the exact same speed as the second one*. Which is obvious with just a little thought: it’s the speed we were actually walking.

This was an early case of, well, not sensor fusion, but, how about if we call it “implied signal extraction.” In this case, there was only one sensor (the tape), which is why there’s no fusion. But in modern times, such extraction might involve fusion.

Here’s the deal: the tape was directly measuring the speed of our hands, when what we were really interested in was the speed of our moving bodies. By averaging the hand movements, we were able to extract the implied body movement signal out of a raw hand movement signal that contained lots of potentially misleading artifacts.
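If you want to see that cancellation in numbers, here’s a minimal sketch (the speeds and frequencies are made up for illustration, not from the actual lab): the tape samples hand speed at a constant rate, the arm swing rides on top of the walking speed as a roughly sinusoidal wiggle, and a plain average over whole swings recovers the walking speed.

```
import numpy as np

# Hypothetical numbers: a "tape" sampling hand speed at a constant rate
# while the arm swings back and forth once per second.
sample_rate_hz = 60
t = np.arange(0, 10, 1 / sample_rate_hz)        # 10 seconds of walking

body_speed = 1.4                                # true walking speed, m/s
arm_swing = 0.8 * np.sin(2 * np.pi * 1.0 * t)   # arm adds/subtracts speed each swing

hand_speed = body_speed + arm_swing             # what the tape actually measures

# Averaging over whole swings cancels the back-and-forth arm motion,
# leaving the implied body-movement signal.
print(round(np.mean(hand_speed), 3))            # 1.4 -- the walking speed
```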

This is happening in spades today in the navigation/orientation business. This will be obvious to the folks who have been trying to manage the problem for a while, but the rest of us may not realize how tough it is. We expect that, with our phones, we now have a way to navigate simply because our phone goes with us.

But put your phone in your hand. Now extend your arm forward: according to the phone, you just moved forward a foot or so. But you didn’t: your arm moved your phone forward; you didn’t go anywhere. Now put your phone in your back pocket, display to the outside. According to your phone, you just turned around. But you didn’t: you turned your phone around as you put it in your pocket. (Heck, the phone might even think you’re standing on your head if you put it in your pocket upside down.)

This drives at the art of orientation-to-trajectory management, a topic I discussed with Movea’s Tim Kelliher at Sensors Expo, and something Movea is working on. Unlike my high school scenario, where, if done right, we’re essentially averaging out a well-controlled sinusoidal movement, our phones go all over the place while we stand in one place. We pick it up, turn it around to orient it properly, switch hands, drop it, put it into one pocket or another, wave it randomly when we try to swat away that bee with our phone-holding hand.

Oh, and we can also do all of this while walking. Or running. Or dancing. Or running in random directions while we try to escape that bee, hands still aflail.

When you think about it, it’s got to be really hard to evaluate all of the sensor inputs on the phone and extract from that a signal that describes how the phone holder is moving. The more I think about it, the more I feel like I would have no idea how to start. Presumably some heuristics would be involved, but even then, it’s not obvious.

For instance, if the proximity sensor is firing, then you might assume that you’re probably on a call, and so conclude that the phone is stationary with respect to the body, up by your ear. That might be right 90% of the time, but then some goofball will, just for sh…ucks and grins, move the phone sultrily up and down along his or her body, keeping it close. The “on a call” heuristic would then decide that we’re walking up and down hills.
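Just to make the heuristic idea concrete, here’s a minimal sketch of that kind of rule (the function, thresholds, and labels are hypothetical, not anything Movea has described): proximity firing is read as “on a call, phone rides with the body,” which is exactly the assumption the goofball breaks.

```
# Hypothetical heuristic, for illustration only -- not anyone's actual algorithm.
def phone_motion_context(proximity_near: bool, accel_variance: float) -> str:
    """Crude guess at how the phone relates to its holder's body."""
    if proximity_near:
        # Probably on a call: assume the phone is pinned to the ear and
        # moves with the body.
        return "phone tracks body"
    if accel_variance < 0.05:
        # Barely any motion: phone is likely sitting on a table.
        return "phone stationary, holder unknown"
    # Otherwise we can't tell hand/arm motion from body motion.
    return "ambiguous"

# The sultry-goofball case: proximity stays 'near' while the phone slides up
# and down the body, so the rule wrongly reports that the body is moving.
print(phone_motion_context(proximity_near=True, accel_variance=0.9))
```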

So when solutions to this problem are finally announced, I can imagine the aforementioned goofball types trying all kinds of things to see if they can fool the system. Typical silliness, but it also provides clues about how the algorithm works.

For the rest of us, well, let’s not take it for granted. This is a hard problem, and any effective solution will have been hard won.

 

*It actually didn’t work for me; my teacher declared, in frustration, that I needed to learn to walk a consistent speed. Not sure if I’ve mastered that yet; it’s not high on my bucket list…

