posted by Bryon Moyer
Wearable electronics is the coming thing, and fitness-related gear is the most obvious thing to wear. And CES had a huge section dedicated to these semi-health devices. “Semi” because it’s this nice cozy niche where you can do things that affect your health with no required FDA approval.
But the scale of integration is pretty astounding. One example was a company called Valencell that has designed sensors that fit into an earbud. Actually, it’s more than just the sensor – there’s a lot of computing that goes on in that little thing that still has to be comfortable to wear (which was a challenge for them).
First, they have an IR emitter and detector that sense heart rate optically. Then they have an accelerometer to estimate pace, distance, and step rate. All of this information can be used to estimate oxygen consumption and calories burned.
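Valencell hasn't published its formulas, but the arithmetic from step data to distance and pace is straightforward. A minimal sketch, assuming a fixed average stride length (an illustrative value; the real system can be trained per user):

```python
def derive_metrics(step_count, step_rate_hz, stride_m=1.1):
    """Turn raw step data into distance and pace.

    stride_m is an assumed average stride length in meters --
    an illustrative default, not anything from Valencell.
    """
    distance_m = step_count * stride_m        # how far you went
    pace_m_per_s = step_rate_hz * stride_m    # current speed
    return distance_m, pace_m_per_s
```

A runner logging 1,000 steps at 3 steps per second with a 1 m stride would come out to 1,000 m covered at 3 m/s.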
The sensors interact with your phone, but they don’t rely on the phone to do all the work: there’s also a DSP in the headset that processes the sensor data. What the phone gets is the final result for display to the runner.
Some of the challenges – in addition to the simple issues of size and comfort – included:
- Resting heart rate is pretty straightforward to detect, but when running, there's so much motion noise that rejecting the extraneous artifacts is hard.
- Indoor and outdoor light profiles are very different; the system has to handle both. The sun in particular has lots of IR in its light spectrum, and that has to be rejected. They have to be able to handle running into and out of shadows.
- They can detect your stride to within +/- 10% if it's constant, or you can train it to get to within +/- 5%. They can also interpret transitions between walking and running.
Since this is a “running” app with an accelerometer, it might be tempting to think it's doing some kind of navigation, but it's not; it's interpreting the bumping around as you bounce with each step. It knows how many steps you took; it has no idea where those steps went.
They don’t make the end products themselves; they license the technology to audio/headset makers for integration into their systems, whether wired or wireless (e.g., Bluetooth). You can find more at their website.
posted by Jim Turley
I'm not a big fan of "cloud computing," as EEJ readers know. It seems like a step backwards, to 1970s-era timeshare machines instead of the fast, cheap, and ubiquitous devices we have today. Sure, it's great business for the cable and wireless companies. But is it a good deal for us?
Today I stumbled across a brief article by Cory Doctorow (boing-boing) that explains one of the fundamental mind-warps of the whole cloud-computing mass hallucination. The money line: "It's easy to see why telcos would love the idea that every play of 'your' media involves another billable event... It's that prized, elusive urinary-tract-infection business model at work, where media flows in painful, expensive drips instead of healthy, powerful gushes." Testify, brother!
posted by Bryon Moyer
We’re used to touch being about locating one or more fingers or items on a surface. This is inherently a 2D process. Although much more richness is being explored for the long term, one third dimension that seems closer at hand is pressure: how hard are we pushing down, and can we use that to, for instance, grab an object for dragging?
At the 2011 Touch Gesture Motion conference, one company that got a fair bit of attention was Flatfrog, which uses a light-based approach, with LEDs and sensors around the screen to triangulate positions. At the 2012 Touch Gesture Motion conference, when 2D seemed so 2011, pressure was a more frequent topic of conversation. But on the face of it, an optical technology like Flatfrog’s wouldn’t seem amenable to measuring pressure, since nothing in it senses force directly.
If you have a squishy object like a finger, though, you can use what I’ll call the squish factor to infer pressure. This is what Flatfrog does: when a finger (for example) touches down, they normalize the width of the item and then track how that width widens as the finger (or whatever) squishes against the screen. Which means this works only with materials that squish. Metal? Not so much.
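The squish-factor idea reduces to very little code: take the width at touchdown as the baseline, and report how much the contact patch has widened since. A minimal sketch (the grab threshold is my invention, not Flatfrog's):

```python
def squish_pressure(widths, grab_threshold=0.15):
    """Infer relative pressure from contact-patch widening.

    widths: successive measured widths of one touch, starting at
    touchdown.  The first sample is the normalization baseline, so
    the result is dimensionless 'squish' relative to initial size.
    grab_threshold is an illustrative value, not Flatfrog's.
    """
    w0 = widths[0]
    squish = [(w - w0) / w0 for w in widths]
    grabbing = [s >= grab_threshold for s in squish]  # e.g. press-to-grab
    return squish, grabbing
```

Normalizing against the touchdown width is what makes this work across different finger sizes: a child's fingertip and an adult thumb both start at squish 0 and are judged only by how much they spread.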
You might wonder how they can resolve such small movements using an array of LEDs that are millimeters apart. For a single LED and an array of sensors, the resolution might indeed be insufficient. But because they have so many LEDs, the combined measurements from all of them let them resolve structures much smaller than the emitter spacing.
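Flatfrog hasn't detailed the math, but the underlying statistical intuition is standard: the noise in an average of N independent measurements shrinks roughly as 1/sqrt(N), so many coarse LED/sensor readings combine into one much finer one. A quick simulation of that effect (illustrative, not their actual signal chain):

```python
import math
import random

def averaged_noise(n_sensors, sigma=1.0, trials=3000, seed=42):
    """Standard deviation of the mean of n_sensors independent noisy
    readings.  Statistically this shrinks as sigma / sqrt(n_sensors),
    which is the intuition for why many coarse measurements can
    together resolve changes far smaller than any single one."""
    rng = random.Random(seed)
    means = []
    for _ in range(trials):
        readings = [rng.gauss(0.0, sigma) for _ in range(n_sensors)]
        means.append(sum(readings) / n_sensors)
    mu = sum(means) / trials
    return math.sqrt(sum((m - mu) ** 2 for m in means) / trials)
```

With 100 sensors instead of 1, the residual noise drops by roughly a factor of 10, which is the kind of headroom that turns millimeter-spaced emitters into a sub-millimeter measurement.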
There is a cost to this, of course, in processing: it adds about 100 million instructions per second to the processing. “Ouch!” you say? Actually, it’s not that bad: their basic processing budget without pressure is about 2 billion instructions per second, so this is about a 5% adder.
More information at their website…