
Sensor Chemistry

Chemistry – at least at the high-school level – can be fun stuff. You’ve got these fundamental entities called atoms that can come together in many ways to build molecules, which constitute the stuff of life (and non-life). At this simplistic level, there’s nothing smaller than an atom, and atomic behaviors, as depicted in the periodic table, determine which combinations work well and which work, well, not at all.

Movea has picked up on this theme to organize the elements of their sensor fusion offering (not to be confused with nuclear fusion – but then again, I guess that’s physics, not chemistry). They seem to be putting a lot of effort into this approach, not just for the sake of a nifty mnemonic, but also in terms of how the products actually work.

We looked at Movea’s MoveTV product last year; it was focused on TV remote controls and involved a limited number of gestures. What they’re talking about now goes beyond TV, addressing motion and gestures in general. You may recall from that prior discussion that Movea doesn’t make sensors at all – in fact, they don’t make any hardware (although they’re veering close with their MotionCore offering). They provide algorithms intended to transform the low-level sensor output – whether from a single sensor or, more likely, a combination of sensors – into something more meaningful.

They’ve identified small “nuggets” of processing that can be combined and rearranged in order to enable different kinds of processing for different purposes. They’ve then arranged these in their own version of a “periodic table.” Each column represents some fundamental concept: the first column is about angles; another one is about frequency. The farther down the column you go, the more advanced the processing becomes.
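
To make the atoms-and-molecules idea concrete, here’s a minimal sketch – in Python, and emphatically not Movea’s actual API – of how small processing blocks might compose: each “atom” is a function that transforms a stream of samples, and a “molecule” is just a chain of them.

```python
# A minimal sketch, not Movea's actual API: each "atom" transforms a
# stream of samples; a "molecule" chains atoms in sequence.
from typing import Callable, List

Block = Callable[[List[float]], List[float]]

def lowpass(alpha: float) -> Block:
    """Atom: simple exponential low-pass filter."""
    def run(samples: List[float]) -> List[float]:
        out, y = [], samples[0]
        for x in samples:
            y = alpha * x + (1.0 - alpha) * y
            out.append(y)
        return out
    return run

def magnitude_threshold(level: float) -> Block:
    """Atom: zero out samples below a threshold."""
    return lambda samples: [x if abs(x) >= level else 0.0 for x in samples]

def molecule(*blocks: Block) -> Block:
    """Compose atoms left-to-right into one bigger block."""
    def run(samples: List[float]) -> List[float]:
        for block in blocks:
            samples = block(samples)
        return samples
    return run

# A two-atom "molecule": smooth the raw data, then gate small values.
pipeline = molecule(lowpass(0.3), magnitude_threshold(0.1))
print(pipeline([0.05, 0.9, 1.1, 0.02, 0.8]))
```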


[Figure: Movea’s “periodic table” of sensor-fusion elements]

For example, the “frequency” column has three rows. The first two aren’t particularly descriptive: “basic” and “advanced” frequency. But the third one is a block referred to as “cadence” – in other words, presumably this has enough built-in processing to identify walking or other repetitive patterns. The “angles” column, meanwhile, goes from 1D angles through several editions of 3D angles, depending on which sensors the algorithm has at its disposal.
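
For a flavor of what a “cadence” block might be doing under the hood, here’s one generic approach (not necessarily Movea’s): find the dominant spectral peak of the accelerometer magnitude within a plausible walking band and report it as steps per minute.

```python
# A hedged illustration of a generic cadence estimator; the band
# limits and the synthetic test signal are invented for the example.
import numpy as np

def cadence_spm(accel_mag: np.ndarray, sample_rate_hz: float) -> float:
    """Estimate cadence (steps/minute) from the dominant spectral peak."""
    centered = accel_mag - accel_mag.mean()          # remove gravity/DC
    spectrum = np.abs(np.fft.rfft(centered))
    freqs = np.fft.rfftfreq(len(centered), d=1.0 / sample_rate_hz)
    band = (freqs >= 0.5) & (freqs <= 4.0)           # plausible gait band
    peak_hz = freqs[band][np.argmax(spectrum[band])]
    return peak_hz * 60.0

# Synthetic 2 Hz "walk" sampled at 50 Hz: expect roughly 120 steps/min.
t = np.arange(0, 10, 1 / 50)
signal = 9.81 + 0.5 * np.sin(2 * np.pi * 2.0 * t)
print(round(cadence_spm(signal, 50.0)))  # ~120
```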

They also follow the periodic table when it comes to “reactivity.” Towards the left you find items that tend to combine well with many other modules in many different ways (like “angles”). As you move towards the right, the modules become increasingly specialized and “stand-alone” – even noble or inert. The most advanced stand-alone module in the chart is “3D Hand” – which got the nod over “3D Foot” – presumably because hands are so much more expressive than feet, as any discussion with a New York cabbie will show.

This is all cute from a marketing point of view, but with creative ideas like this, you have to be careful not to get so caught up that your little trope ends up driving the strategy – unless the concept is so well wedded to the needs of the product that it becomes a useful sanity check. So far, they seem to have kept the thing rolling, with pictures of assembled application-specific molecules to show for it.

But what does this mean at a practical level? In discussing their MotionCore IP blocks, they divide their libraries into three levels: “Foundation,” “Premium,” and “Advanced.” (Which names immediately set some pricing expectations.)

At the foundation level, they identify one obvious basic capability: attitude – which way am I oriented? In addition, they have “utilities,” which provide various filters and conversions, and “calibration.”

Calibration is an unfortunate must-have. It starts with understanding which sensors are being used and how they express their measurements. As you combine sensors, you have to reconcile the different ways the sensors report data, pushing any conversions down to this very low level.
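
Here’s a hedged sketch of that lowest-level unit rationalization: raw counts get converted to common SI units using per-part scale factors. The scale factors below are hypothetical placeholders, not real datasheet values.

```python
# A minimal sketch of the "reconcile the units" step; the scale
# factors are invented, not from any real part's datasheet.
import math

G = 9.80665  # m/s^2 per g

def counts_to_si(raw: int, lsb_per_unit: float, unit_to_si: float) -> float:
    """Convert a raw ADC count to an SI value."""
    return (raw / lsb_per_unit) * unit_to_si

# Hypothetical accelerometer: 16384 LSB per g -> m/s^2
accel = counts_to_si(8192, lsb_per_unit=16384.0, unit_to_si=G)
# Hypothetical gyroscope: 131 LSB per deg/s -> rad/s
gyro = counts_to_si(262, lsb_per_unit=131.0, unit_to_si=math.pi / 180.0)
print(accel, gyro)  # ~4.9 m/s^2, ~0.035 rad/s
```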

Then you need to account for any errors in placement. For example, if you place three sensors, one for each axis, on the same board or even within the same package, you’re relying on the assembly machinery to place them at exactly 90° from each other (at least for two out of the three dimensions). This can’t happen perfectly, of course, so you need to calibrate to compensate for the misalignment.
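
One common way to do this – a generic technique, not necessarily how Movea implements it – is to bake the factory calibration into a 3×3 correction matrix plus a bias vector and apply it to every raw reading. The numbers below are invented for illustration.

```python
# Misalignment compensation sketch: corrected = M @ (raw - b).
# All values here are made up for the example.
import numpy as np

M = np.array([[ 1.000, 0.012, -0.004],   # small off-diagonal terms
              [ 0.009, 0.998,  0.006],   # model the axes not sitting
              [-0.003, 0.011,  1.002]])  # at exactly 90 degrees
b = np.array([0.02, -0.01, 0.05])        # per-axis offsets

def correct(raw: np.ndarray) -> np.ndarray:
    """Map a raw 3-axis reading onto ideal orthogonal axes."""
    return M @ (raw - b)

print(correct(np.array([0.0, 0.0, 9.81])))
```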

Finally, some sensors – gyroscopes in particular – experience drift as they operate, so they need to be periodically recalibrated even when in use. There are various “golden references” that can be used, depending on the application (moving the sensor in a figure-8 pattern is one).
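
As a generic sketch of what in-use recalibration might look like for a gyroscope (again, not Movea’s algorithm): whenever the device appears stationary, fold the current reading into a running bias estimate and subtract it from subsequent samples. The thresholds are illustrative, not from any datasheet.

```python
# Running gyro-bias tracker: a hedged sketch with invented thresholds.
import numpy as np

class GyroBiasTracker:
    def __init__(self, still_thresh: float = 0.02, alpha: float = 0.05):
        self.bias = np.zeros(3)
        self.still_thresh = still_thresh  # rad/s: "not moving" limit
        self.alpha = alpha                # bias-update smoothing factor

    def update(self, gyro: np.ndarray) -> np.ndarray:
        if np.linalg.norm(gyro - self.bias) < self.still_thresh:
            # Looks stationary: slowly pull the bias toward the reading.
            self.bias = (1 - self.alpha) * self.bias + self.alpha * gyro
        return gyro - self.bias  # bias-corrected rate

tracker = GyroBiasTracker()
print(tracker.update(np.array([0.011, -0.004, 0.008])))  # near-still sample
```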

So within the three foundation libraries, they have two blocks for calibration (off-line and real-time calibration); four for attitude (attitude from accelerometer/gyroscope, attitude from accelerometer/magnetometer, attitude from all three, and linear acceleration from all three); and four utility modules.
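
For a flavor of what the accelerometer/magnetometer attitude block computes, here’s the textbook tilt-compensated-compass formulation – a standard technique offered only as an illustration, and one whose signs depend on the sensor’s axis conventions (an x-forward, y-right, z-down frame is assumed here).

```python
# Generic attitude-from-accel+mag sketch (roll/pitch from gravity,
# tilt-compensated yaw from the magnetic field); not Movea's code.
import math

def attitude_from_accel_mag(ax, ay, az, mx, my, mz):
    """Return (roll, pitch, yaw) in radians; x-forward, y-right, z-down."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    # Rotate the magnetic vector back into the horizontal plane.
    sr, cr = math.sin(roll), math.cos(roll)
    sp, cp = math.sin(pitch), math.cos(pitch)
    mx_h = mx * cp + my * sp * sr + mz * sp * cr
    my_h = my * cr - mz * sr
    yaw = math.atan2(-my_h, mx_h)
    return roll, pitch, yaw

# Device flat and level, magnetic field pointing north and down:
print(attitude_from_accel_mag(0, 0, 9.81, 0.2, 0.0, -0.4))
```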

Navigation builds upon this basic organization, starting with a “step cadence meter” and moving up to a “cadence-to-speed conversion” and thence to “heading” and “dead reckoning.” Vertical position comes from a “pressure-to-dZ” module (not yet available), assuming a pressure sensor is available alongside the three inertial sensors.
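
Here’s a hedged sketch of that chain: each detected step advances an (x, y) position by a stride length along the current heading, and a pressure-to-dZ helper converts a barometric pressure change into an altitude change using the standard-atmosphere formula. The constants are textbook values, not Movea’s.

```python
# Pedestrian dead-reckoning sketch; stride length and pressures are
# invented for the example.
import math

def step(pos, heading_rad, stride_m=0.7):
    """Advance an (x, y) position one stride along the heading."""
    x, y = pos
    return (x + stride_m * math.cos(heading_rad),
            y + stride_m * math.sin(heading_rad))

def pressure_to_dz(p_pa, p_ref_pa):
    """Altitude change (m) from pressure, standard-atmosphere model."""
    alt = lambda p: 44330.0 * (1.0 - (p / 101325.0) ** (1.0 / 5.255))
    return alt(p_pa) - alt(p_ref_pa)

pos = (0.0, 0.0)
for heading in [0.0, 0.0, math.pi / 2]:   # two steps east, one north
    pos = step(pos, heading)
print(pos, pressure_to_dz(100600.0, 100700.0))  # ~8.3 m per -100 Pa
```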

This gets you as far as the sensors themselves will take you, but there are other sources of data that you may be able to take advantage of: you may have access to the GPS signal, or you may be within cell tower range, or you may be near a WiFi hotspot. All of those can give some indication of location. In addition, maps can act as a constraint – particularly useful for dead reckoning, to make sure that your position is consistent with reality (or, at least, the reality as portrayed by the map… which, we all know, has its own limitations).
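
As a toy illustration of this higher-level fusion, one simple scheme is to weight each source’s position fix by its accuracy (inverse variance). Real systems use far richer models, and the accuracies below are invented.

```python
# Inverse-variance fusion of position fixes; a generic sketch only.
def fuse_fixes(fixes):
    """fixes: list of ((x, y), sigma_m). Return the weighted position."""
    wsum = xsum = ysum = 0.0
    for (x, y), sigma in fixes:
        w = 1.0 / (sigma * sigma)   # tighter fixes count for more
        wsum += w
        xsum += w * x
        ysum += w * y
    return (xsum / wsum, ysum / wsum)

print(fuse_fixes([((10.0, 5.0), 20.0),    # GPS: good accuracy
                  ((40.0, 0.0), 300.0),   # cell tower: coarse
                  ((12.0, 6.0), 30.0)]))  # WiFi: moderate
```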

This higher level of data fusion may go beyond what the Movea libraries provide, although they used it to check the accuracy of their sensor-only algorithms, achieving less than 3% error on distance. They do have a “Pedestrian navigation w/ map” module, so it does appear that they can integrate map information with their algorithms.

These are examples of the low-level modules – the atoms – that come together as applications – or molecules – in ways that depend specifically on the application. They’ve continued the chemistry theme into the design tool realm, where their “MoveaLab” is described as a “chemistry kit for designing the signal processing flows from sensor data to features.” With 49 blocks shown in their periodic table, you can build lots of combinations, although the lower-row items build on the upper-row functions, rendering some combos nonsensical. Still, they illustrate one “molecule” with 26 atoms, many of which are basic (gyro calibration, for example, or a Butterworth filter), and a few of which are higher level (like “orientation integration”).

The offering is pretty broad and – critically – is sensor-agnostic. The business challenge will be how much of this will be available for free from sensor manufacturers. Obviously, a company that makes one or two sensors is likely to provide only limited fusion software. Makers of a larger variety of sensors will have more latitude to give away more sophisticated software in order to sell more sensors. So Movea will be well motivated to keep pushing the level of integration higher, since the more they pull together in their algorithms, the less likely it is that anyone else will give it away for free.

Naturally, free sounds good, but Movea is surely counting on the fact that their chemistry lab approach will not only provide modules that aren’t available elsewhere, but will also make it easy to pull this stuff together, making the development savings worth the price of the reagents.

 

More info: Movea: The Chemistry of Motion

 

