
Fusing the Little Details

It’s always struck me that there are two critical elements to sensor fusion. There’s the part that can be resolved with math – for instance, compensating a magnetometer reading to account for tilt as measured by an accelerometer – and then there’s the heuristic part. The latter deals with, for example, deciding that your gyro reading makes no sense and deferring to the compass instead for a heading. And while the math in the first part is more or less universal for all players, the heuristics provide more of an opportunity for differentiation.
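
To make the math half concrete, here’s a minimal sketch of that tilt-compensation example in Python. The axis convention (x forward, y right, z down) and the assumption that the accelerometer sees only gravity are mine, not anything from a particular vendor; real code would also calibrate both sensors first.

```python
import math

def tilt_compensated_heading(ax, ay, az, mx, my, mz):
    # Roll and pitch from the gravity vector (device assumed at rest;
    # right-handed axes, x forward / y right / z down -- an assumption)
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))

    # Rotate the magnetic field vector back into the horizontal plane
    mx_h = (mx * math.cos(pitch)
            + my * math.sin(roll) * math.sin(pitch)
            + mz * math.cos(roll) * math.sin(pitch))
    my_h = my * math.cos(roll) - mz * math.sin(roll)

    # The angle of the horizontal field gives the magnetic heading
    return math.atan2(-my_h, mx_h)
```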

In a conversation at the recent MEMS Executive Congress, Movea’s Bryan Hoadley noted that there’s actually more to it than that. First of all, I should note that they’re touting the phrase “data fusion” rather than just “simple” sensor fusion. That’s partly because they’re trying to raise the level of abstraction far above simple low-level fusion (as indicated by their periodic table and the fact that they’re doing analysis on running gaits and tennis serves), but also because, in many cases, the data being fused doesn’t come from a sensor at all.

The classic example of that would be a navigation algorithm that uses not only IMU data, but also GPS or even speedometer data. (OK, I guess a speedometer is a sensor, albeit a pedestrian one… or… wait, no, a pedometer would be pedestrian… GPS? That’s less obvious.) Add map data and now you’re unquestionably fusing more than sensor data. You’re fusing data, some of which comes from sensors.
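
As a toy illustration of that kind of fusion at its very simplest, here’s a complementary-filter sketch that dead-reckons heading from a gyro and nudges it toward the GPS course when a fix is available. The blend weight, the validity convention, and the cavalier treatment of angle wraparound are all simplifying assumptions of mine; a real navigation filter would typically be a full Kalman filter.

```python
def fuse_heading(heading_deg, gyro_rate_dps, dt_s, gps_course_deg=None, alpha=0.98):
    # Dead-reckon with the gyro between GPS fixes
    predicted = heading_deg + gyro_rate_dps * dt_s
    if gps_course_deg is None:
        return predicted
    # Blend toward the GPS course; alpha is a tuning knob, and real code
    # would also handle the 359-to-0 degree wraparound properly
    return alpha * predicted + (1.0 - alpha) * gps_course_deg
```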

There’s one other element that comes along with this, according to Mr. Hoadley. It may sound trivial or inconsequential, but it matters, and it’s kind of like taking a look back into the kitchen of your favorite gourmet restaurant: it’s way less glamorous than the dining room. In addition to the math and the heuristics are the logistics of managing all the data and the data formats correctly and efficiently.

(Reminds me of the college programming project where I took the core assignment and simply added some I/O to it that wasn’t required. A couple of ill-conceived all-nighters later, my code was 10% algorithmic stuff that mattered and 90% crap for getting data in and out. That was worth, like, 3% in bonus credit. My first lesson in ROI.)

The point being, there’s more to the cooking than creating pretty stacks of elegant food (which will topple when the first fork hits it); there’s lots of boring, mundane food prep.
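
In software terms, that prep is mostly normalization: getting every source onto one time base, one set of units, and one axis convention before any fusion math runs. Here’s a minimal sketch of what that can look like; the record layout and helper names are hypothetical, purely for illustration.

```python
import math
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Sample:
    t_us: int                   # one shared time base, in microseconds
    source: str                 # "gyro", "mag", "gps", ...
    values: Tuple[float, ...]   # SI units, in an agreed axis order

def from_gyro_raw(raw_xyz, t_us, dps_per_lsb):
    # Vendor register counts -> degrees/sec -> rad/s (hypothetical helper)
    return Sample(t_us, "gyro",
                  tuple(v * dps_per_lsb * math.pi / 180.0 for v in raw_xyz))
```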

I’ve actually asked before whether these data formats could be simplified by some sort of standard or unification; it seems to be one area where there isn’t yet enough pain to make anyone act. Either that, or the early movers have already solved the problem themselves, and the chaos now acts as a barrier to entry for everyone else.
