
Locus of (Con)Fusion

At the MEPTEC MEMS conference a couple of weeks ago, one question kept coming up over and over: Who’s in charge of sensor fusion?

On the one hand, IMU makers in general are giving away sensor fusion packages that help integrate the data from the individual sensors in their combo units. On the other hand, there are guys like Movea that don’t make sensors themselves, but integrate across a wide variety of sensors for both high- and low-level motion features (motion in their case, but the concept extends to anything).

So whose job is it?

I happened to have a conversation with Movea’s Dave Rothenberg that same day, and I brought the topic up.

His first comment was that what most IMU makers refer to as sensor fusion is simply the software required to establish orientation, which is a relatively low-level characteristic. He said that this corresponds to Movea’s Foundation series, which they’ve actually de-emphasized a bit since it is hard to sell against free software, even if they do think they do a better job.
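For a sense of what that low-level, orientation-only fusion looks like, here is a minimal sketch: a simple complementary filter that blends an integrated gyro rate with an accelerometer tilt angle. The function name, parameters, and scale assumptions are illustrative, not any vendor’s actual package.

import math

def fuse_pitch(pitch_prev_deg, gyro_rate_dps, accel_x_g, accel_z_g, dt_s, alpha=0.98):
    # Propagate the previous estimate by integrating the gyro rate over the sample period.
    pitch_gyro = pitch_prev_deg + gyro_rate_dps * dt_s
    # Derive an absolute but noisy tilt angle from the gravity vector seen by the accelerometer.
    pitch_accel = math.degrees(math.atan2(accel_x_g, accel_z_g))
    # Blend the two: trust the gyro for fast motion, and let the accelerometer
    # pull the estimate back so gyro drift doesn't accumulate.
    return alpha * pitch_gyro + (1.0 - alpha) * pitch_accel

Real packages go well beyond this (three axes, magnetometers, calibration), but the principle is the same: take individually flawed signals and combine them into one trustworthy orientation estimate.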

The sensor guys say they’re the right place to do it because they know their sensors better than anyone else. That actually covers two separate things: the physical characteristics of the sensors and how they operate, and the low-level data details (formats and the like). Dave mentioned that it is real work for them to adapt their software to different sensors, since they don’t all look or speak alike. (Area for possible future standardization? Future topic…) But they have to get it right in order for the other pieces layered over it to work properly: errors at the bottom level will compound as further algorithms manipulate them.
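To make that adaptation work concrete, here is a hypothetical sketch: two imagined parts that report acceleration with different byte orders, scale factors, and axis conventions get normalized into one common sample format before any fusion runs. The vendor details are invented for illustration.

from dataclasses import dataclass

@dataclass
class AccelSample:
    # Vendor-neutral sample in g, using one agreed axis convention.
    x: float
    y: float
    z: float

def decode_vendor_a(raw: bytes) -> AccelSample:
    # Hypothetical part A: 16-bit little-endian counts at 16384 LSB/g.
    x, y, z = (int.from_bytes(raw[i:i + 2], "little", signed=True) / 16384.0
               for i in (0, 2, 4))
    return AccelSample(x, y, z)

def decode_vendor_b(raw: bytes) -> AccelSample:
    # Hypothetical part B: big-endian counts at 4096 LSB/g, with X and Y swapped.
    y, x, z = (int.from_bytes(raw[i:i + 2], "big", signed=True) / 4096.0
               for i in (0, 2, 4))
    return AccelSample(x, y, z)

Get a scale factor or an axis sign wrong at this layer and every algorithm stacked above it inherits the error, which is exactly the compounding Dave was warning about.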

(This also ties into the question of loose vs tight coupling, since a sensor maker is in a better position to do things tightly.)

Of course, it’s unlikely that the sensor vendors will want to take on the higher-level algorithms, since those will, almost by definition, eventually involve sensors that they don’t make. So it looks like things may go the way of the embedded world, where critical low-level drivers and other bits of firmware are provided by (or in close partnership with) the processor maker, with other companies layering higher-value stuff on top. That seems to be how the sensor world is shaping up, which leaves room both for the sensor guys and for the third-party folks.

