
EDA for Sensor Fusion

Movea’s SmartFusion Studio

When you’re building something that’s never been built before, you’ve got a lot more work in store for you than you would if you merely satisfied yourself with what’s already been done. There are lots of unknowns, and you’ve got to explore and solve them before you can proceed.

If you’re doing an everyday human-sized project in your workshop to build something using unusual materials, you have to figure out what adhesives or other fasteners will work, whether any of the materials might interact (aluminum touching iron?), whether paints will cover adequately… You might have to try different variants or different formulations until you get something that performs well.

The good news here is that you’re working at a human scale. You can get bits and pieces of materials and paints and glues and whatnot, and you can easily experiment to see how things work.

At the micro- and nano-scale, if you’re designing a new integrated circuit, you may also have to experiment. Of course, here, you can’t work with your hands to do it. Instead, you have a rich set of EDA tools and libraries and models that can help you through the experimentation.

But what if you’re trying to assemble motion algorithms based on the data being generated by sensors? Each sensor is a bit different; different algorithms and filters have different effects and effectiveness, and the underlying math can be daunting.

Yes, you could argue that navigating this maze is part of the price of greatness in the MEMS world. Then again, anyone who thought that manually cutting rubylith was one of the rites of righteousness in the chip world would be feeling pretty unrighteous by now.

The truth is that, as sensors proliferate, designers are going to want to skip over the boring bits and get right to a design that works. Or, perhaps it’s better to flip this idea around: in order for sensors to proliferate, designers have to be able to use them without understanding what’s going on under the hood, just the way you drive a modern car that you probably can’t maintain yourself because it’s too complex (and if you can, you’re just being stubborn).

That’s the direction Movea is trying to head with their evolving SmartFusion Studio. They unveiled the tool earlier this year, but it’s still a work in progress. It’s been an ongoing project for many years, since they’ve been using it themselves for their own algorithm development, and now they’re moving towards making it available – with customer-friendly features – to outsiders.

The idea is to be able to configure an arrangement of sensors, calibrate them, and then apply algorithms to see how a fused output responds. It can work with Movea’s atoms and molecules; they plan to have a drag-and-drop capability for those and other elements in the future.
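
To make "fused output" a bit more concrete: the simplest textbook fusion blends a gyroscope's rate (smooth in the short term, but drifting) with an accelerometer's tilt (absolute, but noisy) into a single pitch estimate. The sketch below is a generic complementary filter in Python, not one of Movea's atoms or molecules; the axis convention and blend factor are assumptions for illustration.

```python
import math

def fuse_pitch(samples, dt, alpha=0.98):
    """Minimal complementary filter: blend integrated gyro rate with
    accelerometer tilt to estimate pitch (radians). A textbook sketch of
    fusing two imperfect sensors, not Movea's algorithm."""
    pitch = 0.0
    estimates = []
    for gyro_rate, (ax, ay, az) in samples:
        # Gyro: smooth short-term, but drifts over time.
        gyro_pitch = pitch + gyro_rate * dt
        # Accelerometer: noisy, but gives an absolute tilt reference.
        accel_pitch = math.atan2(-ax, math.sqrt(ay * ay + az * az))
        # Blend the two; alpha trades drift rejection against noise.
        pitch = alpha * gyro_pitch + (1.0 - alpha) * accel_pitch
        estimates.append(pitch)
    return estimates
```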

I got to see a demo with the drag-and-drop capability in place. They were working with a Nexus phone communicating over Wi-Fi with the tool on a host PC. They captured the motion in a file that could be replayed in the tool to evaluate how algorithms affected that motion. In a really simple example, the raw motion – or even lack of motion – was highly jittery. A Butterworth filter (one of a large array of filters available, making me wonder how you would know which ones to try) was dragged onto the pitch signal; the design was recompiled (critically, it took only a second or two); and now the motion was nice and smooth.
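
For a sense of what that drag-and-drop step does under the hood, here's a rough equivalent: low-pass filtering a jittery pitch trace with a Butterworth filter, using SciPy as a stand-in for the Studio's own filter library. The sample rate, cutoff, and order are illustrative guesses, not values from the demo.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def smooth_pitch(pitch, fs, cutoff_hz=5.0, order=2):
    """Low-pass a jittery pitch trace with a Butterworth filter.
    fs is the sample rate in Hz; cutoff and order are the knobs you'd
    tune by experiment - exactly the kind of trial-and-error the
    Studio's quick recompile is meant to make cheap."""
    b, a = butter(order, cutoff_hz / (fs / 2.0), btype="low")
    return filtfilt(b, a, pitch)  # zero-phase filtering suits offline replay

# Example: a 100 Hz capture of a phone held "still" but jittering slightly
fs = 100.0
t = np.arange(0, 5, 1 / fs)
raw = 0.1 * np.sin(2 * np.pi * 0.2 * t) + 0.02 * np.random.randn(t.size)
smooth = smooth_pitch(raw, fs)
```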

The idea is pretty straightforward: sort of like a MATLAB for sensor fusion. Lots of options to experiment with, and, with a quick compile, it's not expensive to do so. There is a broad collection of basic viewers, and Movea can help to develop custom "slides," which are essentially views that correspond to a specific application. But it has its own set of complicating factors to deal with.

Calibration, for example. For combo sensors, the manufacturer will typically provide the calibration information. For other sensors, you might have to calibrate them yourself – especially early in the life of the sensor. The tool takes you through a calibration procedure so that you can fill in the necessary coefficients. Later in the maturity cycle, a default set of calibration values will have been agreed on, so at that stage, you might be willing to live with those.
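
For a triaxial sensor, those coefficients typically amount to a per-axis bias plus a scale (or full 3x3 cross-axis) matrix. The sketch below assumes that common form; it is not the Studio's actual calibration format, and the coefficient values are made up.

```python
import numpy as np

class SensorCalibration:
    """Generic bias + scale/cross-axis calibration for a triaxial sensor.
    The coefficient layout is a common convention, not necessarily the
    format SmartFusion Studio uses internally."""

    def __init__(self, bias, scale_matrix):
        self.bias = np.asarray(bias, dtype=float)           # 3-vector offset
        self.scale = np.asarray(scale_matrix, dtype=float)  # 3x3 gain/cross-axis

    def apply(self, raw_xyz):
        """Map a raw reading to calibrated units: corrected = S @ (raw - b)."""
        return self.scale @ (np.asarray(raw_xyz, dtype=float) - self.bias)

# Hypothetical coefficients that a guided calibration run might produce
cal = SensorCalibration(bias=[0.02, -0.01, 0.05],
                        scale_matrix=np.diag([1.01, 0.99, 1.02]))
g = cal.apply([0.03, 0.00, 1.03])  # raw accelerometer sample, in g
```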

What platform resources you can access depends on how a platform is configured and what your development role is. Platform developers, who need to see low-level sensor details, will install the tool at the root level to bypass the protections and abstractions that the OS provides (or imposes, depending on your view). Apps developers, on the other hand, will be coding for a fixed, finished platform, meaning they have to work above the OS. For these guys, it’s impossible to access the low-level stuff unless the platform maker has provided some end-around to get to low-level data through a custom API.

The design flow itself will depend on who is doing the work. Platform developers will be capturing low-level sensor outputs in a file and replaying them in the tool, compiling for integration into the embedded application. App developers, on the other hand, will be developing at the OS API level, relying on the services provided by the embedded infrastructure and resulting in a standalone app.
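
The capture-and-replay half of that flow is conceptually simple, as the minimal sketch below suggests. The CSV layout and function names are purely my assumptions for illustration; the Studio defines its own capture format.

```python
import csv

def record(samples, path):
    """Write timestamped raw samples to a CSV file.
    Columns here are an illustrative assumption, not the Studio's format."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["t", "ax", "ay", "az", "gx", "gy", "gz"])
        for t, accel, gyro in samples:
            writer.writerow([t, *accel, *gyro])

def replay(path, process):
    """Feed captured samples back through a processing chain, so the same
    motion can be re-evaluated every time an algorithm is tweaked."""
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            accel = (float(row["ax"]), float(row["ay"]), float(row["az"]))
            gyro = (float(row["gx"]), float(row["gy"]), float(row["gz"]))
            process(float(row["t"]), accel, gyro)
```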

For platform developers, the tool can be configured to run the software on the application processor or a sensor hub. Configurations for a variety of processors and sensor hubs are available.

There's also the question of which fusion algorithm to use. Some sensors come pre-equipped with hard-coded algorithms; others don't. And if your sensor has, for instance, a built-in tap function, then why would you re-implement that up at the software level?

Well, if you're going to be in a phone or tablet, you're going to have to be willing to share the stage: those guys demand at least two vendors for any component. Problem is, different vendors do things differently, so one vendor's tap function won't behave like another's. That means using an agnostic algorithm at a higher level to abstract away the sensor differences.
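
One way to picture that higher-level abstraction: a common tap interface that either wraps a vendor's hardware tap engine or falls back to a software detector over raw accelerometer data. The class and method names below are hypothetical, sketched only to show the shape of the idea.

```python
from abc import ABC, abstractmethod

class TapSource(ABC):
    """Vendor-agnostic tap interface (hypothetical names, for illustration)."""
    @abstractmethod
    def poll(self):
        """Return True if a tap occurred since the last poll."""

class HardwareTap(TapSource):
    """Wraps a part whose driver exposes a built-in tap flag."""
    def __init__(self, driver):
        self.driver = driver
    def poll(self):
        return self.driver.read_tap_flag()  # vendor-specific call hidden here

class SoftwareTap(TapSource):
    """Fallback: naive threshold detector over raw accelerometer magnitude,
    for parts with no hardware tap engine."""
    def __init__(self, read_accel, threshold_g=2.5):
        self.read_accel = read_accel
        self.threshold = threshold_g
    def poll(self):
        ax, ay, az = self.read_accel()
        return (ax * ax + ay * ay + az * az) ** 0.5 > self.threshold

# The application only ever sees TapSource, so swapping sensor vendors
# doesn't ripple up into the higher-level motion code.
```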

Exactly how this product will be bundled and sold hasn’t been completely decided yet. As we’ve seen, there are two different audiences they’ll be targeting. The first will be the phone manufacturer. They’ll be developing the low-level drivers that the OS will rely on. These guys need to dive deeper, and, by definition, they have flexibility in the platform because they can make changes if necessary.

Later, Movea will target the app developer community, which has no choice when it comes to how a platform is configured. Once developers have targeted something like Android, they have to live within the strictures it imposes.

Movea will also be making more demo apps available – at the block level. The lower-level code that the tool generates or works with isn’t open.

So that's a broad-brush look at the SmartFusion Studio concept as it stands today. We should be seeing more on this later this year as specific product configurations are announced.

 

 

More info:

Movea SmartFusion Studio

