Jul 03, 2013

Sensing the Turn

posted by Bryon Moyer

This is yet another note regarding the innumerable sensors on display at the recent Sensors Expo. But rather than jumping straight in, let’s explore a problem: one akin to “shaft encoding.”

Those of you controlling precision motors and such know far better than I do about monitoring the rotating shaft of a motor. By tracking marks on the shaft, the electronics can keep track of the shaft's position (not to mention speed and other related parameters).

But can we apply that to, say, a steering wheel to remove the mechanical linkages? After all, a steering wheel is the same thing, only your hands are the motor. So, in theory, it should work. But there’s a catch: When the power goes off, the system loses its mind. So when you power back on, the system doesn’t know where the steering wheel was left last time it was touched.

You could suggest that a piece of the electronics remain powered (if the power draw is low enough) to keep track even when the car is off. But if you change the battery, or if Jr. Samples decides to apply the skills he learned on that burned-out ol' '52 Chevy pickup in the back 40 and disconnects the battery, then the system loses its mind again. So when the car starts up, it's like it's waking from a bad dream, not knowing where it is.

So simple shaft encoding won’t work; we need something that persists with no power. That would suggest a magnet. For instance, you could place a magnet on the shaft and then detect which direction the magnet is facing. Or put a magnet around the shaft and put the sensor on the shaft. But that only works for applications that use at most one turn. That’s certainly not the case for your grandfather’s Oldsmobile, where a simple lane change required 20 turns of the wheel.
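To make that single-turn case concrete, here's a minimal Python sketch of my own (not anything from a sensor vendor): two orthogonal Hall-effect readings of the magnet's field are enough to recover an absolute angle at power-up, but only within one turn.

```python
import math

def shaft_angle_degrees(b_x: float, b_y: float) -> float:
    """Recover the magnet's (and hence the shaft's) absolute angle from two
    orthogonal Hall-effect field components.

    No history is needed: the answer is valid immediately at power-up, but it
    only resolves angles within a single turn (0-360 degrees).
    """
    angle = math.degrees(math.atan2(b_y, b_x))
    return angle % 360.0

# Example: a field pointing along -y reads as 270 degrees.
print(shaft_angle_degrees(0.0, -1.0))  # -> 270.0
```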

So you can add translation to the rotation: put a thread on the shaft and have either the magnet or the sensor ride on a carrier that moves along the thread. So as you execute multiple turns of the wheel, the carrier slides up and down the shaft (rather than rotating with the shaft). We’ve now translated the multiple rotations into a linear distance, and the strength of the sensed magnetic field can tell us how far we’ve traveled. And it will work when the car starts.
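As a rough sketch of the arithmetic (with made-up numbers; the real mechanics and sensing will differ), the carrier's linear travel maps back to total rotation like this:

```python
# Hypothetical number for illustration only: a thread pitch of 2 mm means the
# carrier advances 2 mm along the shaft per full turn of the steering wheel.
THREAD_PITCH_MM = 2.0

def total_rotation_degrees(carrier_travel_mm: float) -> float:
    """Convert the carrier's linear displacement (inferred from the sensed
    magnetic field) back into the total, multi-turn rotation of the shaft."""
    turns = carrier_travel_mm / THREAD_PITCH_MM
    return turns * 360.0

# A carrier that has moved 7 mm from its reference point implies 3.5 turns,
# i.e., 1260 degrees of accumulated steering-wheel rotation. That answer
# survives a power cycle, because the carrier physically stays put.
print(total_rotation_degrees(7.0))  # -> 1260.0
```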

This is the approach that AMS has taken on the AS5410 “absolute position” sensor they had on display. Specifically, they put the magnet on the carrier and use a 3D Hall-effect sensor in a fixed position. The thing that apparently makes this a first is that the sensor can reject stray fields using differential techniques. This can actually mean using several magnetic sensors, so it’s a bit more complicated than my simplistic description… but then again, most things are.
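The differential idea is roughly this (a toy sketch, not AMS's actual algorithm): a distant stray field looks nearly identical at two closely spaced sense points, while the nearby magnet's field differs sharply between them, so subtracting the two readings keeps the signal and cancels the interference.

```python
def differential_field(sensor_a: float, sensor_b: float) -> float:
    """Subtract readings from two closely spaced Hall elements.

    A uniform stray field (from wiring, nearby motors, the Earth) adds the
    same offset to both elements and cancels out. The nearby magnet, whose
    field falls off quickly with distance, contributes very differently to
    each element, so its signal survives the subtraction.
    """
    return sensor_a - sensor_b

# Toy numbers: the magnet contributes 10.0 at element A and 4.0 at element B;
# a stray field adds 3.0 to both.
print(differential_field(10.0 + 3.0, 4.0 + 3.0))  # -> 6.0, stray field gone
```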

You can find more info here.

Jul 02, 2013

IP Block Verification

posted by Bryon Moyer

If you design SoCs, then you use IP. Lots of it, probably. From different companies, some perhaps even from your own company.

And the good news is, it’s all perfectly documented – pins, registers, timing, everything. Right? So you know that just fitting it all together will give you a correct-by-construction design. Right?

Yeah… and then you wake up.

In fact, the RTL implementation may deviate from the spec, or there may be holes in the spec, or the black-box RTL may have invisible surprises. It’s enough to make you run back to the comfort of your pillow.

Jasper and Duolog, at the urging of ARM, have come together to try to solve some of this. The first key ingredient is a machine-friendly way of describing an IP block. And that would be IP-XACT. IP-XACT doesn’t describe the IP implementation; it’s simply (if “simple” can be used here) a specification of the metadata and the interface. Like a software function or object prototype. (To be clear, Jasper and Duolog didn’t create IP-XACT; it’s been around for a while, and they simply make use of it.)
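To give a flavor of what "machine-friendly" buys you, here's a hedged Python sketch that pulls a port list out of a stripped-down, IP-XACT-flavored XML fragment. This is heavily simplified; real IP-XACT (IEEE 1685) uses versioned namespaces and a much richer schema.

```python
import xml.etree.ElementTree as ET

# A drastically simplified, IP-XACT-flavored interface description.
# The component name, ports, and attributes here are invented for illustration.
SPEC_XML = """
<component name="uart_lite">
  <port name="clk"   direction="in"  width="1"/>
  <port name="rst_n" direction="in"  width="1"/>
  <port name="tx"    direction="out" width="1"/>
  <port name="rx"    direction="in"  width="1"/>
  <port name="wdata" direction="in"  width="32"/>
</component>
"""

def spec_ports(xml_text: str) -> dict:
    """Return {port_name: (direction, width_in_bits)} from the spec fragment."""
    root = ET.fromstring(xml_text)
    return {p.get("name"): (p.get("direction"), int(p.get("width")))
            for p in root.findall("port")}

print(spec_ports(SPEC_XML))
```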

Given spec’ed and implemented versions of an IP block, Duolog and Jasper can then confirm whether specs match RTL or black-box matches white-box. That’s the first of two tools that will be available.
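Conceptually, the spec-versus-RTL check boils down to comparing the interface the spec promises with the interface the implementation actually exposes. Here's a naive Python sketch of that comparison (nothing like the real Jasper/Duolog flow, which goes well beyond port lists into formal checks of behavior):

```python
def compare_interfaces(spec: dict, rtl: dict) -> list:
    """Report mismatches between a spec port list and an RTL port list.

    Both arguments map port name -> (direction, width_in_bits). Returns
    human-readable discrepancy messages; an empty list means they agree.
    """
    issues = []
    for name, attrs in spec.items():
        if name not in rtl:
            issues.append(f"'{name}' is in the spec but missing from the RTL")
        elif rtl[name] != attrs:
            issues.append(f"'{name}' differs: spec={attrs}, rtl={rtl[name]}")
    for name in rtl:
        if name not in spec:
            issues.append(f"'{name}' is in the RTL but not in the spec")
    return issues

# Hypothetical port lists: one width mismatch, one undocumented RTL port.
spec = {"clk": ("in", 1), "tx": ("out", 1), "wdata": ("in", 32)}
rtl  = {"clk": ("in", 1), "tx": ("out", 1), "wdata": ("in", 16),
        "scan_en": ("in", 1)}
for issue in compare_interfaces(spec, rtl):
    print(issue)
```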

The second will help assemble the IP blocks into a design and then verify that everything is connected properly. “How hard can that be?” you ask. Well, given that some connections may come and go over time or under various conditions (for instance, via multiplexing), and given that some IP can have hundreds (or more) of connections, it can actually get pretty complicated. The tools purport to handle these scenarios, including such timing details as latency.
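To give a feel for why this is harder than it sounds, here's a small, hypothetical Python sketch: each intended connection carries an optional condition (say, a mux select value), and the checker verifies that the assembled design provides the connection whenever that condition holds. The real tools work on actual designs and also check timing details like latency; this is just the shape of the problem.

```python
# Each intended connection: (source, destination, condition), where condition
# is None (always connected) or a (signal, value) pair describing, e.g., a mux
# select setting under which the path must exist. All names are invented.
INTENT = [
    ("cpu.axi_m", "interconnect.s0", None),
    ("interconnect.m0", "ddr_ctrl.axi_s", None),
    ("dma.axi_m", "interconnect.s1", ("dma_en", 1)),
]

# Connections actually present in the assembled design, keyed by the same
# condition values (None = unconditional).
ACTUAL = {
    None: {("cpu.axi_m", "interconnect.s0"),
           ("interconnect.m0", "ddr_ctrl.axi_s")},
    ("dma_en", 1): {("dma.axi_m", "interconnect.s1")},
    ("dma_en", 0): set(),
}

def check_connectivity(intent, actual):
    """Flag intended connections that are absent under their stated condition."""
    problems = []
    for src, dst, cond in intent:
        present = actual.get(cond, set()) | actual.get(None, set())
        if (src, dst) not in present:
            problems.append(f"{src} -> {dst} missing (condition: {cond})")
    return problems

print(check_connectivity(INTENT, ACTUAL) or "all intended connections present")
```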

This all got rolled out at DAC, so it’s available today. You can find out more in their release.

Jul 01, 2013

Software Development is Failing

posted by Dick Selwood

I wanted to share this, which I found on the System Safety Mailing list: https://lists.techfak.uni-bielefeld.de/mailman/listinfo/systemsafety

Martyn Thomas has enormous experience in implementing computer systems and was the founder of Praxis, a company (now part of the Altran group) that became “internationally recognised as a leader in the use of rigorous software engineering, including mathematically formal methods.” He wrote in a thread about software:

I recall a lecture given by Dijkstra in 1973. A member of the audience asked "do your methods work on real world problems?" Dijkstra paused, and then said quietly "real world problems. Ah yes, those that remain when you have failed to apply all the known solutions".
Over the years, I have heard many excuses for failures to use professional engineering methods.
"if we train the programmers, they'll leave for a better paid job"
"we can't hire programmers who are willing to use that programming language"
"universities don't teach (maths, project management, quality control, planning, team working ... ...)"
"the customer insists that we use this (buggy) middleware for compatibility"
"modern software isn't written - it's assembled from lots of (buggy) COTS"
"if we try to include that in the standard, industry will revolt."
"if we were to ask for that evidence, industry would charge us a fortune"
... and many many more.
Most software developers appear to have lost sight of the problem. Every week, I hear someone use the verb "test" when what they mean is "gain assurance that  ... is fit for purpose"; this reveals a dangerous, implicit assumption that "test-and-fix" is the only practical way to develop software. Most software is still written in languages without good data structures and strong type-checking. Most software requirements (and even interface specifications) are written in English (or another natural language) - perhaps with some diagrams that lack any rigorous semantics. Most projects have grossly inadequate change control. I rarely see a risk register that is worth anything (except as a demonstration that the project manager isn't managing the project).
Is there another trade that (a) builds complex, novel and critical systems using poorly-qualified staff, (b) almost exclusively uses tools that have major known defects, (c) builds systems from components of unknown provenance that cannot be shown to be fit for purpose and (d) nevertheless claims to be professional engineers?
Surely it is self-evident that the current state of our profession is unsustainable. Let's stop making excuses and look for ways to accelerate the changes that we know are needed.

Martyn, it seems to me, is putting forward a very accurate view of the whole software world. Even in the safety-critical arena there is still too little concern for these issues. How do we go about resolving this? Or is it too late to push the genie back into the bottle?
