Jul 01, 2013

Software Development is Failing

posted by Dick Selwood

I wanted to share this, which I found on the System Safety Mailing list: https://lists.techfak.uni-bielefeld.de/mailman/listinfo/systemsafety

Martyn Thomas has enormous experience in implementing computer systems and was the founder of Praxis, a company (now part of the Altran group) that became “internationally recognised as a leader in the use of rigorous software engineering, including mathematically formal methods.” He wrote in a thread about software:

I recall a lecture given by Dijkstra in 1973. A member of the audience asked "do your methods work on real world problems?" Dijkstra paused, and then said quietly "real world problems. Ah yes, those that remain when you have failed to apply all the known solutions".
Over the years, I have heard many excuses for failures to use professional engineering methods.
"if we train the programmers, they'll leave for a better paid job"
"we can't hire programmers who are willing to use that programming language"
"universities don't teach (maths, project management, quality control, planning, team working ... ...)"
"the customer insists that we use this (buggy) middleware for compatibility"
"modern software isn't written - it's assembled from lots of (buggy) COTS"
"if we try to include that in the standard, industry will revolt."
"if we were to ask for that evidence, industry would charge us a fortune"
... and many many more.
Most software developers appear to have lost sight of the problem. Every week, I hear someone use the verb "test" when what they mean is "gain assurance that  ... is fit for purpose"; this reveals a dangerous, implicit assumption that "test-and-fix" is the only practical way to develop software. Most software is still written in languages without good data structures and strong type-checking. Most software requirements (and even interface specifications) are written in English (or another natural language) - perhaps with some diagrams that lack any rigorous semantics. Most projects have grossly inadequate change control. I rarely see a risk register that is worth anything (except as a demonstration that the project manager isn't managing the project).
Is there another trade that (a) builds complex, novel and critical systems using poorly-qualified staff, (b) almost exclusively uses tools that have major known defects, (c) builds systems from components of unknown provenance that cannot be shown to be fit for purpose and (d) nevertheless claims to be professional engineers?
Surely it is self-evident that the current state of our profession is unsustainable. Let's stop making excuses and look for ways to accelerate the changes that we know are needed.

Martyn, it seems to me, is putting forward a very accurate view of the whole software field. Even in the safety-critical arena there is still too little concern for these issues. How do we go about resolving this? Or is it too late to push the genie back into the bottle?

Jun 27, 2013

Separating You from Your Phone

posted by Bryon Moyer

In high-school physics class, we did an experiment. It’s so crude by today’s standards that I feel like something of a fossil as I recall it, but here goes. We had a ticker-tape kind of thing that would make a mark on a paper tape as you pulled the tape through. It marked at a constant frequency, so if you pulled the tape faster, the dots were farther apart. So dot spacing became a measure of speed.

The experiment consisted of two parts. In the first, we held the tape and walked a distance, swinging our arms like normal. In the second, we walked the same distance at the same speed, but holding our arms still.

In the first case, the dots tell a tale of acceleration and deceleration, repeated over and over as our arms moved forward and then backward. The second case showed no such variation; speed was consistent. But the trick was, if you averaged the speeds on the first one, you ended up with the exact same speed as the second one*. Which is obvious with just a little thought: it’s the speed we were actually walking.

This was an early case of, well, not sensor fusion, but, how about if we call it “implied signal extraction.” In this case, there was only one sensor (the tape), which is why there’s no fusion. But in modern times, such extraction might involve fusion.

Here’s the deal: the tape was directly measuring the speed of our hands, when what we were really interested in was the speed of our moving bodies. By averaging the hand movements, we were able to extract the implied body movement signal out of a raw hand movement signal that contained lots of potentially misleading artifacts.
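To make that concrete, here’s a minimal sketch in Python. All the numbers are invented for illustration (they’re not from the experiment), but the idea is the same: a hand-speed signal with a sinusoidal arm-swing component averages out to the underlying walking speed.

```python
# Toy model of the ticker-tape experiment: averaging the "hand speed"
# samples recovers the implied body speed. Numbers are made up.
import math

BODY_SPEED = 1.4          # true walking speed, m/s (assumed)
SWING_AMPLITUDE = 0.8     # extra hand speed from arm swing, m/s (assumed)
SWING_PERIOD = 1.0        # one arm-swing cycle per second (assumed)
SAMPLE_RATE = 50          # "dots" per second on the tape (assumed)
DURATION = 10             # seconds of walking

# The tape measures hand speed: body speed plus a roughly sinusoidal
# arm-swing component that speeds the hand up and slows it down.
samples = [
    BODY_SPEED
    + SWING_AMPLITUDE * math.sin(2 * math.pi * t / (SWING_PERIOD * SAMPLE_RATE))
    for t in range(DURATION * SAMPLE_RATE)
]

# Averaging over whole swing cycles cancels the arm-swing artifact and
# leaves the implied body-movement signal.
estimated_body_speed = sum(samples) / len(samples)
print(f"estimated body speed: {estimated_body_speed:.3f} m/s")  # ~1.400
```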

This is happening in spades today in the navigation/orientation business. It will be obvious to the folks who have been trying to manage the problem for a while, but the rest of us may not realize how tough it is. We expect that, with our phones, we now have a way to navigate simply because our phone goes with us.

But put your phone in your hand. Now extend your arm forward: according to the phone, you just moved forward a foot or so. But you didn’t: your arm moved your phone forward; you didn’t go anywhere. Now put your phone in your back pocket, display to the outside. According to your phone, you just turned around. But you didn’t: you turned your phone around as you put it in your pocket. (Heck, the phone might even think you’re standing on your head if you put it in your pocket upside down.)
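A toy illustration of that point (again, invented numbers): if you double-integrate the phone’s acceleration during an arm extension, the phone really does appear to move forward a foot or so, even though your body hasn’t moved at all.

```python
# Hypothetical accelerometer trace for extending an arm over 0.5 s while
# standing still. The profile and magnitudes are assumptions for illustration.
import math

dt = 0.01   # 100 Hz sampling (assumed)
n = 50      # 0.5 s of arm extension
# Smooth push-then-brake acceleration of the hand, in m/s^2:
accel = [7.5 * math.sin(2 * math.pi * k * dt / 0.5) for k in range(n)]

v = x = 0.0
for a in accel:
    v += a * dt   # integrate acceleration to velocity
    x += v * dt   # integrate velocity to displacement

print(f"phone moved ~{x:.2f} m forward")  # about 0.3 m with these numbers
print("body moved 0.00 m")                # but you didn't go anywhere
```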

This drives at the art of orientation-to-trajectory management, a topic I discussed with Movea’s Tim Kelliher at Sensors Expo, and something Movea is working on. Unlike my high school scenario, where, if done right, we’re essentially averaging out a well-controlled sinusoidal movement, our phone goes all over the place while we stand in one place. We pick it up, turn it around to orient it properly, switch hands, drop it, put it into one pocket or another, wave it randomly when we try to swat away that bee with our phone-holding hand.

Oh, and we can also do all of this while walking. Or running. Or dancing. Or running in random directions while we try to escape that bee, hands still aflail.

When you think about it, it’s got to be really hard to evaluate all of the sensor inputs on the phone and extract from that a signal that describes how the phone holder is moving. The more I think about it, the more I feel like I would have no idea how to start. Presumably some heuristics would be involved, but even then, it’s not obvious.

For instance, if the proximity sensor is firing, then you might assume that you’re probably on a call, and so conclude that the phone is stationary with respect to the body, up by your ear. That might be right 90% of the time, but then some goofball will, just for sh…ucks and grins, move the phone sultrily up and down along his or her body, keeping it close. The “on a call” heuristic would then decide that we’re walking up and down hills.
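If you were sketching such a heuristic yourself, it might look something like the following. To be clear, this is purely hypothetical: the sensor inputs, thresholds, and context labels are mine for illustration, not Movea’s (or anyone’s) actual algorithm.

```python
# Purely hypothetical context classifier: guess how the phone is being
# carried so that its motion can be interpreted. All names and thresholds
# are invented assumptions, not a real product API.

def classify_phone_context(proximity_near: bool,
                           accel_variance: float,
                           screen_on: bool) -> str:
    """Guess the phone's relationship to the body from a few sensor cues."""
    if proximity_near and not screen_on:
        # Probably held to the ear on a call -- or, per the goofball case,
        # merely held close to the body. The heuristic can't tell the difference.
        return "at_ear_assume_rigid_with_body"
    if accel_variance > 2.0:
        # Large, irregular accelerations: swinging in a hand, bee-swatting, etc.
        return "hand_motion_discount_for_trajectory"
    return "pocket_or_table_use_step_detection"

# The trajectory engine would then weight or re-orient the sensor stream
# differently depending on the guessed context.
print(classify_phone_context(proximity_near=True, accel_variance=0.3, screen_on=False))
```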

So when solutions to this problem are finally announced, I can imagine the aforementioned goofball types trying all kinds of things to see if they can fool the system. Typical silliness, but it also provides clues about how the algorithm works.

For the rest of us, well, let’s not take it for granted. This is a hard problem, and any effective solution will have been hard won.

 

*It actually didn’t work for me; my teacher declared, in frustration, that I needed to learn to walk at a consistent speed. Not sure if I’ve mastered that yet; it’s not high on my bucket list…

Jun 26, 2013

Simpler CDC Exception Handling

posted by Bryon Moyer

In static timing analysis, it’s a concept that goes back years: you get a bunch of violations, and then you have to decide which ones represent false paths or multi-cycle paths and create “exceptions” for them. Tedious.

Well, apparently formal analysis can have the same issue. Only here they’re referred to as “waivers,” according to Real Intent. If you run analysis and get a long list of potential violations, you have to go through the list and, one by one, check them for “false positives” and mark them as such. Time-consuming and error-prone. And tedious. Especially when working on large-scale SoCs (so-called “giga-scale”).

In their latest release of Meridian CDC, which does clock-domain crossing verification, Real Intent has provided a different way of handling this: more granular control over the run parameters, in the form of rules or constraints that can be successively refined.

Using the old method, if a particular over-reaching aspect of the analysis caused 100 false positives, you’d have to find all 100 and “waive” them. With the new approach, when you find the first one, you make the refinement, and then, with a rerun of the analysis, the one you found and the other 99 all disappear. OK, not disappear per se, but they’re grouped together as expected findings rather than open violations. You can also review that list to make sure nothing snuck through. (This is a simplification of a more sophisticated overall process, but it captures the essence.)
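Just to illustrate the bookkeeping difference, here’s a toy model in Python. This is not Real Intent’s actual flow or syntax; the violation kinds and the “refinement” are invented placeholders. The old way writes one waiver per finding; the new way expresses one refinement that groups the whole class on the next run.

```python
# Toy model contrasting per-violation waivers with a single refined rule.
# All names and categories are hypothetical, not Meridian CDC syntax.

violations = [
    {"id": i, "signal": f"cfg_reg[{i}]", "kind": "quasi_static_crossing"}
    for i in range(100)
] + [{"id": 100, "signal": "fifo_wr_ptr", "kind": "unsynchronized_crossing"}]

# Old way: inspect each finding and waive it individually.
waivers = {v["id"] for v in violations if v["kind"] == "quasi_static_crossing"}
print(len(waivers), "individual waivers written")        # 100

# New way: one refinement declares the whole class expected; the rerun then
# reports it as a grouped, reviewable bucket rather than 100 open violations.
refinement = {"treat_as_expected": "quasi_static_crossing"}
open_violations = [v for v in violations
                   if v["kind"] != refinement["treat_as_expected"]]
grouped = [v for v in violations
           if v["kind"] == refinement["treat_as_expected"]]
print(len(open_violations), "still open;", len(grouped), "grouped for review")
```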

This may take some iterations, but in the end, you can have a clean run with no exceptions, and the way you got there is less likely to have involved a mistake here or there.

You can find out more about Real Intent’s latest Meridian CDC release in their announcement.
