
Disziplin Muß Sein*

A Look at Recent Software Development Process Tool Announcements

Software development processes can vary dramatically. If you program only occasionally as a hobby, like me, then you dream up what you want to do and immediately start coding. Working units then randomly materialize and just as quickly disappear like quantum fluctuations. Moving into the more professional arena, if there is a process, it can vary from something light, agile, and extreme, where code is generated quickly and converges towards requirements using Brownian successive approximation, all the way to heavyweight rational processes with which you risk spending your entire career just reading the manual (understanding it would take even longer; lifting the entire document is out of the question).

The idea behind any process is that you can generate better software more quickly if you employ some form of discipline during development. And discipline is generally not considered fun when you’re coding cool new stuff. So the cost of being disciplined has to be balanced by a tangible benefit. Just knowing that “it’s a better way to do things” usually won’t suffice. There has to be a cost to slovenly coding.

So, not surprisingly, process is used much more often in embedded systems, where something going wrong cannot be tolerated. We’re talking airplanes, munitions, medical equipment, cars. Granted, the processes tend to come not from developers deciding, “we need to do things this way,” but from systems manufacturers insisting, “you need to do things this way.” What’s good is that, over the years, tools have evolved in a direction that makes being a good software citizen somewhat easier.

There seem to be two ways in which process tools become useful. Some tools make adherence to rules easier by automating the brainless aspects that are just a pain in the butt (brain and butt being more or less mutually exclusive for most of us) and that otherwise make a good excuse for jettisoning the process outright. Others keep track of how well a process is being followed. Yes, this is supposed to be a good thing. Like the kid in a bowtie who always reminds the teacher that he or she forgot to assign the homework. Like alerting your boss to how well you’ve followed the rules.

Coincident with this fall’s Embedded Systems Conference in Boston, a number of companies participating in this discipline business made announcements (although it bears noting that none of them wore leather or spiked collars). Some focused on portions of the process; some looked at the process as a whole. The “whole process” could be summarized as starting with use cases, moving from those into requirements, from which both code and test cases can be generated, after which the code is tested according to the test cases. Obviously an approximation; your mileage may vary.

What do you mean you don’t want to just take my word for it?

Among the releases by LDRA was one reflecting their increasing focus on tests that gauge whether software as implemented meets the “contract” implied by the original requirements. IBM Telelogic made an announcement in this area as well. Such tests provide a feedback loop back to requirements, making it possible to assess the “requirements coverage” of an implementation.
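
As a rough illustration of the idea (a generic sketch, not any vendor’s actual tool), “requirements coverage” boils down to tracing each requirement to the tests that exercise it and asking whether each requirement has at least one passing test. Everything here, requirement IDs included, is hypothetical:

```cpp
#include <iostream>
#include <map>
#include <string>
#include <vector>

int main() {
    // Hypothetical traceability data: requirement ID -> pass/fail results
    // of the tests linked to that requirement.
    std::map<std::string, std::vector<bool>> trace = {
        {"REQ-001", {true, true}},  // covered: at least one passing test
        {"REQ-002", {false}},       // a linked test exists, but it fails
        {"REQ-003", {}},            // no linked test at all: uncovered
    };

    int covered = 0;
    for (const auto& [req, results] : trace) {
        bool any_pass = false;
        for (bool r : results) any_pass = any_pass || r;
        if (any_pass) ++covered;
        else std::cout << req << " is not demonstrably met\n";
    }
    std::cout << "Requirements coverage: " << covered
              << " of " << trace.size() << "\n";
}
```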

LDRA works from requirements to derive test cases using their TBrun program, which has just been released in a standalone version. This is an example of the time-saving nature of such a tool: the program can automatically extract critical information from a software unit and build a structure that the developer then fills in for testing; it also manages the integration of units and the execution of tests through an automatically created harness.
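
To give a feel for what gets automated (this is a generic sketch, not TBrun’s actual output), a harness generator reads the unit’s interface and emits a skeleton along these lines, where the developer supplies only the inputs and expected outputs:

```cpp
#include <cstdio>

// Hypothetical unit under test, standing in for one pulled from the project.
int scale_sensor_reading(int raw, int gain) {
    return raw * gain;
}

// Sketch of an auto-generated harness: the generator has extracted the
// unit's signature and built the test-case structure; the developer only
// fills in the table of cases.
struct TestCase {
    int raw;       // input
    int gain;      // input
    int expected;  // developer-supplied expected result
};

static const TestCase g_cases[] = {
    {   0, 1,   0 },
    { 100, 2, 200 },
};

int main() {
    int failures = 0;
    for (const TestCase& tc : g_cases) {
        int actual = scale_sensor_reading(tc.raw, tc.gain);
        if (actual != tc.expected) {
            std::printf("FAIL: scale_sensor_reading(%d, %d) = %d, expected %d\n",
                        tc.raw, tc.gain, actual, tc.expected);
            ++failures;
        }
    }
    std::printf("%d test(s) run, %d failure(s)\n",
                (int)(sizeof(g_cases) / sizeof(g_cases[0])), failures);
    return failures;
}
```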

IBM Telelogic’s story comes from a slightly different direction, as it reflects the… um… rationalization of IBM’s acquisitions of Rational and Telelogic. Both companies have testing technology, but Rational’s Test Realtime focuses on what they call “structural” tests – that is, tests that check out the integrity of software with no reference to intent or requirements, static and dynamic analysis being representative examples. Telelogic’s Rhapsody Test Conductor approaches test from the requirements side, again providing a path from requirements to test and back to ensure that design intent has been met. These two test capabilities have now been integrated so that they can be applied together in a single environment.

Meanwhile, in a very specific corner of the software world, one finds the community of Ada programmers, and, in a very specific neighborhood of that community, one finds those versed in SPARK. SPARK is actually two things. First, it is a very carefully constructed subset of the Ada language that is completely unambiguous and lends itself to formal proofs of correctness. Second, it provides an explicit way of annotating an Ada program with specific intent using text that looks to a compiler like a comment, but that looks to an analyzer like a description that should correspond exactly to the code in the unit. For example, if a procedure is annotated to say that a variable foo should not change, then an analysis tool can check to see if it’s at all possible for foo to be changed, and, if it is, an error can be flagged.

SPARK makes sense only for extremely sensitive, critical programs where correctness is paramount. Discipline is enforced because every property of the program can be formally proven, and code must match annotated intent. AdaCore, which makes Ada development tools, and Praxis, a systems engineering company, have announced their intent to collaborate to promote what Praxis calls its SPARK-oriented Correct by Construction methodology for improving the integrity of mission- and safety-critical software.

Sorry, that’s our policy

Zooming out a bit, Parasoft Embedded, while announcing a new release of their C++test toolkit, which can auto-generate unit tests to maximize coverage, also discussed their broader view of what are referred to as Software Development Lifecycle (SDLC) tools. Parasoft focuses on “policy-based” development, where policies can vary widely, from the specific “release all dynamic memory at the end of a unit” to the broad “you must conduct a peer code design review.”
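
To make the specific end of that spectrum concrete, here’s a hypothetical fragment (not Parasoft’s actual rule syntax or checker output) of the kind of code a “release all dynamic memory at the end of a unit” policy would flag, next to a compliant version:

```cpp
#include <cstddef>
#include <vector>

// Violation: the early return leaks the buffer, so a checker enforcing
// "release all dynamic memory at the end of a unit" would flag this path.
double average_leaky(const double* samples, std::size_t n) {
    double* scratch = new double[n];
    if (n == 0) {
        return 0.0;  // scratch is never released on this path
    }
    double sum = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        scratch[i] = samples[i];  // scratch copy, purely for illustration
        sum += scratch[i];
    }
    delete[] scratch;
    return sum / n;
}

// Compliant: RAII guarantees release on every path out of the unit.
double average_clean(const double* samples, std::size_t n) {
    if (n == 0) {
        return 0.0;
    }
    std::vector<double> scratch(samples, samples + n);
    double sum = 0.0;
    for (double v : scratch) {
        sum += v;
    }
    return sum / scratch.size();
}
```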

There’s a balance to be struck here, since these kinds of tools need to be active without getting in the way. Once integrated, the policies and the policing tools can have a surprisingly broad reach. On a “private” level, they can let a developer know when something’s been missed, which should generally be a good thing. If something “stupid” is flagged, the intent is that it spur a policy decision by the team rather than resentment of the tool for fixating on trivia. If the team decides an issue is trivial, it can be taken out of the policy; if not, then it’s the team, not the tool, that’s being pedantic.

On the more public side of things, apparently all kinds of scorekeeping can be done, with developers able to see how they rank against their (unnamed) compatriots. Test results and other metrics can actually be sent automatically to select BlackBerries or iPhones. Nothing like knowing that the tool is telling your boss where you rank to add a little motivation to your Monday. Who needs sleep when you can have stress instead?

Just where did you get that?

Finally, there’s a completely new management angle being exploited by Protecode: IP. It is becoming more and more common for code to be re-used, and a company may “re-use” other software in addition to its own. This need not be nefarious; developers may have purchased IP, or they may be including open-source software. But along with any software come possible licensing issues and the fear of contamination. If you establish an IP policy for your company, how can you be sure it’s being followed? And even if it is being followed, are traceability requirements being met? Can you account for where all the software came from – particularly the parts your team didn’t write directly? Do you know whether you’ve inadvertently introduced malicious or sloppy code?

Protecode has a set of tools that can track and check the IP being used in a project. This can be done in one of two modes: real-time and batch. The real-time version works in conjunction with the design environment to monitor activity: when, for example, code is pasted into a project, that event is noted and recorded. If a policy is violated, an alert is issued, which can then be dispositioned, with a record kept of how it was resolved. The batch mode is not tied to the design environment; it analyzes the files in a directory all at once rather than during actual coding.
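
As a guess at the shape of the bookkeeping involved (every field name below is invented for illustration, not Protecode’s schema), each monitored event might be logged as a record along these lines:

```cpp
#include <string>

// Hypothetical audit record for one monitored event.
struct IpEvent {
    std::string file;         // where the code landed in the project
    std::string timestamp;    // when the paste (or import) happened
    std::string origin;       // provenance, if the pedigree recognized it
    bool policy_violation;    // whether a policy rule was tripped
    std::string disposition;  // how any alert was resolved, e.g. "approved"
};
```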

The analysis is rather different from what you might be used to. Protecode maintains what they call a code “pedigree.” A company can subscribe to the pedigree, in which case checks are done over the internet against the main stored copy. For fields like defense, where internet access may be forbidden, a copy of the pedigree can be installed and updated on-site. This pedigree, wherever it’s kept, can be considered a repository of lots and lots of the code that exists out in the world, and it essentially allows the tool to recognize something being pasted and record its provenance.

The comparison isn’t made by actually shipping code across to do the test, which could itself violate containment policies; instead, the tool hashes the code and compares the hash to known hashes in the pedigree. Many lines of code are hashed together and compared as a chunk. You can actually set the minimum size that will be checked, since small chunks of code are much more likely to look like other small chunks of code the world around and trigger false positives. So you can require, say, 50 lines of code before a test is triggered.
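
A very rough sketch of the flavor of that scheme (the real hash, normalization, and chunking are Protecode’s and aren’t public here; everything below is a placeholder) might look like this:

```cpp
#include <cstddef>
#include <functional>
#include <string>
#include <unordered_set>
#include <vector>

// Placeholder fingerprint: a real tool would use a robust hash over
// normalized source. Either way, only the hash ever leaves the code base,
// which is the point of the scheme.
std::size_t fingerprint(const std::vector<std::string>& lines,
                        std::size_t start, std::size_t count) {
    std::string joined;
    for (std::size_t i = start; i < start + count; ++i) {
        joined += lines[i];
        joined += '\n';
    }
    return std::hash<std::string>{}(joined);
}

// Slide a window of min_lines lines over the pasted code and look each
// chunk's hash up in the pedigree; anything smaller than the threshold is
// skipped, since tiny chunks look like everyone else's tiny chunks.
bool matches_pedigree(const std::vector<std::string>& pasted,
                      const std::unordered_set<std::size_t>& pedigree,
                      std::size_t min_lines = 50) {
    if (pasted.size() < min_lines) {
        return false;  // too small to test
    }
    for (std::size_t start = 0; start + min_lines <= pasted.size(); ++start) {
        if (pedigree.count(fingerprint(pasted, start, min_lines))) {
            return true;
        }
    }
    return false;
}
```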

If something isn’t tested against the pedigree, or doesn’t match anything in the pedigree, that doesn’t necessarily mean code can sneak in unnoticed; it’s just that the inserted code won’t automatically be flagged as violating a policy even if it actually does. A record of the “paste” event (or whatever it was) will still be kept.

So while the individual bits of news may highlight only specific tools or upgrades, taken together they paint a picture of a growing set of capabilities for inserting discipline into the development process in a manner that can feel less painful. In the past, it might have felt like management handed down lame rules that created a ton of work for no reason beyond management’s ability to brag about the rules it imposed; now management has a chance to put some assistance behind those rules so that they might actually be followed.

*Loosely, “We Must Have Discipline, People!”

Links:
LDRA’s TBrun
IBM Telelogic Rhapsody
Adacore/Praxis alliance
Parasoft Embedded’s C++test
Parasoft Embedded policy management
Protecode
