
Taming C?

There is a problem with the C programming language: it is too flexible. While this flexibility is what makes it attractive to many users, it can also introduce serious problems. And the flexibility is a direct result of the language’s history.

As you will remember, C was developed at Bell Labs in the early 1970s. (It was called C because it was to be an improvement on the B programming language.) Intended to be a machine-independent replacement for assembly language, it gradually built a reputation as an extremely useful general purpose programming language, in an era when the computing spectrum was clogged with hundreds of special purpose or wannabe general purpose languages. C’s strength was demonstrated when UNIX, also a Bell Labs product, was re-written in C. The first landmark was the publication of Kernighan and Ritchie’s The C Programming Language in 1978, and in the 1980s there was a proliferation of C compilers for every platform, from supercomputers down to the emerging desktop personal computers.

And, of course, programmers being very clever and creative people, each compiler had its own little features, idiosyncrasies and quirks. This meant that C programs that exploited a specific compiler’s features might fail on a different compiler or, what was worse, would apparently compile successfully but behave differently and in unpredictable ways. So in 1983 ANSI (the American National Standards Institute) began more than five years of work to produce a single C standard (ANSI X3.159-1989), which a year later was ratified by ISO (the International Organization for Standardization) as ISO/IEC 9899:1990.

There is an old saying that a camel is a horse designed by a committee: this is unfair on camels, as they are very useful when you want to cross a desert. ANSI C was, in some ways, much worse than a camel. Obviously, the people who formed the working party were people who had knowledge of C compilers. Equally predictably, many of those were convinced that their own compiler’s special extension/feature/quirk had its place in the final standard. And many did indeed find their way there. A further issue was that the language definition in the standard was neither rigorous, nor formally defined, nor internally consistent. Apart from that it was fine!

At the beginning of the 1990s there were other concerns being aired about the safety aspects of unrestrained C use, and MISRA was set up in Britain. “MISRA, The Motor Industry Software Reliability Association, is a collaboration between vehicle manufacturers, component suppliers and engineering consultancies which seeks to promote best practice in developing safety-related electronic systems in road vehicles and other embedded systems.”

An early focus was on a “restricted subset of a standardized structured language”, and MISRA C was published in 1998 as “Guidelines for the use of the C language in vehicle based software.” While this was a considerable success, feedback from users made it clear that there were areas that could be further improved. These improvements were implemented in MISRA-C:2004 and included

  • Ensuring that the language used is consistent with the standard language
  • Replacing generalized rules for Undefined Behaviour with specific rules targeted at Undefined Behaviour only (a brief illustration of undefined behaviour follows this list)
  • Ensuring “one rule, one issue”; i.e. complex rules are split into atomic rules for ease of compliance
  • Adding to and improving the code examples
  • Removing the option for tool-less use.
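
To give a feel for what Undefined Behaviour means in practice, here is a small fragment of my own construction (not an example from the MISRA documents). It compiles cleanly on most compilers, yet the C standard leaves the result of both marked lines undefined:

    #include <limits.h>
    #include <stdio.h>

    int main(void)
    {
        int i = 0;
        /* Undefined behaviour: i is modified twice with no
           intervening sequence point. */
        i = i++ + 1;

        int big = INT_MAX;
        /* Undefined behaviour: signed integer overflow. */
        big = big + 1;

        printf("%d %d\n", i, big);
        return 0;
    }

Different compilers, or the same compiler at different optimisation levels, are entitled to produce different results here, which is exactly why rules targeting such constructs matter.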

The 2004 edition was also renamed “Guidelines for the use of the C language in critical systems” to reflect the widespread adoption of MISRA outside the automotive field. A fuller exploration of MISRA C and the recently published MISRA C++ will form a later article, but one example of a MISRA rule and its evolution from 1998 to 2004 may give the flavour. In 1998 it was said:

Advisory: Provision should be made for appropriate run-time checking.

This was changed in 2004 to:

Required: Minimisation of run-time failures must be ensured by the use of at least one of (a) static analysis tools/techniques; (b) dynamic analysis tools/techniques; (c) explicit coding of checks to handle run-time faults.
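
As a sketch of what option (c) can look like in code (my illustration, not an example taken from the MISRA documents), a division routine might defend itself against run-time faults like this:

    #include <limits.h>
    #include <stdbool.h>
    #include <stddef.h>

    /* Stores the quotient through *result and returns true only when
       the division is safe; callers must check the return value. */
    bool safe_divide(int numerator, int denominator, int *result)
    {
        if (result == NULL) {
            return false;   /* run-time check: valid output pointer */
        }
        if (denominator == 0) {
            return false;   /* run-time check: division by zero */
        }
        if (numerator == INT_MIN && denominator == -1) {
            return false;   /* run-time check: signed overflow */
        }
        *result = numerator / denominator;
        return true;
    }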

And that brings us to the second half of this piece – static code analysis. Static code analysis tools, sometimes called linting tools, have been around for a long time: announcing the recent release of PC-lint version 9.0, Gimpel claimed that PC-lint for C/C++, “the longest continuously advertised software tool in human history,” was first introduced in 1985. And, as we will see later, there are tools with roots that are even older. There is a stack of tools out there, and they range in price from free to extremely expensive.

So why should you want to use a linting tool? Surely the fact that a program has compiled is all that is necessary? If C were a well-defined language, that might be a good and safe viewpoint. But as we discussed earlier, it is not well-defined, and programs that have happily compiled can still harbour application-wrecking bugs. If you are lucky, you can catch these during the integration and test phases. But we all know about bugs that have survived the full development process and only rear their heads when the product is in service. Sometimes these are merely irritating to the user (like the set-top boxes that need rebooting at least once a week). In other cases, bugs can be so severe that the product has to be withdrawn, at considerable damage to the manufacturer’s reputation.
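
A classic illustration (my own, not from any vendor’s literature): the fragment below is perfectly legal C and compiles without complaint, yet the condition contains an assignment where a comparison was almost certainly intended, and a linting tool will flag it immediately:

    #include <stdio.h>

    /* Hypothetical status source: 0 means success, non-zero failure. */
    static int get_status(void) { return -1; }

    int main(void)
    {
        int status = get_status();

        /* Bug: '=' assigns 0 to status, so the condition is always
           false and the error path is silently skipped. */
        if (status = 0) {
            printf("error detected\n");
        }
        return 0;
    }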

There is significant economic value in catching bugs early in the development process. The cost of fixing a bug increases the later in the development cycle it is discovered. Depending on whose figures you believe, and there are a lot of studies in this area, the cost of fixing a bug found at system test can be ten or more times the cost of catching it before the code ever reaches compilation.

But if you still need convincing that compilers are not enough, Green Hills Software has introduced and is actively marketing DoubleCheck, a static code analyser built into the company’s C/C++ compiler.

To quote from the data-sheet, DoubleCheck:

determines potential execution paths through code, including paths into and across subroutine calls, and how the values of program objects (such as standalone variables or fields within aggregates) could change across these paths.

DoubleCheck looks for many types of flaws, including:

  • Potential NULL pointer dereferences
  • Access beyond an allocated area (e.g. array or dynamically allocated buffer); otherwise known as a buffer overflow
  • Potential writes to read-only memory
  • Reads of potentially uninitialized objects
  • Resource leaks (e.g. memory leaks and file descriptor leaks)
  • Use of memory that has already been deallocated
  • Out of scope memory usage (e.g. returning the address of an automatic variable from a subroutine)
  • Failure to set a return value from a subroutine
  • Buffer and array underflows
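
To make a few of these flaw classes concrete, here is a short fragment of my own construction (not taken from the DoubleCheck material) containing three of them:

    #include <string.h>

    /* Flaw: returns the address of an automatic variable, which is
       out of scope as soon as the function returns. */
    char *make_greeting(void)
    {
        char buffer[8];
        /* Flaw: "Hello, world" needs 13 bytes including the
           terminator, so strcpy writes beyond the 8-byte buffer. */
        strcpy(buffer, "Hello, world");
        return buffer;
    }

    int main(void)
    {
        int count;                         /* never initialized */
        char *greeting = make_greeting();
        /* Flaw: reads the potentially uninitialized 'count'. */
        return count + (greeting != 0);
    }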

The analyzer understands the behavior of many standard runtime library functions. For example, it knows that subroutines like free should be passed pointers to memory allocated by subroutines like malloc. The analyzer uses this information to detect errors in code that calls, or uses the result of a call to, these functions.
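
A sketch of the kind of misuse this library knowledge catches (again my own example):

    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        char stack_buf[16];
        char *heap_buf = malloc(16);
        if (heap_buf == NULL) {
            return 1;
        }
        strcpy(heap_buf, "hello");
        free(heap_buf);

        /* Flaw: use of memory that has already been deallocated. */
        heap_buf[0] = 'H';

        /* Flaw: free must be given a pointer obtained from malloc
           (or similar); an automatic array does not qualify. */
        free(stack_buf);
        return 0;
    }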

This description broadly fits most of the other static code analysis tools on the market. One of the examples that Green Hills uses is the Apache web server. Using DoubleCheck for an analysis of Version 2.2.3 — which apparently has 200,000 lines of code, 80,000 individual executable statements and 2,000 functions — revealed, among other serious issues, multiple examples of NULL pointer dereferences. These occur when a memory allocation call is followed by one or more accesses of the returned pointer without first checking whether the allocation failed and returned a NULL pointer, something that Green Hills says is “all but guaranteed to cause a fatal crash.”
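
Reduced to a few lines, the pattern looks like this (a generic sketch, not code taken from Apache):

    #include <stdlib.h>
    #include <string.h>

    void risky(void)
    {
        char *p = malloc(16);
        strcpy(p, "data");      /* crashes if malloc failed: p is NULL */
        free(p);
    }

    void checked(void)
    {
        char *p = malloc(16);
        if (p == NULL) {
            return;             /* handle the allocation failure */
        }
        strcpy(p, "data");
        free(p);
    }

A path-following analyser sees that on the allocation-failure path the pointer in risky is NULL when it is dereferenced, and flags it.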

So, now that you are convinced, what is stopping you from using one? You are in good company if you don’t. Green Hills (that is the last time I will mention them) says that a survey at this year’s embedded world conference and exhibition revealed that only around 5% of programmers were using static code analysis or similar tools.

The barriers to use are multiple. Firstly, there is the cultural issue of the programmer who knows that he (usually it is a he who thinks this way) is writing good code and doesn’t need the hassle of using these tools.

Moving on to more logical issues, some of the tools are just hard to use. They produce a zillion error messages, sometimes with no indication of the relative importance of any specific message. The vendors explain that their tools can be “trained” to match your coding style and that, quite quickly, both you and the tool will know how to work together. But often users, particularly reluctant users, conclude that it is just too much hassle to use the tool. This also leads to another issue: since running code through the analyser produces difficulties, analysis is left until the code is “complete,” which can mean very long analysis runs and even bigger lists of error messages. The ideal is that the code is checked every time changes are made, giving short analysis times and, more importantly, short error lists.

Klocwork’s Insight tool can be set up so that the individual developer automatically invokes an analysis when returning code to a central library. Klocwork also claims to be able to carry out analysis of the entire software system, looking at all elements before compilation and integration, removing still more possible causes of errors.

Two British companies have been pioneers in static code analysis. The founders of LDRA probably invented the idea around 1975, and the company has been providing tools ever since. (The list of languages that they have supported is like a potted history of the programming world.) The company has evolved beyond static analysis and offers analysis tools for all stages of the software development cycle, giving insight not just into the code itself but into how well it implements the design, and providing metrics of quality and of how that quality changes over time.

PRQA (Programming Research) is, by contrast, a relatively young 22 years old and has also built on static analysis to provide a range of tools which, the company claims, are aimed at preventing errors as well as identifying issues, explaining how they arose and suggesting how they can be resolved.

Both companies urge the use of coding standards, such as MISRA, to improve the quality of the original code, and analysis tools to confirm that the standards are being followed and to make it easier to locate and remove violations.
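
To give a flavour of what locating a violation means in practice (an illustrative construct of my own; I am not quoting a specific rule number): MISRA-style rule sets typically restrict implicit narrowing conversions, so a checking tool would flag the first assignment below while accepting the explicit, intentional cast in the second:

    #include <stdint.h>

    void example(uint32_t wide)
    {
        uint8_t a = wide;            /* implicit narrowing: flagged */
        uint8_t b = (uint8_t)wide;   /* explicit cast shows intent  */
        (void)a;
        (void)b;
    }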

Their customer base, like that of competitors Klocwork, Coverity and Polyspace (from MathWorks), is rooted firmly in those industries that place a premium on safety and reliability, such as military, aerospace, and automotive. Clearly, if you have to certify that your code is safe, then you use all available means to make sure it is so.

What we have with static code analysis is an extremely powerful way of identifying code constructs and other problems that would show up as bugs and removing them early in the development cycle. This would seem to be sensible.

However, as things stand today, with the vast majority of companies that develop embedded systems reluctant to invest in software tools, the available tools are not being used. There is also a problem that many of the tools available are regarded by users, sometimes legitimately, as being cumbersome and not easy to use. So, sadly, outside those industries where customer requirements place a premium on safety and reliability, static code analysis tools, like many others, still have a long way to go before becoming a standard element in the design flow.
