The evolution of SCA: Out of developers’ hands and into the fire

on Feb 5, 2014 • by Roy Sarkar

In this continuing series (read part one here) about the history and evolution of static code analysis (SCA), we’re exploring the progression of tools and methodology over the past few decades. Originally built by developers to help reduce bugs in software, SCA now drives an entire industry focused on helping teams develop better, more secure code that meets the expectations of users. Not too long ago, in the mid-90s, SCA tools were struggling to overcome limitations in how they were designed and to find a market among developers who didn’t trust their results. They achieved some success, but it came at the cost of everything that made SCA so powerful in the beginning. Here’s what happened…

The Middle Years

Early SCA tools were designed to find common problems that occurred within a single source file: issues like uninitialized variables, NULL pointer usage, and other annoying errors that were easy to introduce and even easier to overlook. The limitation of these tools was that they operated on one file at a time, which, as software complexity and team sizes grew, made them prohibitively expensive to use and left many critical defects undetected. Those defects, caused by control or data passing between multiple software units, were often impossible to find through manual testing, and they were also the most costly to find and fix later.
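
As a small illustration (my own, not from the original post), here is the sort of deliberately buggy function an early single-file checker could flag without looking beyond the file it lives in:

  /* hypothetical example: two classic single-file defects */
  int scale(int flag) {
    int factor;          /* uninitialized: only set when flag is nonzero */
    int *p = 0;          /* NULL pointer */

    if (flag)
      factor = 2;

    return *p * factor;  /* NULL dereference; factor may be read unset */
  }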

A different approach to analysis was needed, one that opened up all the code on a project to scrutiny. The answer was to move the analysis away from individual developers and into the integration builds. By scrutinizing a larger code base, tools could trace complex control and data interactions between procedures and discover more potential issues. Consider this example:

  int foo(int x, int y, int* ptr) {
    /* true only when x is odd and y is even */
    if ((x & 1) && !(y & 1))
      return *ptr = 42;   /* dereferences ptr without a NULL check */
    return -1;
  }

  int bar(int x) {
    int temp;
    /* the x < 10 branch passes NULL into foo */
    return x < 10 ? foo(x, 32, NULL) : foo(x, 33, &temp);
  }

The only way to know whether the *ptr assignment in function foo is safe is for the analysis engine to investigate all values and ranges of variables through all possible logic combinations in all functions. Here, bar passes NULL to foo on the x < 10 path, and foo dereferences ptr exactly when x is odd and y is even, so foo(x, 32, NULL) can crash for any odd x below 10. These combinations could exist within a single file or across different build units, and could be the result of more than one developer’s code.
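
To make this concrete, here is a minimal sketch, entirely my own rather than any vendor’s actual engine, of the kind of “null-ness” fact an interprocedural analyzer might track for ptr by joining what it learns at foo’s two call sites:

  #include <stdio.h>

  /* a tiny three-valued lattice for tracking whether a pointer may be NULL */
  typedef enum { NON_NULL, IS_NULL, MAYBE_NULL } nullness;

  /* join facts from different paths: agreement is kept, disagreement widens */
  static nullness join(nullness a, nullness b) {
    return (a == b) ? a : MAYBE_NULL;
  }

  int main(void) {
    /* bar passes NULL on the x < 10 path and &temp on the other,
       so at foo's entry the analyzer joins IS_NULL with NON_NULL */
    nullness ptr_at_entry = join(IS_NULL, NON_NULL);

    if (ptr_at_entry != NON_NULL)
      printf("warning: *ptr in foo may dereference NULL\n");
    return 0;
  }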

With this growth in software complexity, techniques had to be found to expand the analysis state space while minimizing false positives (not to mention keeping performance and resource usage in check). Rather than simply parsing the code in a file, SCA tools had to build abstractions of the entire code base to model all possible control and data flows in the system. As the tools got better and faster at discovering issues, the industry placed greater value on static code analysis. The technology was real, and finding defects earlier meant real costs were being saved.
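
One such abstraction is the call graph. The sketch below, again hypothetical, shows the kind of project-wide structure a tool might build so it can carry data-flow facts like the one above between functions and across build units:

  #include <stdio.h>

  #define MAX_CALLEES 8

  /* one node per function in the program, with edges to its callees */
  typedef struct fn_node {
    const char *name;
    struct fn_node *callees[MAX_CALLEES];
    int callee_count;
  } fn_node;

  static void add_call(fn_node *caller, fn_node *callee) {
    if (caller->callee_count < MAX_CALLEES)
      caller->callees[caller->callee_count++] = callee;
  }

  int main(void) {
    fn_node foo = { .name = "foo" };
    fn_node bar = { .name = "bar" };
    add_call(&bar, &foo);  /* bar calls foo; the analyzer walks this edge */

    for (int i = 0; i < bar.callee_count; i++)
      printf("%s -> %s\n", bar.name, bar.callees[i]->name);
    return 0;
  }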

Unfortunately, these great technical advancements left one major philosophy behind: the analysis was no longer in the hands of the developers. This was more than a philosophical loss. Running the analysis in a centralized build meant it happened later in the lifecycle, where defects cost more to fix. Defect detection and reporting also became someone else’s job (such as the QA or integration teams), so developers lost ownership of their fixes and became more like reluctant suppliers to another party. It became clear that the people who benefited most from static code analysis were now being isolated from it.

This was a period when people lost sight of the true value of static code analysis in favor of advancing the technology. It fell under the control of test and QA teams rather than being an integral part of the developer’s workflow. Real value lay not only in increasing the fidelity of the analysis but also in understanding who was performing it. Remember, SCA was originally built by developers to find bugs as early as possible in the life cycle, when they are easiest and cheapest to fix.

Next time, we’ll see how this philosophy is re-emerging in the next generation of SCA tools, which are leading the charge to cover more types of issues across more environments while bringing the analysis power back into developers’ hands.

Do you see any limitations in how you’re using static code analysis now? Let’s discuss your experiences in the comments below!

Read part three here.
