The evolution of SCA: Taking it back!

February 13, 2014 • by Roy Sarkar

In this continuing series (read part one here, part two here) about the history and evolution of static code analysis (SCA), we’re exploring the progression of tools and methodology over the past few decades. Originally built by developers to help reduce bugs in software, SCA now drives an entire industry focused on helping teams develop better, more secure code that meets the expectations of users. Today’s SCA tools cover larger code bases and find more problems than ever before, but until recently they’ve fallen outside the control of developers. With analysis in the hands of centralized test and QA teams, problems are found later, cost more to fix, and get thrown over the wall between testers and developers.

What if it were possible to give back control to the developers while also retaining the powerful omniscience of centralized analysis? There is a proven way to do it, so read on to find out how it works…

The Next Generation

Moving the power of static code analysis back to the developer means two things:

1. Developers see problems in their own environment, allowing immediate fixes

2. The analysis includes code from the entire project, to account for control and data interactions between all parts of the system

This combination of capabilities means the analysis engine must be aware of issues caused by code anywhere in the system while still showing results at the developer’s desktop.

Consider the example from our last blog post where we had two functions that interacted with each other:

  #include <stddef.h>  /* for NULL */

  int foo(int x, int y, int* ptr) {
    /* Writes through ptr when x is odd and y is even */
    if ((x & 1) && !(y & 1))
      return *ptr = 42;
    return -1;
  }

  int bar(int x) {
    int temp;
    /* When x < 10, foo receives NULL; an odd x below 10 then
       makes foo write through a NULL pointer */
    return x < 10 ? foo(x, 32, NULL) : foo(x, 33, &temp);
  }

If these functions were written by two different developers, a local analysis run on either developer’s machine couldn’t verify whether the *ptr assignment in foo is safe, because that depends on the arguments bar passes in. To overcome this problem, most tools today use a centralized build analysis, so potential issues are found only after both sets of code are checked in. Covering as much code as possible is good, but the trade-off is that problems become more expensive to fix later (for many reasons, not the least of which is that the fix now involves two developers rather than one).

A different approach is to design the analysis engine to run with a distributed system context, allowing both developers to perform accurate, local analysis that accounts for code written by the other person. This “connected desktop” environment integrates each local analysis engine with a central server that maintains a map of all function and class method behaviors across the entire system. Because that map is always available to every developer’s local analysis, the results are as accurate as those of a centralized analysis. It’s the best of both worlds, and it’s also faster and more efficient since the analysis runs on local builds with the developer right there to fix any problems.
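
To make this concrete, here’s a minimal sketch of what one entry in such a behavior map might look like and how a local engine could use it. Everything here (param_summary, check_call, the field names) is a hypothetical illustration, not the actual format of any particular tool:

  #include <stdbool.h>
  #include <stdio.h>

  /* One hypothetical entry in the server's behavior map: a recorded
     fact about how a function treats one of its parameters. */
  typedef struct {
    const char *function;        /* e.g., "foo" */
    int         param_index;     /* zero-based parameter position */
    bool        may_dereference; /* function may dereference this parameter */
    bool        checks_for_null; /* function guards the dereference itself */
  } param_summary;

  /* What the server could report about foo from our example: it may
     write through its third parameter without checking it for NULL. */
  static const param_summary foo_ptr_summary = { "foo", 2, true, false };

  /* A local engine analyzing bar() combines that summary with what it
     sees at the call site to flag the defect, with no access to foo's
     source required. */
  static void check_call(const param_summary *s, bool arg_may_be_null) {
    if (arg_may_be_null && s->may_dereference && !s->checks_for_null)
      printf("possible NULL dereference: %s, parameter %d\n",
             s->function, s->param_index);
  }

  int main(void) {
    check_call(&foo_ptr_summary, true);  /* bar passes NULL when x < 10 */
    return 0;
  }

In a real system the server would hold one such summary for every function and class method, refreshed as code is checked in, so each desktop sees the whole system’s behavior without building the whole system.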

Distributed system context helps the analysis understand what’s going on throughout the project’s source code, but there’s also the case of libraries that have already been built. Developers often call functions built by someone else, and these functions are essentially “black holes” as far as traditional control and data flow analysis is concerned. To accommodate this, an analysis architecture that aggregates the analysis results of all developers, including results captured when the library code was originally analyzed, and provides them to the local engines can dig into problems caused by libraries. This “peer-to-peer context” gives every individual the results of the group, offering a complete system view without requiring a complete source analysis.
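
To illustrate the library case, consider a call into a prebuilt routine. The parse_config function below is hypothetical, standing in for any library code whose source isn’t available on the developer’s machine:

  #include <stddef.h>

  /* Stand-in for a routine shipped in a prebuilt library. Locally only
     its declaration would be visible; the body is given here just so
     the sketch is complete. Note it dereferences path unconditionally. */
  int parse_config(const char *path) {
    return (path[0] == '#') ? -1 : 0;
  }

  int load_defaults(void) {
    const char *path = NULL;  /* e.g., a lookup that can fail */

    /* Seeing only the declaration, a purely local analysis must treat
       parse_config as a black hole and stay silent here. A shared
       peer-to-peer context supplies the summary recorded when the
       library was analyzed, telling the engine that this call
       dereferences a NULL pointer. */
    return parse_config(path);
  }

The aggregated results play the same role as the behavior map above: the group’s analysis stands in for source code that no individual desktop can see.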

Both distributed system context and peer-to-peer analysis bring the global awareness of centralized SCA down to the developer’s desktop, delivering the benefits of SCA earlier in the software development life cycle than ever before and making defects faster to find and cheaper to fix.

Now we’re back to having SCA at the desktop. Given the tools out there, you may think that this is the earliest possible point to find and fix problems but...there is one other way to squeeze the power in even earlier. Join us next time to learn how this is possible and to learn more about the tool making it happen.

How are SCA tools making your life easier or more difficult? Let’s hear your thoughts in the comments below!
