All static analysis tools are not created equal

on Mar 8, 2011 • by Brendan Harrison

Yes, it’s true (!) and, as anyone in this space knows, there is a huge difference between static analysis tools, their level of sophistication, and their approach to developer adoption. Gary McGraw & John Steven from Cigital describe their views on this topic, including ‘5 pitfalls’ that customers should avoid when evaluating tools. These pitfalls mostly amount to the fact that results can vary significantly across different tools, code bases, and tool operators, so be aware of this when conducting your benchmarking. Their overall recommendation:

“The upshot? Use your own code instead of a pre-fab evaluation suite. You probably have the makings of a good set of tests within your own organization’s application base….”
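To make that variance concrete, here’s a minimal sketch of how you might compare two tools’ findings on your own codebase. It assumes each tool can export its findings to CSV with file, line, and checker columns; the filenames and column names here are hypothetical, so adapt them to whatever your tools actually emit.

    import csv

    def load_findings(path):
        """Load a tool's CSV export as a set of (file, line, checker) tuples."""
        with open(path, newline="") as f:
            return {(row["file"], int(row["line"]), row["checker"])
                    for row in csv.DictReader(f)}

    tool_a = load_findings("tool_a_findings.csv")
    tool_b = load_findings("tool_b_findings.csv")

    print(f"Tool A only: {len(tool_a - tool_b)}")
    print(f"Tool B only: {len(tool_b - tool_a)}")
    print(f"Both tools:  {len(tool_a & tool_b)}")

Even this naive exact-match comparison understates the problem: two tools rarely agree on line numbers or checker names for the same underlying defect, which is precisely the kind of variance the article warns about.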

I agree with this recommendation and can honestly say we rarely, if ever, run into evaluations where customers rely exclusively on pre-fab test suites instead of their own code, for many of the reasons outlined in their article. So I’d say the market has been embracing this recommendation wholeheartedly for some time. Beyond it, what else should customers consider when evaluating these tools? Here are a couple of other significant areas where you’ll find that, yes, all tools are not created equal.

  • Environment support. Particularly in the embedded software space, integration with your build environments, compiler support, and the ability to work with multiple software branches are all crucial to a successful deployment (see the first sketch after this list). Not all tools support these areas well, and these capabilities can often make or break a deployment.
  • Developer adoption. Frankly, this is everything, and a big part of achieving developer adoption is the quality of the analysis, an issue raised in the article. Obviously, a tool that generates accurate, useful results will get you well on your way to strong developer adoption, but that’s not everything. How are defects described to developers, including the trace information? Do developers want to run their own desktop static analysis rather than fetching results periodically from the integration build? If so, how smart is the vendor’s desktop analysis? The second sketch after this list shows one way integration-build results can be kept from overwhelming developers.
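On the environment side, here’s a minimal sketch of scripting an analysis run across several branches as part of a build. The “analyzer” command, its flags, and the branch names are all placeholders; substitute your tool’s real build-capture step. The point is that the tool has to wrap your actual build so it sees exactly what your compiler sees.

    import subprocess

    # Hypothetical branch names -- use your own.
    BRANCHES = ["main", "release-1.4", "release-1.5"]

    for branch in BRANCHES:
        subprocess.run(["git", "checkout", branch], check=True)
        subprocess.run(["make", "clean"], check=True)
        # "analyzer" is a placeholder CLI: it wraps the real build so the
        # analysis sees the true compiler flags, include paths, and
        # cross-compiler quirks for this branch.
        subprocess.run(["analyzer", "--output", f"results-{branch}.json",
                        "make", "all"], check=True)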
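On the adoption side, one technique (my illustration, not something the article prescribes) is to diff integration-build results against a baseline so developers only ever see defects that are new to their changes. A minimal sketch, assuming findings are exported as JSON lists of file/line/checker records (again, a hypothetical format):

    import json

    def finding_keys(path):
        """Reduce a findings export to comparable (file, line, checker) keys."""
        with open(path) as f:
            return {(d["file"], d["line"], d["checker"]) for d in json.load(f)}

    baseline = finding_keys("results-main.json")     # last known-good build
    current = finding_keys("results-feature.json")   # developer's build

    # Show developers only what their changes introduced.
    for file, line, checker in sorted(current - baseline):
        print(f"NEW: {checker} at {file}:{line}")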

So basically, picking a tool boils down to assessing three things: the quality and flexibility of the analysis, support for your development environment (not just the one you’re using in the eval!), and the developer adoption issues you’ll face down the road. Assess these three areas thoroughly and you’ll end up picking the tool that’s right for your needs.

