Our partners at Code Integrity have a good blog that touches on many of the benefits of, and barriers to, static analysis within a development organization. They have an interesting post on “False false positives” – a great phrase that captures one of the key challenges in developer adoption of the technology.
While increased sophistication means that static analysis tools can catch more problems with a higher degree of accuracy, the burden on the reviewer to interpret the results correctly increases as well. When you grep through code for a pattern, you can quickly review (and dismiss) many of the results because you understand exactly what your “analysis” is doing. With static source code analysis, the reasoning behind a result is much less apparent.
We see many engineers look at a complex bug report and not take the time necessary to understand the problem and fix it. This is mostly because they don’t understand what the static analysis tool is doing and how deeply it is analyzing the code. The result is a real bug being marked as a false positive – or a “false false positive” if you will. These bugs then disappear from the queue, never to be seen again – a lost opportunity.
One of their key recommendations for overcoming this barrier is using training and joint review of results to educate developers on why the tool is flagging a potential error, what the mitigation options are, and so on. Code Integrity offers a range of deployment and training services to help customers over these kinds of deployment hurdles.
In our experience, all developers need is one ‘aha’ moment where the tool finds a nasty, subtle bug that would be hard to find by any other method. Once that happens, the developer is a convert. I would also say the burden isn’t just on training, but on the tool vendors as well. We all have to keep improving usability so that developers can instantly see why the tool is flagging an error, and so that the tool gives them all the information they need to recognize the bug and take the appropriate action.