I see signs that the “penny has finally dropped” and true continuous integration (CI) is steadily making its way into embedded software development. Yes, OK, I know there are pockets of wizened DevOps teams out there, largely in the telecom space, that do understand and indeed practice true CI. However, I’m not yet convinced that it’s the norm where embedded development is concerned, and certainly not in the safety and mission-critical systems arena where I’ve spent my career.
My opinion was typified by a LinkedIn discussion I was privy to earlier this year, in which it was suggested that most static code analysis (SCA) tools already support CI because they have a command-line interface, presumably on the assumption that they can therefore be called from Jenkins and similar tools. Alarmingly, this view – that putting a job in Jenkins, or any other build management system, automatically gives you a nice new and shiny CI process – is not uncommon in some walks of software development. My Rogue Wave Software colleagues who work on the Zend products – alongside developers creating large-scale web-based systems, where CI really matured – would be horrified. (I will come back to this…)
Let’s step back for a moment and reflect.
CI embodies the concept that by checking the impact of changes more frequently – ideally on each individual change – we’re able to quickly identify the cause of any problems and remediate as soon as possible with no nasty surprises downstream. CI gives us greater understanding, earlier, so we can then act upon that knowledge as we see fit.
For my Zend colleagues working with rapid deployment web technologies, the idea of extending CI to cover continuous delivery seems natural – delivery is the next step in their cycle that introduces risk of failure or delays. By contrast, for us embedded systems developers, and especially those of us in safety or mission-critical systems, the potential to derail a development process often comes from analysis, testing, verification and validation, and quality assurance. So for us, the logical next steps are continuous analysis, continuous testing, and continuous compliance. Why not? Address the risks early, and deal with them right away.
This brings me back to the cause of my outburst – what makes an SCA tool truly work in a CI process?
Yes, having some way to call the SCA tool in question from your build management system is something of a prerequisite, but does that really make a tool CI compatible? Let’s assume I have 10,000 existing issues (not uncommon). We don’t want each check-in to render a new copy of this list with one or two additions hidden in the fog of war; so does the analysis tool report just what we really care about – what changed? Does the tool in question behave incrementally? Does it report back its findings in a timeframe that makes them relevant before the next inevitable addition to the code stream? Is the report accessible and useful from the environment in which it was created – the CI build management platform? If the answer to any of those questions is no, then I would argue the tool is probably not going to support a true CI process.
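To make the “report only what changed” point concrete, here is a minimal sketch of differential reporting as a CI step. It is a hypothetical illustration, not any particular SCA tool’s interface: I’ve assumed findings arrive as simple records with file, rule, and message fields, and that line numbers are deliberately excluded from the fingerprint, since they shift whenever unrelated code above a finding is edited.

```python
# Hypothetical sketch: suppress the 10,000 pre-existing findings and
# surface only what a check-in introduced. Field names ("file", "rule",
# "message") are illustrative assumptions, not a real tool's schema.

def fingerprint(finding):
    # Identify a finding independently of its line number, which moves
    # as unrelated code is added or removed above it.
    return (finding["file"], finding["rule"], finding["message"])

def new_findings(baseline, current):
    # Return only findings present in this run but not in the baseline.
    seen = {fingerprint(f) for f in baseline}
    return [f for f in current if fingerprint(f) not in seen]

# Baseline from the last accepted build; current from this check-in.
baseline = [
    {"file": "uart.c", "rule": "NULL.DEREF", "message": "p may be null"},
]
current = baseline + [
    {"file": "adc.c", "rule": "UNINIT.VAR", "message": "x used before init"},
]

for f in new_findings(baseline, current):
    print(f"{f['file']}: {f['rule']}: {f['message']}")
```

A real incremental tool does far more than this (tracking fixed findings, re-analysing only affected translation units), but even this toy version shows why a bare command-line interface isn’t the whole story: the baseline has to be stored, matched, and subtracted somewhere for the CI report to stay readable.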