“Multi-function, cross-linked infotainment systems and the associated in-car electronics
are a growing reliability plague for many brands.”
– Consumer Reports 2014 Annual Auto Reliability Survey
Given the increase in code complexity, connectivity, and the chase for more features, it’s no surprise that defects in automotive software have become the norm. Whether it’s something as simple as a malfunctioning radio or as alarming as an engine stall, consumers, manufacturers, and researchers are struggling to figure out how to build more reliable software. In the past four years alone, we’ve seen the following major news items about issues with automotive software:
• In March 2014, a reported software bug could lead to front passenger airbags not being deployed
• In 2013, a combination of issues within system components and software could cause the unexpected application of brakes, without illuminating the brake lights
• In 2011, quality perception among buyers dropped dramatically due to software glitches with a manufacturer’s in-vehicle entertainment system, prompting an unprecedented distribution of 250,000 flash drives to update customer vehicles
• In 2010, software was blamed for causing a one-second lag in the application of brakes, allowing the vehicle to travel farther than intended before the brakes took hold
In 2013, more cars were recalled than were sold, by nearly 45 percent. Much of this comes down to finances: manufacturers calculate that it’s more cost-effective to react to software issues after the vehicle has been delivered than to prevent recalls at all costs during development. A big reason is that manufacturers haven’t caught up with the latest techniques in automated software testing, which make verification and validation far more cost-effective than they were even ten years ago.
Analysis vs. paralysis
The days of manual software testing are long gone; we’re in an era of fast, efficient automated testing. This includes verification by analysis, or checking the behavior of applications through automated software techniques. A popular type of analysis is static code analysis (SCA), a method of analyzing code without executing the application. The automotive industry has been using SCA for quite some time, but most organizations are stuck using it in the traditional way: finding simple, annoying bugs within a small set of files (or even file by file) so that developers don’t have to worry about them. Modern SCA can do so much more.
The SCA of today finds programmatic, memory, security, and standards defects across the entire development team’s code by tracing inter-procedural control- and data-flow dependencies, right from developers’ desktops. This extends the scope of automated testing by orders of magnitude: instead of finding defects in one file at a time, it finds defects that result from a combination of files touched by a combination of developers.
A further benefit of modern SCA is “on-the-fly” analysis at the desktop. Taking a cue from IDEs and harnessing the processing power of today’s computers, static code analysis can run in real-time, finding issues as code is being written. This on-the-fly analysis is similar to the live compilers found in today’s IDEs, giving instant and continuous feedback on defects being introduced into the code. This screenshot of Klocwork illustrates how this works within Microsoft Visual Studio.
Presenting analysis results this way gives developers two things they’ve never had before: rapid identification of issues in the same environment that they’re coding in and discovery of complex issues that arise from interactions with code in other parts of the system. The benefit of this is that issues are found at the earliest possible point, as code is being written, making them easier to understand and faster to fix.