The news about the Volkswagen emissions defeat is a surprise for many, scary for some, and a familiar story for anyone in software development. At least from a technical standpoint. Over the next few weeks we’ll see the financial and reputational fallout (VW’s market value has already plunged more than 20%), and perhaps even some change in the way automakers are regulated and do engineering work, but developers will approach this from a different perspective: how smart was the testing (or lack of testing) behind this?
A bit of background: it’s alleged that Volkswagen deployed “cheat” code in the emissions control systems of several models of diesel vehicles between 2009 and 2015, code whose sole purpose was to lower nitrogen oxide emissions during EPA testing and let emissions exceed allowable levels during normal use. Using a combination of inputs, such as steering angle and speed, the cheat code could identify when the vehicle was undergoing testing and adjust emissions accordingly. Business and consumer impacts aside, this example highlights an extremely challenging aspect of software testing.
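To make the idea concrete, here is a purely illustrative sketch of how test-cycle detection from sensor inputs might work. Every function name and threshold below is hypothetical; this is not Volkswagen’s actual logic, only a minimal example of the general technique the reports describe (inferring a dynamometer test from inputs like steering angle and speed):

```python
# Hypothetical sketch of a "defeat device" test-cycle detector.
# All names and thresholds are invented for illustration.

def looks_like_dyno_test(steering_angle_deg, speed_kmh):
    """Heuristic: on a dynamometer the drive wheels turn, but the
    steering wheel barely moves from center."""
    stationary_steering = abs(steering_angle_deg) < 1.0
    wheels_moving = speed_kmh > 0
    return stationary_steering and wheels_moving

def emissions_mode(sensor_log):
    """Pick a control mode from a log of (steering_angle_deg, speed_kmh)
    samples: if nearly all samples look test-like, run in the low-NOx
    'test' calibration, otherwise in the normal 'road' calibration."""
    test_like = sum(1 for sample in sensor_log if looks_like_dyno_test(*sample))
    return "test" if test_like > 0.9 * len(sensor_log) else "road"

# Steering pinned near zero while the speedometer climbs: test-like.
dyno_log = [(0.2, 50.0)] * 60
print(emissions_mode(dyno_log))   # → test

# Normal driving with steering input: road mode.
road_log = [(15.0, 50.0)] * 60
print(emissions_mode(road_log))   # → road
```

The unsettling point for testers is how little code this takes: a handful of comparisons on inputs the test harness never varies is enough to make the system behave differently under observation than in the field.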
How important is independence in testing?
We’ve all been taught that you should never stop at testing your own code; you should get someone else to do it too. That way, you get an independent set of ideas, experiences, and knowledge exercising your code in ways that you never thought of, or are too deep in the weeds to realize. For mission-critical systems, that independence is taken further, with verification authorities from outside your organization providing their own sets of eyes on your code, often proving or disproving compliance with known requirements and standards. But, as this emissions cheat code shows, it’s still possible (and evidently quite easy) to defeat our accepted practices.
It comes down to two questions: are we testing for the right things, and are we testing all the possibilities we should? In Volkswagen’s case, they presumably were testing for the right things and all the possibilities necessary to successfully deliver cars to consumers. In the EPA’s case, it seems that they were testing for the right things in principle (emissions levels) but perhaps not in practice across all possible areas of non-compliance, including software. And this isn’t the first time: other manufacturers in years past have passed emissions testing using cheat code. If nothing else, this news item should spark some serious discussions on how deep compliance testing should go. Automobile manufacturers are already very familiar with software standards that protect people (MISRA, ISO 26262), so perhaps it’s time to look at extending those standards to protecting the environment as well.
We’re moving rapidly towards automobiles that are driven entirely by software, and these twists are just the beginning of the journey. If most people would never have thought that a small bit of code could have a direct impact on the environment, imagine their surprise once we tackle these same issues, and more, with all-electric and self-driving cars.
• Watch this webinar to learn three ways to deliver secure, compliant, defect-free automotive software
• Learn five ways to protect your software supply chain from hacks, quacks, and wrecks on SlideShare