In light of a continuing string of high-profile data breaches and the countless problems arising from the Heartbleed OpenSSL flaw, organizations across sectors are seeking ways to improve their coding processes and ensure the security of their networks, software products and services. According to a recent article from Computing, the open source community's failure to catch the Heartbleed bug shows that programmers can never be too stringent with their code review measures: even a small error can slip past millions of peer developers and cause a range of issues across the world of IT. The source pointed to an interview with Robin Seggelmann, the German programmer cited as responsible for the bug.
"I wrote the code and missed the necessary validation by an oversight," Seggelmann told The Guardian, according to Computing. "Unfortunately, this mistake also slipped through the review process and therefore made its way into the released version."
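For context, the "necessary validation" Seggelmann describes was a bounds check: the TLS heartbeat handler echoed back as many bytes as the sender claimed to have sent, without verifying that claim against the data actually received. The sketch below is a deliberately simplified illustration of that class of bug and its fix, not the actual OpenSSL code; the function name and record layout here are hypothetical.

```c
#include <stddef.h>
#include <string.h>

/* Simplified heartbeat-style handler. The record begins with a 2-byte
 * claimed payload length, followed by the payload itself. The
 * Heartbleed-class bug is echoing `claimed_len` bytes without checking
 * that claim against the bytes actually received; the explicit bounds
 * check below is the missing validation. Returns the number of bytes
 * echoed, or -1 if the record is malformed. */
int echo_heartbeat(const unsigned char *record, size_t record_len,
                   unsigned char *out, size_t out_cap)
{
    if (record_len < 2)                 /* need the 2-byte length field */
        return -1;

    size_t claimed_len = ((size_t)record[0] << 8) | record[1];

    /* The crucial check: the claimed length must fit within the data
     * actually received, and within the output buffer. */
    if (claimed_len > record_len - 2 || claimed_len > out_cap)
        return -1;                      /* discard malformed record */

    memcpy(out, record + 2, claimed_len);
    return (int)claimed_len;
}
```

Without the bounds check, a record claiming a 16 KB payload while carrying only one byte would cause the handler to copy adjacent process memory into the reply, which is exactly how Heartbleed leaked private keys and session data.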
Setting new review standards
Even though open source coding platforms offer peer review measures designed to ensure best practices in new software development, programmers must be extra cautious now that this method has been proven fallible. Seggelmann reportedly said that while open source remains a viable and important tool in a variety of contexts, there is no substitute for a deeper roster of personnel working on individual projects. Computing pointed out that since large companies such as Google, Cisco, BlackBerry and Juniper Networks all depend on open source models to develop and test their software, the Heartbleed incident should serve as a warning to all organizations that code review processes may need to be revised before it is too late.
"It has been said that 90 percent of websites are using this OpenSSL code but very few are contributing," Peter Pizzutillo, director of product marketing at software quality analysis firm CAST, explained to Computing. "The open source communities aren't as deep and robust as they should be, there are pockets of passionate developers out there so it is hard to fault them … the model only works if the takers are giving back on the code."
Security controls may need a reboot
Developers tend to become overly content with their code once it passes the initial review process, not actively assessing its security unless forced to by compliance standards. According to Computing, this passive mindset has been the underlying cause of much of the faulty source code that has led to problems over the past few years. Ian Glover, president of CREST, a not-for-profit information security assurance organization, explained to the source that large firms have a responsibility to the rest of the developer community, one they fail to uphold when errors like this slip through the cracks.
"I don't care if it's 'shrink-wrapped' or open source; firstly, it should have been developed correctly, and then tested by the organization that uses it, even if it is of low value to them. If it is critical to the business then that needs even more stringent testing," Glover told the source. "It's going to take retrospective action on websites for a long time because of bad code that's been there for many years that shouldn't have been there in the first place, and that's just dreadful."
Smaller, more frequent check-ins key
As regulatory organizations seek ways to encourage better coding standards, Dr. Dobb's recently suggested that programmers focus on improving their own practices by making smaller, more frequent check-ins that meticulously track the development of their code. With stricter review measures in place, peer review teams can spot issues before they grow into larger problems.
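The practice of small, frequent check-ins can be sketched with ordinary version-control commands. The snippet below is an illustrative workflow only; the file names and commit messages are hypothetical, and it assumes a standard git installation.

```shell
set -e
# Create a throwaway repository to demonstrate the workflow.
repo=$(mktemp -d)
cd "$repo"
git init -q
export GIT_AUTHOR_NAME=Dev GIT_AUTHOR_EMAIL=dev@example.com
export GIT_COMMITTER_NAME=Dev GIT_COMMITTER_EMAIL=dev@example.com

# Check-in 1: one small, self-contained change a reviewer can read in full.
echo "validate input length" > parser.c
git add parser.c
git commit -qm "parser: validate claimed payload length"

# Check-in 2: the next small step, committed separately rather than
# bundled into one large, hard-to-review change.
echo "echo validated payload" >> parser.c
git add parser.c
git commit -qm "parser: echo payload after validation"

# The history now records each step individually for reviewers.
git rev-list --count HEAD
```

Each commit is small enough to review line by line, so a missing validation like Heartbleed's has a better chance of being caught before release.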