Automotive security flaws pt. 4 – improper input validation & info exposure

on Dec 2, 2015 • by Jeff Hildreth

Over 20 percent of automotive cybersecurity vulnerabilities this year have been caused by improper input validation and information exposure. Find out how you can address these problems now...


Over the last few weeks we’ve been discussing the top ten automotive cybersecurity vulnerabilities of 2015. Today we’ll cover improper input validation and information exposure.

#4 Improper input validation (43:53)

This vulnerability covers incorrect or missing validation of any input that could affect a program’s control flow or data flow. It can include things like:

• Improper pathname limitations
• Improper pathname equivalence resolution
• External control of configuration settings
• Improper neutralization (command injection, SQL injection, cross-site scripting, etc.)
• Missing XML validation
• Improper log neutralization
• Improper restriction of operations within the bounds of a memory buffer
• Improper array index validation
• Copy into buffer without size check
• Improper null termination

Basically, anywhere your program takes in information from the outside world can fall under this vulnerability.
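To make the SQL injection item in the list above a bit more concrete, here is a minimal sketch (not from the webinar) of neutralizing untrusted input with the SQLite C API by binding it as a parameter instead of concatenating it into the query string. The users table and the find_user() function are invented for illustration.

```c
/* Minimal sketch: bind untrusted input as a parameter so it is treated
 * purely as data, never as SQL text.  Table and column names are
 * hypothetical; error handling is abbreviated. */
#include <sqlite3.h>
#include <stdio.h>

int find_user(sqlite3 *db, const char *untrusted_name)
{
    sqlite3_stmt *stmt = NULL;
    int rc;

    /* DON'T build the SQL by concatenating untrusted_name -- that is
     * classic SQL injection.  Use a '?' placeholder instead. */
    rc = sqlite3_prepare_v2(db,
            "SELECT id FROM users WHERE name = ?;", -1, &stmt, NULL);
    if (rc != SQLITE_OK)
        return rc;

    /* The bound value can contain quotes, semicolons, anything --
     * it can never change the structure of the statement. */
    sqlite3_bind_text(stmt, 1, untrusted_name, -1, SQLITE_TRANSIENT);

    while ((rc = sqlite3_step(stmt)) == SQLITE_ROW)
        printf("matched id: %d\n", sqlite3_column_int(stmt, 0));

    sqlite3_finalize(stmt);
    return (rc == SQLITE_DONE) ? SQLITE_OK : rc;
}
```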

Example of improper input validation (44:36)

SQLite example fail

Our example of improper input validation occurs in SQLite, a package that’s used in a lot of embedded systems. SQLite does not properly implement comparison operators, which allows context-dependent attackers to cause a denial of service or possibly have unspecified other impacts via a crafted CHECK clause. This example is a little complicated and maybe a little paranoid, but still something that needs to be taken seriously.

At 45 minutes and 35 seconds, we examine the SQLite failure. As you can see in the code, the fail here is that we’re assigning a couple of flags that undo actions taken by the routine. The overall problem is that the wrong path through the code can end up resetting those flags, making the memory appear dynamically allocated when it wasn’t, or vice versa.

This is obviously bad because an attacker could use this vulnerability to get information that they aren’t supposed to have access to, or cause memory to be freed that wasn’t supposed to be freed.
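To picture what that looks like, here is a simplified, hypothetical sketch of the pattern being described: a value object whose flag word records how its buffer was allocated, and a routine that blindly overwrites those flags. This is illustrative only, not the actual SQLite source.

```c
/* Simplified, hypothetical sketch of the flag problem described above.
 * A value carries flag bits recording whether its buffer is static or
 * was obtained from malloc(). */
#define MEM_Static 0x01   /* buffer points at static storage */
#define MEM_Dyn    0x02   /* buffer came from malloc()       */

typedef struct Value {
    char     *buf;
    unsigned  flags;
} Value;

void value_set_error(Value *v, char *msg)
{
    /* BUG: blindly overwriting the flag word wipes out the ownership bits.
     * Down the wrong code path, a malloc'd buffer can later look static
     * (and leak), or a static buffer can look dynamic (and get freed). */
    v->buf   = msg;
    v->flags = MEM_Static;    /* any previous MEM_Dyn bit is lost here */
}
```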

Fixing the SQLite vulnerability (46:32)

SQLite example fix

The fix for this vulnerability is a little bit complicated, but basically the areas that handle the flag need to save off the important bits to local memory every time. This ensures that when we do start playing around with multiple flag values, we don’t lose the specific bits we care about.
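Continuing the hypothetical sketch from above (same Value struct and MEM_* flags, still not the real SQLite patch), the fix pattern looks roughly like this: save the ownership bits to a local variable before the routine starts rewriting the flag word, and restore them on the way out.

```c
/* Fix pattern sketch: preserve the ownership bits across any flag
 * manipulation the routine performs. */
void value_convert(Value *v)
{
    /* Save the bits that say who owns the buffer. */
    unsigned saved_ownership = v->flags & (MEM_Static | MEM_Dyn);

    /* ... the routine is now free to rewrite v->flags for its own
     * purposes without risking the ownership information ... */
    v->flags = 0;

    /* Restore on every exit path, so a static buffer can never be
     * mistaken for a malloc'd one, or vice versa. */
    v->flags |= saved_ownership;
}
```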

However, the flag in our example is actually really prone to usage errors and could use a redesign to ensure that it can perform safely.

Oftentimes, improper input validation errors can be fixed through correct architecture and design, proper implementation, and automated static analysis.

When designing code, be sure to check data on both the client and server sides of a transaction. Don’t assume that the other side is only passing along the information you intended; actually make sure that this is the case.
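As a minimal illustration of that advice (the message format and the limit below are made up), a receiving handler should re-validate a length field it gets from the other side before the value drives any copy or index, rather than trusting that the sender already checked it.

```c
/* Hedged sketch: validate an externally supplied length before using it.
 * Covers the "copy into buffer without size check" item as well. */
#include <stdint.h>
#include <string.h>

#define MAX_PAYLOAD 256   /* hypothetical protocol limit */

/* dst must be able to hold MAX_PAYLOAD bytes.
 * Returns 0 on success, -1 if the input is rejected. */
int handle_message(uint8_t *dst, const uint8_t *payload, uint32_t claimed_len)
{
    /* Check the untrusted length against our real bounds first. */
    if (payload == NULL || claimed_len > MAX_PAYLOAD)
        return -1;

    memcpy(dst, payload, claimed_len);   /* safe: length already checked */
    return 0;
}
```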

Static code analysis tools are very, very good at finding unclean, unquoted, or unescaped data, so be sure to use an SCA tool when writing code.

#3 Information exposure (48:49)

Information exposure is intentional or unintentional disclosure of information to an actor that isn’t explicitly authorized. This information exposure tends to happen through:

• Sent data
• Data queries
• Discrepancies (such as differing responses or timing)
• Error messages
• Debug messages
• Process environment
• Caching
• Indexing private data

Programmers tend to forget about information exposure through error messages and debug messages, as well as any information that may find its way into a log file. You need to be mindful of everything that lives in your log file, and be confident that you don’t have any information sitting there that may be of value to an attacker.
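For example, here is a hedged sketch of keeping a credential out of the log file. The log_msg() helper and the token format are invented for illustration; the point is to record a correlatable fingerprint rather than the secret itself.

```c
/* Hypothetical sketch: log something traceable, not something reusable. */
#include <stdio.h>
#include <string.h>

static void log_msg(const char *msg) { fprintf(stderr, "LOG: %s\n", msg); }

void log_auth_attempt(const char *user, const char *session_token)
{
    char line[128];

    /* BAD: would hand an attacker a live session token if the log leaks:
     *   snprintf(line, sizeof line, "auth %s token=%s", user, session_token);
     */

    /* Better: record only the last few characters, enough to correlate
     * entries without disclosing the credential. */
    size_t n = strlen(session_token);
    const char *tail = (n > 4) ? session_token + n - 4 : session_token;
    snprintf(line, sizeof line, "auth attempt user=%s token=...%s", user, tail);
    log_msg(line);
}
```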

Example of information exposure (49:51)

Information exposure: remediation

Our Adobe AIR example is an interesting illustration of why you shouldn’t just give information away. Adobe Flash Player wasn’t properly restricting discovery of memory addresses, which allowed attackers to bypass the ASLR protection mechanism via unspecified vectors. This defeats the purpose of loading memory at randomized locations, because attackers are able to find out where everything lives.

Though this may seem like a small issue, it’s important to remember that most attackers need more than one vulnerability to make an impact, so you really need to be careful about what memory information you are releasing so attackers aren’t able to use it in combination with other information or another vulnerability.
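As a small, hypothetical illustration (this is not Adobe’s code), even a debug or error message that prints a raw pointer hands an attacker exactly the kind of address information that helps defeat ASLR.

```c
/* Hedged sketch: describe a failure without disclosing addresses. */
#include <stdio.h>

void report_error(const char *context, void *obj)
{
    /* BAD: the %p value tells an attacker where obj lives, which can be
     * combined with a separate memory-corruption bug to bypass ASLR:
     *   fprintf(stderr, "error in %s, object at %p\n", context, obj);
     */

    /* Better: no addresses in anything that crosses a trust boundary. */
    (void)obj;
    fprintf(stderr, "error in %s\n", context);
}
```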

We don’t have Adobe’s code so we can’t explore the actual fix, but at 51 minutes and 25 seconds we dive into ways to make sure that information exposures don’t happen.

By compartmentalizing systems with safe areas, by not allowing sensitive data to leave trust boundaries, and by exercising caution when interfacing outside of these trust boundaries, you can make sure you are securely designing your code to protect from these information exposures.

You should also use code analysis to secure your information. Automated static code analysis can perform context-dependent weakness analysis, while dynamic code analysis can perform fuzz testing and lets you test within a monitored virtual environment. Manual code analysis is still important to ensure that there are no areas of external contact allowing unintended data access, and that no information is shared unless it is absolutely necessary. (Yes, this includes error and debug messages.)

Conclusion

We will wrap up our series on the top 10 security vulnerabilities just before the holidays, so be sure to check back.

In the meantime:

• Hear Larry Ponemon from the Ponemon Institute analyze the findings from a recent cybersecurity survey of automakers and tier one suppliers.
• Read the article “Software Tools: Pays to be paranoid” in Vehicle Electronics by Steve Howard, a software quality and security consultant at Rogue Wave.
