Not Seeing the Forest for the Gotos

by Charles Miller on February 26, 2014

Almost every report on the recent Apple SSL security bug has focused on the code: on the failure of developers to notice the pernicious extra goto statement, on the way it could have been picked up by code review, or static analysis, or (my favourite) by making sure you put braces around one-line conditional branches.
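
For anyone who hasn't seen it, the bug boiled down to a single duplicated line. The snippet below is a simplified sketch of its shape, with hypothetical stand-in functions rather than Apple's actual code: the second goto fail is unconditional, so the final signature check is skipped while err still holds zero, and a forged signature is reported as valid.

    /* Simplified sketch of the shape of the bug. The functions below are
       hypothetical stand-ins, not Apple's code; 0 means success. */
    #include <stdio.h>

    typedef int OSStatus;

    static OSStatus hash_server_random(void) { return 0; }
    static OSStatus hash_signed_params(void) { return 0; }
    static OSStatus check_signature(void)    { return -1; }  /* forged: must fail */

    static OSStatus verify_signed_server_key_exchange(void)
    {
        OSStatus err;

        if ((err = hash_server_random()) != 0)
            goto fail;
        if ((err = hash_signed_params()) != 0)
            goto fail;
            goto fail;  /* the duplicated line: always taken, err is still 0 */
        if ((err = check_signature()) != 0)  /* never reached */
            goto fail;

    fail:
        /* cleanup would happen here */
        return err;  /* returns 0: the forged signature is accepted */
    }

    int main(void)
    {
        printf("verification returned %d\n", verify_signed_server_key_exchange());
        return 0;
    }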

Just as much has been made of the almost-too-coincidental fact that within a month of the bug shipping to the public, Apple was added to the NSA's PRISM hitlist of vendors subject to "data collection".

I'm not a conspiracy theorist, but here's how I'm 95% sure they found the bug, simply because it's too obvious a thing for them not to be doing.

Somewhere, in a boring lab in a boring building, an overworked government employee has the job of running a mundane (hopefully automated) test suite against every new release of an OS or web browser. The test suite tries to fool the browser with a collection of malformed or mis-signed SSL certificates and invalid handshakes, and rings a triumphant bell when one is mistakenly accepted as valid.
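
A harness like that doesn't need to be sophisticated. Below is a minimal sketch of the idea, where verify_handshake() is a hypothetical stand-in for the TLS implementation under test; a real suite would drive it with genuinely malformed certificates and handshake transcripts rather than labelled strings.

    /* Minimal sketch of a negative test harness. verify_handshake() is a
       hypothetical stand-in for the TLS stack under test. */
    #include <stdio.h>
    #include <string.h>

    /* Returns 0 only when the handshake should be accepted. */
    static int verify_handshake(const char *transcript)
    {
        return strcmp(transcript, "well-formed-and-correctly-signed") == 0 ? 0 : -1;
    }

    int main(void)
    {
        /* Every one of these must be rejected; a single acceptance is a bug. */
        const char *bad_handshakes[] = {
            "expired-certificate",
            "self-signed-certificate",
            "signature-made-with-the-wrong-key",
            "truncated-server-key-exchange",
        };
        size_t n = sizeof(bad_handshakes) / sizeof(bad_handshakes[0]);
        int accepted = 0;

        for (size_t i = 0; i < n; i++) {
            if (verify_handshake(bad_handshakes[i]) == 0) {
                printf("FAIL: accepted %s\n", bad_handshakes[i]);
                accepted++;
            }
        }
        if (accepted == 0)
            printf("all %zu bad handshakes rejected\n", n);
        return accepted != 0;
    }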

Focusing on goto or braces misses the point. There are countless ways a bug like this could end up in a codebase. It's not even the first, or the worst, example of an SSL certificate validation bug: back in 2002 an issue was discovered in Internet Explorer (and also, to be fair, KDE) that meant the browsers used by 90% of web users would accept a trivially forged certificate.

The Apple SSL bug existed, and remained undetected for a year and a half, because Apple wasn't testing their SSL implementation against dodgy handshakes. And it made us unsafe because the NSA, presumably alongside an unknown number of other individuals and organisations, government and otherwise, were.

It's a depressingly common blind spot for software developers. We've become much better over the years at verifying that our software works for positive assertions (All my valid certificates are accepted! Ship it!), but we're still remarkably bad at testing outside the “happy path”.

What we call hacking is a form of outsourced QA. Hackers understand the potential failure modes of systems that can lead to compromises of integrity, availability or confidentiality, and doggedly test for those failures. Sometimes they succeed because the systems are incredibly complex and the way to exploit the failure incredibly obscure, and there are just more people with more time to look at the problem from outside than from within.

Far more often, they succeed because nobody else was looking in the first place.
