Patch, Rinse, Repeat.

by Charles Miller on March 15, 2007

So a year ago, David Maynor and Jon Ellch demonstrated to the Washington Post that they could "Hijack a MacBook in 60 seconds or less". Some people called shenanigans, first citing the differences between the claimed vulnerability and the demonstrated exploit, and later finding evidence that the demonstration may have been entirely manufactured. Many people dismissed these rebuttals as the work of "Macintosh zealots" who "refused to admit their shiny boxes could have a security flaw."

A year later, Maynor attempted to clear his name by demonstrating that he could, in fact, exploit a WiFi vulnerability that Apple fixed in one of the updates following OS X 10.4.6. Some people have called shenanigans, pointing out that, with a year to refine his exploit, and a patch from Apple to examine, Maynor has gone from demonstrating a hijack to just triggering a kernel panic. Such people are, of course, obviously biased Macintosh zealots.

Enough of that for now. Moving on, we can vastly over-simplify computer security into three groups of people:

  1. People who build secure systems
  2. People who publish flaws in secure systems
  3. People who exploit flaws in secure systems

The people in group 1 do most of the important work. The people in group 2 get most of the attention. The people in group 3 do most of the damage.

Once again, a vastly over-simplified categorisation. But useful nonetheless.

At the moment, the primary currency of the second group is credit. You gain value as a 'security researcher' based on the potential impact of the flaws you are credited with discovering. There is an implied "you scratch my back, I scratch yours" agreement between vendors and the exploit-research community. Vendors will release timely patches and carefully give credit in press-releases, release-notes and advisories to anyone who reports a security bug to them. In exchange, the community will pat such vendors on the back on mailing-lists, and continue to give them advance notice of flaws in the future.

(This is the crux of Maynor's accusations against Apple: they fixed a flaw in their WiFi drivers without giving him the credit he felt he was due for discovering it. It doesn't take a great stretch of the imagination to link the perception that Apple weren't playing fair with 'the way things are done in the vulnerability disclosure business' to January's disclosure-without-prior-notification month of Apple bugs.)

Being named in a high-profile advisory gives the researcher kudos amongst his peers, gets him invited to speak at conferences, and gives amateurs a leg-up into paid work. It's their equivalent of publishing an academic paper. Exploit or perish.

There's also a certain amount of geek wish-fulfilment involved, of course. All the jargon and silly names make the whole thing look like some bizarre role-playing game taken into real life, where you score experience points for publishing a damning advisory, hopefully enough to help you level up in the security community.

(Although the most egregious example of "information technology as a role-playing game" I've encountered was back when I used to browse the Usenet net-abuse newsgroups. After a few days of watching pseudonymous vigilantes bicker about who could claim the "kill" for having a spammer's account revoked, I was sorely tempted to post "How about you both get XP for the orc, but NightBringer the Mighty gets the +1 dagger.")

The problem with exploit-discovery, though, is that for the most part it's not nearly as exciting as it's made out to be. The vast majority of exploits belong to a small set of common flaws that developers were just too lazy (I mean, "busy with more important things") to prevent. Finding them is an exercise in "volunteer QA". Firewall pioneer Marcus Ranum put it this way last February on the firewall-wizards mailing-list:

A skilled attacker is someone who has internalized a set of failure analysis of past failures, and can forward-project those failures (using imagination) and hypothesize instances of those failures into the future. Put concretely - a skilled attacker understands that there are buffer overruns, and has a good grasp of where they usually occur, and laboriously examines software to see if the usual bugs are in the usual places. This is a process that, if the code was developed under a design discipline, would be replaced trivially with a process of code-review and unit testing (a little design modularization wouldn't hurt, either!).

But it's not actually rocket science or even interesting. What's so skilled about sitting with some commercial app and single-stepping until you get to a place where it does network I/O, then reviewing the surrounding code to see if there's a memory size error? (Hi, David!) Maybe YOU think that's security wizardry but, to me, that's the most boring grunt-work on earth. It's only interesting because right now there's a shockingly huge amount of bad code being sold, and that makes a huge target space for the "hit space bar all night, find a bug, and pimp a vulnerability" crowd to play with.
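By way of illustration, here is a contrived C fragment of my own (not taken from any real driver, product, or from Maynor's demonstration) showing the "usual bug in the usual place" that Ranum is describing: network input copied into a fixed-size buffer with no length check.

```c
#include <string.h>
#include <sys/types.h>
#include <sys/socket.h>

/* Hypothetical request handler: reads a request off a socket, then
 * copies it into a much smaller buffer without checking its length. */
void handle_request(int sock)
{
    char peek[4096];
    char name[64];

    ssize_t n = recv(sock, peek, sizeof(peek) - 1, 0);
    if (n <= 0)
        return;
    peek[n] = '\0';

    /* BUG: peek can hold up to 4095 bytes, but name[] holds only 64.
     * strcpy() will happily write past the end of name, trashing the
     * stack -- the "memory size error" near network I/O that Ranum
     * describes. */
    strcpy(name, peek);

    (void)name; /* ... pretend we go on to parse name ... */
}
```

Spotting this by single-stepping through a release binary is tedious; spotting it in a code review, or catching it with a unit test that feeds in an over-long request, is trivial, which is Ranum's point.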

This cycle of discovering and patching exploits does not, on its own, make us significantly safer. The reason there is such an industry in the first place is that the software we run on a daily basis is riddled with holes, the result of an industry where the weight of demand and the speed of passing trends dictate quickly producing software that is 'good enough'. Gumming up one hole in a colander just means one less hole.

The people who build secure systems are the ones who make us safer. They're the ones who, instead of saying "Buffer overruns are a problem, so we should examine existing software for buffer overruns", say "buffer overruns are a problem, so we should stop teaching students that strcat() even exists" (or if that doesn't work, stop teaching them that C exists). They're the ones who didn't care about the SQL Slammer worm, not because they rushed to patch their servers the moment the exploit was discovered, but because their network was far too smart to deliver untrusted UDP traffic to (or from) a database box in the first place.
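To make the strcat() point concrete, here is a small sketch of my own (not from any curriculum or from the systems mentioned above): the same string-building written first with the unbounded calls students are traditionally taught, then with a bounded alternative that cannot write past the end of the buffer. The function names and buffer sizes are invented for the example.

```c
#include <stdio.h>
#include <string.h>

/* The unbounded version: nothing stops 'user' from overflowing 'out'. */
void greet_unsafe(char *out, const char *user)
{
    strcpy(out, "Hello, ");
    strcat(out, user);          /* no idea how big 'out' really is */
}

/* The bounded version: snprintf() writes at most 'outlen' bytes,
 * including the terminating NUL, however long 'user' turns out to be. */
void greet_safe(char *out, size_t outlen, const char *user)
{
    snprintf(out, outlen, "Hello, %s", user);
}

int main(void)
{
    char buf[16];
    greet_safe(buf, sizeof(buf), "a deliberately over-long user name");
    puts(buf);                  /* prints a truncated, but safe, greeting */
    return 0;
}
```

The bounded version silently truncates an over-long name rather than corrupting memory; whether truncation is acceptable is a design decision, but at least it isn't an exploitable one.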

(As an aside: How do you tell if your network administrator is conscientious? Pay attention to what he blocks from going out of his network, not just what he stops coming in.)

The irony, though, is that if it weren't for the frequency and publicity of vulnerability disclosures, few people would bother to listen to advice on building more secure systems. Before the advent of full disclosure, there was barely sufficient incentive for vendors to patch known bugs, let alone fundamentally change their development practices so as to stop introducing new ones.

While patching any single vulnerability barely makes us safer, there's value in the publicity the community attracts by finding them. The whole funny role-playing game keeps security in the public eye, and keeps the people who sign the cheques aware that it's worthwhile to expend effort moving, painful inch by painful inch, towards an industry that isn't just an endless progression of patches.

(Or not. Ranum maintains the whole thing is just a distraction from real security, not a spur towards it.)
