August 2006


20 Aug

One problem when small companies are swallowed up by big companies is that sometimes the big company brings with it, for want of a better term, ethical baggage.

Case in point: Flickr. When I bought a Flickr Pro account a year ago, they'd recently been bought by Yahoo!, but I still felt like my money was going to help the cool, fun people who ran the site. As time progressed, though, I became less sanguine about handing money over to their parent company, whose Chinese arm shops dissident journalists to the secret police, and whose executive attitude to the matter could possibly do with some rethinking.

So I drew up plans to step quietly back from Flickr. I'd already paid for a year, so there was no real value for anyone in making any immediate move. When my account was coming up for expiry I'd extract my photos from the site, find some convenient way to host them myself, and feel that warm glow of low barrier-to-entry middle-class social consciousness.

Of course, this required a certain amount of planning and preparation, and relied on me noticing when there were only a few weeks of subscription left. Today I logged on to Flickr, noticed that my account had already expired, and quickly ponied up the $25 for another year.

I suck.

We've rented an apartment in San Francisco, so visiting Sydney employees have somewhere to stay. According to this warning on the front door, there's something inside, somewhere, that will give us cancer and cause our children to be born with three heads.

The sign directs us to a helpful website that promises to explain why the building comes with such a dire warning.

Except it doesn't. The site in question just rattles off a laundry-list of possible culprits, ranging from the everyday ("Any time organic matter such as gas, charcoal or wood is burned, Proposition 65-listed chemicals are released into the air"), to the concerning ("Construction materials... contain chemicals, such as formaldehyde resin, asbestos, arsenic, cadmium and creosote.")

So yeah. Either I'm breathing asbestos and guzzling arsenic every day, or some of the apartments have gas stoves. Thank you very much, Proposition 65.

The vagueness of the sign, plus the fact that you can find the same words on half the buildings in the city, makes it entirely useless for any purpose beyond:

  1. Covering the State's ass in case of litigation
  2. Cultivating a vague feeling of helplessness and dread

An effective warning sign must:

  1. Effectively communicate any immediate danger
  2. Give the reader enough information to evaluate the risk, and determine how to mitigate that risk

The first is important because with some dangers, by the time you've taken the time to evaluate the risk, it's too late. If someone's about to walk over a cliff, you might only have time to yell "STOP!"

The second is important because shouting "STOP!" only lasts a moment. Once you've got someone to stop, you have to tell them why. Otherwise they're either just going to shrug and step forward again (and probably not listen to you the next time you scream a warning at them), or they're going to decide that the risk of cancer is too great and never step inside a building in California.

With that in mind, let's look at two recent well-publicised computer security issues, and ask the question: "Were these effective warning signs?"

1. The MacBook WiFi Hack

In an article entitled Hijacking a Macbook in 60 Seconds, Washington Post reporter Brian Krebs (whose work has featured in this blog before) presents a video of hacker David Maynor demonstrating a remote root access exploit on a MacBook notebook through a WiFi device driver. Krebs explains that this is in fact a general, cross-platform vulnerability in device drivers that affects multiple WiFi vendors across Windows, Mac OS X and Linux.

Did this warning effectively communicate the immediate danger? Well, no. Instead of choosing a headline and angle that accurately reflect the message they're trying to communicate -- that there is a real danger to using WiFi on any platform -- the article screams "MacBook Hacked!"

This immediately muddies the water, because instead of communicating an important warning, you've prompted a large segment of Windows users to say "Yeah, take that you arrogant Mac bastards", and an equally large segment of Mac users to say "Hey! This is bullshit!" Any real point is lost in the ensuing partisan bloodshed.

Did the warning communicate enough information for third parties to evaluate the risk they were facing? No again. There was such a huge disparity between the exploit that was claimed -- a general vulnerability across platforms and vendors -- and the exploit that was demonstrated -- a single hack against an incredibly unlikely combination of hardware -- that readers were left puzzled as to what was actually being demonstrated at all.

Why demonstrate the hack against a third-party WiFi card, when nobody would ever use such a card with the AirPort-enabled MacBook? If the exploit were truly cross-platform, why not take the additional sixty seconds to demonstrate the inverse, and show the MacBook hacking into the Dell? A couple of spectacularly unconvincing explanations were offered, and now even SecureWorks, Maynor's employer, prefaces the video with a disclaimer, warning that the exploit being demonstrated differs significantly from the exploit being claimed:

This video presentation at Black Hat demonstrates vulnerabilities found in wireless device drivers. Although an Apple MacBook was used as the demo platform, it was exploited through a third-party wireless device driver - not the original wireless device driver that ships with the MacBook. As part of a responsible disclosure policy, we are not disclosing the name of the third-party wireless device driver until a patch is available.

The practical upshot of this particular security warning? Two weeks later I was at WWDC, Apple's developer conference. Present were thousands of Macintosh laptops, including my own, all logging on to the conference's open WiFi with carefree abandon.

2. The Ruby on Rails Vulnerability

Last week, word got around the blogosphere: Ruby on Rails 1.1.0 through 1.1.4 were vulnerable to a potentially nasty security exploit, and everyone was advised to upgrade to version 1.1.5 immediately. More information would be revealed as soon as everyone had a chance to upgrade.

This warning at least satisfied the first criterion. It effectively communicated the immediate danger, and gave you something you could do Right Now to prevent it. Unfortunately:

  1. It was the wrong fix: the next day, everyone who had rushed to upgrade to 1.1.5 was told to upgrade again to 1.1.6.
  2. Because of the lack of disclosure, the simpler workaround (adding a simple Apache rewrite rule; a hypothetical sketch follows this list) was not made public at the time of the first announcement, so anyone who was not willing (or able) to upgrade their production Rails installations without any testing or burn-in time was left out in the cold.
  3. When the vulnerability was finally disclosed, it was introduced with the phrase "The cat is out of the bag", suggesting that someone outside the core Rails camp had already done the obvious, and worked out what the flaw was by examining what changed between 1.1.4 and 1.1.5, something that any potential bad guy could have done just as easily.
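On that Apache workaround: the announcements never spelled out the actual rule, so what follows is purely a hypothetical sketch of the general shape such a workaround takes: refuse requests matching a suspect URL pattern at the web server, before they ever reach the Rails dispatcher. The path pattern here is an invented placeholder, not the real vulnerable route.

    # Hypothetical illustration only: reject requests for a placeholder URL
    # pattern at the web server so they never reach the Rails dispatcher.
    # "suspect_path" is invented for this example, not the actual route.
    RewriteEngine On
    RewriteCond %{REQUEST_URI} ^/suspect_path(/|$) [NC]
    RewriteRule .* - [F]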

Often, full disclosure is explained as a way to make sure vendors are responsive, using "naming and shaming" to force a faster patch schedule. This is certainly one aspect of the practice, but far more important is the fact that it gives those people who might be running the vulnerable software enough information to make informed decisions about their security.

Just in front of us in the crowd at yesterday's baseball game were a dozen or so young lads and lasses, all dressed in green t-shirts with "Add me" on the front, and their myspace IDs on the back. Well, all except this guy:

“I deleted my myspace”

I originally wrote this as a Bugtraq post, in response to the question: "XSS [Cross-site scripting] bugs are easy to discover and easy to fix, so what's the problem?" Unfortunately, it seems not to have made it through moderation, so I'll preserve my words of wisdom here for posterity:

XSS bugs are common because most web development environments make it far too easy to introduce XSS bugs.

The prevalence of any bug is directly proportional to how much extra work a programmer must do to avoid introducing it. Writing software is 90% attention to detail, but programmers are human and the more details there are to pay attention to, the more likely one will be missed.

You get buffer overruns in C code because the programmer is required to guard against them; they're not so much of a problem in Perl. SQL injection is more common in PHP (string concatenation and explicit quoting) than it is in Java (PreparedStatement and automatically-quoted argument insertion, or O/R mapping).
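To make the Java half of that comparison concrete, here's a minimal sketch of the PreparedStatement approach (the table and column names are invented for the example). The user-supplied value travels as a typed parameter rather than as part of the SQL string, so there's no quoting for the programmer to forget.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;

    public class UserLookup {
        // The "?" placeholder keeps user input out of the SQL text entirely;
        // the driver passes it as a typed parameter, so a malicious value
        // can't quote its way into the statement.
        public static String findEmail(Connection conn, String username)
                throws SQLException {
            String sql = "SELECT email FROM users WHERE username = ?";
            try (PreparedStatement stmt = conn.prepareStatement(sql)) {
                stmt.setString(1, username);
                try (ResultSet rs = stmt.executeQuery()) {
                    return rs.next() ? rs.getString("email") : null;
                }
            }
        }
    }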

XSS bugs happen because in most web dev environments, the default is to not escape entities when writing variable data into HTML. If escaping were the default, and developers had to specifically un-escape variable data that was supposed to be HTML (and the parameter to do so were called something like "unsafe"), the incidence of XSS would drop by orders of magnitude. It would also become much easier to audit unsafe usages in such apps.
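As a rough sketch of what escaping-by-default could look like, here's a hypothetical output-writer class (not any particular framework's API): every write is escaped unless the developer explicitly reaches for the method whose name advertises the risk.

    // Hypothetical sketch of an escape-by-default output API; not any real
    // framework. Variable data is HTML-escaped unless the developer
    // explicitly calls the method with "unsafe" in its name.
    public class Page {
        private final StringBuilder out = new StringBuilder();

        // The default path: everything written here is escaped.
        public void write(String data) {
            for (int i = 0; i < data.length(); i++) {
                char c = data.charAt(i);
                switch (c) {
                    case '<': out.append("&lt;"); break;
                    case '>': out.append("&gt;"); break;
                    case '&': out.append("&amp;"); break;
                    case '"': out.append("&quot;"); break;
                    default: out.append(c);
                }
            }
        }

        // The exception: the name makes the risk obvious, and makes it easy
        // to audit every place an application emits raw HTML.
        public void writeUnsafe(String rawHtml) {
            out.append(rawHtml);
        }

        public String render() {
            return out.toString();
        }
    }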

Of course, the problem with systemic solutions is that they introduce the danger of systemic exploits: if a loophole is found in environment X's escaping, then temporarily every application written in X is vulnerable. On the other hand, it means that once the loophole is fixed at the source, that fix propagates out without all developers having to learn about the new boundary condition they need to guard against.