Two months ago, I thought reaching 1.0 was one of the toughest things a software project could do. On your left is a pile of bug reports, on your right a pile of feature requests, and in the middle there's a calendar screaming "Release, already!"
You have to release software. Spending your entire life working towards the mythically perfect 1.0 is a conceit only the more naïve open source projects can sustain, and even then it's annoying as hell. So in the end you're brutal. You throw all the remaining feature requests into the next version, then go through each bug and ask two questions:
- How serious is this, really?
- How hard is this going to be to fix?
Measuring how serious a bug is isn't as clear as you'd think. A minor niggling problem can be more serious than a crashing bug if the minor problem annoys everyone, while the crashing bug only hits one or two people. That said, I generally order bugs from 1 to 4, as follows:
1. Crashes (in web-app terms, a 500 error)
2. Behaves incorrectly (put good data in, get bad data back)
3. Behaves sub-optimally
4. Cosmetic problems
(Update: Data-loss bugs are probably even more important than crashing bugs, and security flaws shouldn't even stay unfixed long enough to make it onto the list.)
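To make the triage concrete, here's a minimal sketch (in Python, with invented names, not lifted from any real bug tracker) of those two questions and the severity ordering, expressed as a greedy cut against the calendar:

```python
from dataclasses import dataclass
from enum import IntEnum


class Severity(IntEnum):
    """Lower number = more serious, matching the 1-4 ordering above."""
    CRASH = 1        # 500 errors; data loss would rank above even this
    INCORRECT = 2    # good data in, bad data out
    SUBOPTIMAL = 3   # works, but awkwardly
    COSMETIC = 4     # looks wrong, behaves fine


@dataclass
class Bug:
    title: str
    severity: Severity    # "How serious is this, really?"
    effort_days: float    # "How hard is this going to be to fix?"


def triage(bugs, days_left):
    """Fix the most serious bugs that fit in the time left; defer the rest."""
    fix, defer = [], []
    remaining = days_left
    for bug in sorted(bugs, key=lambda b: (b.severity, b.effort_days)):
        if bug.effort_days <= remaining:
            fix.append(bug)
            remaining -= bug.effort_days
        else:
            defer.append(bug)
    return fix, defer
```

It's crude, but it's roughly the shape of the decision: everything deferred lands on the pile for the next version.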
You want to fix everything in all four categories. You can't. Even if you had exactly enough time to fix all your existing bugs, you'll run into the problem that with every bug you fix, you increase the likelihood of introducing a new bug, possibly one higher up the list than the one you started with.
The biggest risks come from fixing bugs in categories 3 and 4. Fixes that affect performance or workflow are dangerous because of the significant likelihood of introducing incorrect behaviour in the process. Rigorous testing helps, but not as much as you'd think, because components will often have been written to be used in a particular way. Tests are naturally biased towards the way the developer was thinking when they wrote the code. Change the workflow or introduce a cache, and suddenly you're using the code in a way the original designer didn't think of (or they'd have done it that way in the first place), and thus didn't test as thoroughly for.
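To illustrate that bias, here's a contrived Python sketch (hypothetical names, not drawn from any real codebase): the original test only exercises the write-once, read-back workflow its author had in mind, so it keeps passing after a cache is added, even though any caller who reads after an out-of-band update now gets stale data.

```python
class UserStore:
    """The original, uncached lookup the tests were written against."""

    def __init__(self, db):
        self.db = db

    def get_email(self, user_id):
        return self.db[user_id]


class CachedUserStore(UserStore):
    """A later optimisation: cache lookups to cut down on database hits."""

    def __init__(self, db):
        super().__init__(db)
        self._cache = {}

    def get_email(self, user_id):
        if user_id not in self._cache:
            self._cache[user_id] = self.db[user_id]
        return self._cache[user_id]


def test_get_email():
    # The workflow the original author imagined: store a value, read it back.
    db = {"alice": "alice@example.com"}
    store = CachedUserStore(db)
    assert store.get_email("alice") == "alice@example.com"  # still passes


def unanticipated_workflow():
    # The workflow nobody tested: something else updates the record after
    # the first read, and the cache quietly keeps serving the old value.
    db = {"alice": "alice@example.com"}
    store = CachedUserStore(db)
    store.get_email("alice")                 # primes the cache
    db["alice"] = "alice@new.example.com"    # out-of-band update
    return store.get_email("alice")          # returns the stale address
```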
Even a simple thing like a cosmetic fix can be a risk: foo needs to link to bar, but the developer adding the link is rushed and does it the old way, which you'd stopped doing because it failed under certain circumstances.
So obviously, on the road to 1.0, your risk/benefit analysis is weighted heavily against fixing sub-optimal behaviour or cosmetic problems.
This is why armchair user-interface criticism annoys me.
Certain forms of UI criticism are valuable. While I don't wholly agree with it, John Siracusa's critique of the OS X Finder is a considered review that tackles the philosophy of the application's design from the ground up, questioning the basic assumptions the application made about how it was going to be used. Similarly, it's usually obvious when an application received no interface attention whatsoever, and was just a bunch of form fields slapped one after the other by a blind programmer[1].
However, when random pundits pick through an application or suite of apps and point out all the places the UI guidelines weren't strictly adhered to, or all the places the buttons could perhaps be better arranged, they're just finding all those category 3 and 4 bugs that didn't make the cut. By all means, file bug reports. Bug reports on UI issues are incredibly useful. But don't slap them all together in a web page, however entertaining, as if they somehow prove that the interface as a whole is slipshod[2].
Classic Mac users moving to OS X are a particularly annoying example. The original Mac OS was tiny: the Finder was about the same size as the `ls` binary on my Linux box. A marvel of engineering, indeed, but when you're writing something to be that small, you can't afford inconsistency, because inconsistency means more code, and suddenly you can't fit in ROM any more. Subsequent versions of the Classic OS were layered on top of this core, piece by piece, over a period of fifteen years.
Of course an OS or application designed this way will end up more internally consistent and more polished than an OS that had to spring from its father's head fully formed and monolithic. It should come as no surprise that with each version they polish things a bit more, but they don't get all the problems, because there's a simple commercial reality that says you have to release something that's imperfect in order to pay for perfecting it. Which isn't to say you should release something that's crap: you shouldn't. It's just to say that you shouldn't expect there not to be significant (but lessening with each subsequent version) room for improvement.
Anyway, as I was saying in the first paragraph, I used to think that the hardest thing a software project had to do was make the painful cut of features and bugs for 1.0. It turns out I was wrong. The hardest thing to do is, in fact, to make the cut for 1.1. The moment you release 1.0, you start getting these incredible things called users, who find all those bugs you never turned up during development, and who make really cool suggestions for things you could add.
And you've got those piles of issues in front of you again, and the calendar is looking threatening.
[1] My design skills suck[3]. I understand the principles of good UI design quite well, but I have always deferred to someone else to do the laying out of stuff because I know in advance that their eye is better than mine.
[2] I'm not talking about criticism of any product I've been involved with here: this is a general gripe, not a personal one.
[3] Although I'm quite happy with how this weblog turned out.