August 2003


I have come to the conclusion over the years that the worst possible way to run any kind of volunteer1 online community is through democracy. Democracy is a high-overhead compromise that rarely works in the small-to-medium, purpose-oriented communities that tend to arise online. And yet, people keep trying it.

Constitutional Crisis

I have been involved in several online communities and once you try to start solving issues with rules rather than dialog, the problem snowballs. Arguements(sic) about new rules, interpretation of rules, past rule violations soon become a major topic for the group. There are also people that like to break the rules just because they are there. If there are no rules to argue about or break, most issues get resolved by peer pressure or the powers-that-be. -- Michael Pusateri (Argyle), http://joi.ito.com/joiwiki/IrcChannel#head-c4576b5bb334240aab6ea3136382e695ba42c9a2

Democracy is based on the theory that power is bestowed on the government by the people being governed. As such, a democracy needs a Constitution that defines how people are elected, what positions people are elected to, and what power is bestowed upon them.

If there is no explicit constitution, one is implied by the mere act of voting: an elected official is, by definition, a representative, and the people who voted will feel that this implies a duty to the voters, even when the extent of that duty is different in the mind of each person who casts a ballot.

A democratic society becomes a society of rules. The biggest implicit assumption of a democracy is that the elected officials must represent the will of the people who elected them, and must do so in a transparent, accountable fashion. This means codifying the will of the people into explicit rules, rules that then also bind the rule-makers.

This creates a massive administrative overhead. Any system of rules must be interpreted, must have its edge-cases argued and adjudicated, and gives rise to a system of precedents. The assumptions behind the rules must be examined. As the quote above says, arguments about the rules themselves become a significant factor in running the community.

Chattering Classes

In a democracy, the elected officials are beholden to their constituents. As such, it is the right, nay, the duty of the constituents to let their elected officials know exactly what they think of any issue in front of the government of the day. Every individual has a personal stake in how things are being run, even when the issues are trivial. And nobody's voice can be dismissed, because that would disenfranchise them.

This leads to a lot of fruitless talk that could otherwise be avoided.

No Real Authority

Most "power" online is an illusion. In any volunteer community, it is impossible to assign tasks. People will do what they want to do. You have no whip to drive them, and no carrot to attract them beyond the joy of accomplishing something.

Open Source knows this. Open Source succeeds when somebody has an itch to scratch that improves the software (or when some financial incentive is provided from outside to do the boring stuff), and fails when nobody finds any of the problems interesting enough to tackle.

A temptation, then, is to combine power and responsibility: give somebody a title and nominal authority over others in exchange for doing some job that they wouldn't otherwise want to do. This leads to people volunteering for the sake of volunteering rather than because they want to do that particular thing. The thing itself lies undone, or is done in the slipshod manner of somebody realising they've been conned by a promise of illusory power.

Even worse, someone who might actually be able to do the job better, and might be really enthusiastic about doing it, can't, because they can't get elected, and it's now about power instead of just about contributing.

Electioneering

If online communities have to be governed, they are best governed with light touches: the strong hand inside the velvet glove. Most such communities are communities of purpose, where everyone wants to achieve the same end but may differ in the means by which they wish to get there. Communities are best coordinated and cultivated, rather than ruled.

This leaves people who wish to be elected to a position within such a community with the problem of choosing a campaign pitch:

  1. "Elect me because I have a long record of doing neat stuff" is a weak argument because it doesn't say what more you would do if you were elected.
  2. "Elect me because I will do these things that don't involve the authority bestowed by the election" begs the question of why aren't you doing them now?
  3. "Elect me and I will maintain the status quo" doesn't differentiate you from anyone else
  4. "Elect me and I will exercise my authority and change things" seems to be the best campaign pitch.

Actually, most campaign pitches end up being a combination of (2) and (4), with the knowledge that most of the things under (2) won't ever really be done. They're election promises after all.

The promises to exercise authority and change things end up meaning, you guessed it, more arguing over rules, changes to rules and interpretation of rules.

The Adversarial System

By definition, somebody wins an election, and everyone else loses. This leads to the community being stuck permanently in an adversarial system. Maintaining the community becomes a competition, rather than cooperation. I can think of no better way to bring out everyone's personal conflicts than to make them run against each other in elections.

Solutions?

When composing an online community, avoid democracy like the plague. It should be considered the last-ditch attempt to run a community, when the alternative is it falling apart because nobody can get along and reach the compromises necessary for its day-to-day running. And really, if nobody can get along that well, isn't the community better off breaking up into smaller units that can then achieve their cross purposes separately?

1 by which I mean any community that is not beholden to commercial interests. Commercial interests change all the rules.

In honour of the comments thread of Hani's recent Javablogs post, I would like to declare Monday, September 1st to be "Post pictures of your cat to Javablogs" day.

Everyone is invited, on Monday, to post pictures of your cat in whatever category will end up syndicated on Javablogs. If you don't have a cat, find one.

Stand up for your inalienable right to post pictures of your (or some complete stranger's) cat, wherever you damn well please!

W32.Sobig.F could have done some serious damage to the Internet. It's easy to imagine how much worse it could have been if, say, the virus had a remote-administration/DDOS component.

You can blame Microsoft, of course. Or you can blame the victims who still don't know that they shouldn't open attachments. Or you can declare that email itself is broken and we need to replace it with something more secure. (More on that last one tomorrow, I think). Or you can blame the worm authors for being not-very-nice people. Or you can shrug and say "Well, it wasn't that bad, was it? Just delete the damn emails."

It's lazy to blame Microsoft. Certainly, Microsoft's Operating Systems have the worst practical security of the major consumer OS's1. The thing is, though, the difference is really only marginal. It may be slightly easier to compromise a Windows user, but if some other OS had 95% market share, the Black Hats would just make that extra few percent of effort to achieve the same ends.

There are a few simple things that OS vendors should pay more attention to. Specifically, more attention needs to be paid to making computers more secure in the default configuration. A simple example is the way MSBlaster spread. Why were DCOM services being offered over the Internet in the first place? Because it's easier to bind a service to * than to specific addresses, one suspects.
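The same trap exists in any socket API. As a minimal sketch in Java (class name and port numbers invented for illustration), the one-argument constructor happily binds to every interface, Internet-facing ones included, while binding to a specific address takes one extra, less convenient step:

import java.io.IOException;
import java.net.InetAddress;
import java.net.ServerSocket;

public class BindDemo {
    public static void main(String[] args) throws IOException {
        // The easy default: bound to * (all interfaces), so the service
        // is offered to the Internet whether you meant it to be or not.
        ServerSocket everywhere = new ServerSocket(9999);

        // The safer configuration: bound only to loopback, so only
        // processes on the local machine can connect.
        ServerSocket localOnly = new ServerSocket(9998, 50,
                InetAddress.getByName("127.0.0.1"));

        System.out.println(everywhere.getInetAddress()); // 0.0.0.0
        System.out.println(localOnly.getInetAddress());  // /127.0.0.1
    }
}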

The biggest problem, however, lies in the security model of consumer operating systems. The model has remained unchanged since 1970's Unix, and has not adapted to today's atmosphere of naive administrators and Internet-borne threats.

Modern OS's are based on the age-old multi-user security model, which aims to do two things:

  • protect users from non-users (i.e. attackers)
  • protect users from each other

On most desktops, the second is rarely used: there is one user, or there are a small number of users who trust each other. Unix and Mac OS X are better at splitting user privileges from system privileges; NT (from experience with NT4 and W2K) makes it annoying for a user/owner not to have Administrator rights on all the time, although that may have changed with XP.

Java's security model has been criticised over the years, but mostly because of flaws that have been found in its implementation. The theory behind the model was sound, and it added another dimension to the security matrix:

  • protect users from the code they run

This is what no operating system does, and what every operating system should do in today's world of fast-spreading worms, dangerous malware and non-technical users. The assumption of the OS security model is that all actions a user takes should be considered equal, and that the user's authority is delegated, unlimited and unchecked, to any software the user runs. This is the deadly assumption that allows almost all malware to spread. We should not assume that the user trusts the software he or she is running.

Simple example. There is almost no situation I can imagine where an application launched from Outlook should be permitted to modify the Windows Registry. And yet it can, because a user is permitted to change the Registry, and Outlook delegates that power unthinkingly to anything the user decides to run. If applications launched from attachments were not allowed to modify the Registry, were not permitted to talk to the network, and were not given access to the filesystem, you'd have effectively killed email-borne worms.

Java had to leap through all sorts of hoops to get its security model working--managed code and class-file validation--because the virtual machine didn't have full control over the real machine. The OS controls the horizontal and the vertical. What it says you can't do, you can't do. And it could make those decisions based on application identity (or a stack of such identities and inherited capabilities) as easily as it can now based on user identity.
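As it happens, Java's own machinery is enough to sketch the idea. A minimal example (file name invented): with the default SecurityManager installed and no policy granting FilePermission, code simply cannot write to disk, no matter who launched it:

import java.io.FileWriter;
import java.io.IOException;

public class SandboxDemo {
    public static void main(String[] args) throws IOException {
        // Install the default security manager. With no policy file
        // granting FilePermission, the write below is refused.
        System.setSecurityManager(new SecurityManager());
        try {
            new FileWriter("owned.txt").close();
            System.out.println("Write allowed");
        } catch (SecurityException e) {
            System.out.println("Write denied: " + e.getMessage());
        }
    }
}

An OS-level equivalent would make the same decision for every process, based on where the code came from rather than which user ran it.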

There are complexities: the component model of modern operating systems means we must deal with the question of the 'taint' of data transferred between components, or of applications saved to disk and then run elsewhere. But these are all solvable problems. Properly implemented, this model would massively increase the security of our desktop systems, without placing a significant usability barrier in front of the user, or limiting what they can do if they really want to.

The big question, though, is one of motive. Microsoft's biggest challenge with every OS update is to convince people that the new model is worth buying: that it does something you couldn't do before. Increased security means, by definition, that a computer will do less than it did before. Sure, they're all things you wouldn't want it to do in the first place, but selling the absence of something bad is not nearly as easy as selling the presence of something good.

Windows 3.11 came packaged with anti-virus software, but that was left out of Windows 95. Microsoft have been building all sorts of things into their OS: web browsing, instant messaging, email, multimedia playing. One would think that virus protection, and a firewall with the same sort of feature-set as ZoneAlarm, would be far more obvious contenders to be part of the OS than an IM program, and that Symantec would be shivering in their boots at the thought of their market being dragged from under them.

It won't happen, though. IE, MSN Messenger and Media Player are all visible, additive features. Virus protection and firewalling are not only subtractive, but they offer no cross-platform advantage to competitors in the way Real, Netscape or AOL threatened. Hence, Microsoft are quite happy to let someone else handle that, thank you very much.

Which is why the direction Microsoft are taking is not into the realm of increased practical security for users, but towards the DRM PC: a tightly managed OS that increases the security of the computer at the devastating cost of the freedom of the user, but with the benefit of providing a path through which newly available DRM-protected content becomes the positive feature that will be used to sell it.

1 NT's permissions model is good, pretty much everything else is rubbish.

Well, for the last week or so, some kind of misconfiguration between my ISP (Telstra) and my hosting provider (AVS) has made accessing my weblog from Telstra (and thus updating it) almost impossible.

I suppose instead I should have written my posts offline, to be waiting when I regained access to my weblog. But part of the fun of blogging is instant-gratification publication. Without that, I really couldn't be bothered to write at all. I wrote this post in a short period where it seemed to be working reliably... and of course by the time I'd finished writing the first draft, that access had been lost again. But since I've actually written it this time, I'm going to persevere with the posting thing.

I've been on holiday most of this week anyway, hence the remarkable lack of updates. I just haven't engaged my brain sufficiently for the past five days to write a coherent post. Which is amusing because one of my plans for my week off work was to write a series of really interesting articles about secure coding. You can guess how quickly that idea went away when I started relaxing.

Thus far, this week, I have:

  • Watched Fellowship of the Ring on DVD
  • Watched The Two Towers on DVD
  • Written bits of an IRC bot in Ruby
  • Played with Ruby's SOAP4R module, which is pretty cool
  • Learned that the Google SOAP API is really quite easy and clear
  • Learned that the Amazon SOAP API is really confusingly documented
  • Helped Lonita learn how MT works
  • Tried to read Return of the King, but Tolkien's "I'm writing a legend, not a story" prose style really doesn't work for me at all. I don't think I've ever read all of this book. Valiantly slogged through 175 pages of it anyway
  • Re-read Chuck Palahniuk's Survivor instead
  • Played pool and got drunk
  • Messed around with Reason, rather quickly ending up with an embarrassingly bad pastiche/blatant rip-off of a NIN instrumental number
  • Ordered Keith Hillebrandt's Useful Noise, so that I can get even more pastiche-y, but probably no less embarrassingly bad
  • Wandered through three or four music shops in town, wishing I had a spare $10k or so to waste on synths and software that I wouldn't do anything worthwhile with (Mmmmm... Korg Mmmm... Logic)
  • Watched the Belvoir St Theatre production of Ionesco's Rhinoceros, which was really good
  • Deliberately avoided Java, and all discussions of it
  • Compiled this list

From a discussion on JWZ's blog came the suggestion:

X-Sender: info@evite.com
From: <info@evite.com>

If that's in every message, try this in your .m4 or .mc file:

 LOCAL_RULESETS
 HX-Sender: $>Check_XSender
 D{MPat}info@evite.com
 D{MMsg}Spamming denied
 SCheck_XSender
 R${MPat} $* $#error $: 553 ${MMsg}
 RX-Sender: ${MPat} $* $#error $: 553 ${MMsg}

There are tabs between $* and $#.

And the follow-up:

<shodan> I'm kinda surprised that after all this time, sendmail's config is still based on ancient hieroglyphs
<shodan> . o O ( To enable sender-validation in sendmail, enable this option: bird, squiggley line, sideways man, fish )

Some people, when confronted with a problem, think "I know, I'll use regular expressions." Now they have two problems. -- Jamie Zawinski, in comp.lang.emacs

Regular expressions are a very powerful tool. They're also very easy to mis-use. The biggest problem with regexps occurs when you attempt to use a series of regular expressions to substitute for an integrated parser.

I recently upgraded Movable Type, and in the process I installed Brad Choate's excellent MT-Textile plugin. MT-Textile gives you a vaguely wiki-like syntax for blog entries that rescues you from writing a mess of angle-brackets every time you want to write a post.

I love MT-Textile, but sadly the more comfortable I get with it, the more I realise its limitations. MT-Textile is built on top of a series of regular expressions, and as such, the more you try to combine different Textile markups, the more likely you are to confuse the parser and end up with something different to what you intended. Any parser built on top of multiple regular expressions gets confused very easily, depending on the order the regexps are applied in.
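A contrived sketch of the failure mode (the markup rules here are invented, not Textile's, but the problem is the same): two individually sensible substitutions, where the second has no way of knowing what the first, or the original author, intended:

public class RegexSoup {
    public static void main(String[] args) {
        // Two innocent-looking markup rules:
        //   *word*  ->  <strong>word</strong>
        //   _word_  ->  <em>word</em>
        String text = "see _my_page_name_ for details";

        String html = text
                .replaceAll("\\*(.+?)\\*", "<strong>$1</strong>")
                .replaceAll("_(.+?)_", "<em>$1</em>");

        // Prints: see <em>my</em>page<em>name</em> for details
        // The emphasis rule can't know those underscores were part of
        // a page name, so it cheerfully mangles them.
        System.out.println(html);
    }
}

An integrated parser would tokenise the page name once, and never give the emphasis rule a chance to see inside it.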

I ran into the same problem when I was running my own wiki. I started with a Perl wiki, which (like all Perl code) was highly dependent on regular expressions. I quickly found that the effort required to add new markup to the wiki, keeping in mind the way each regexp would interact with the previous and subsequent expressions, increased exponentially with the complexity of the expression language.

After a certain point, diminishing returns will kill you.

I'd like to propose the following rule:

Every regexp that you apply to a particular block of text reduces the applicability of regular expressions by an order of magnitude.

I won't pretend to be a great expert in writing parsers—I dropped out of University before we got to the compiler-design course—but after a point, multiple regular expressions will hurt you, and you're much better off rolling your own parser.

In a comment to my previous entry, Nils Kassube points out that the RedHat glibc upgrade issue that lost me about a day of productivity, increasing the cost of the off-the-shelf RedHat by at least an order of magnitude, is a known bug, and was actually reported in April. Essentially, RedHat's package management system will blissfully allow you to "upgrade" from the i686 version of glibc to the i386 version without warning, even though it's been known for four months that this will CFTS.

If you read the bug report, you'll find that they pretty much blame this on the user. "You should know better than to trust the package-management system to... you know... manage your packages!"

RedHat are quite obviously not ready to play with the adults yet. If you are considering RedHat Linux as a solution, walk away now.

At work, I'm leading the way towards a Microsoft-Free Desktop. What this really means is that I've rebelled against running Windows, and everyone else is watching me curiously to see if I explode.

At home I am, and have been for some time, a very contented Debian user. Sure, they're slow with releases and always a few versions behind (although they're very quick with security updates). Sure, if you come to Debian without knowing what you're doing, you might get a bit lost. Sure, my next personal machine will probably be an experiment with Gentoo. All that said, Debian has for years now occupied that impressive position of being the distribution that works, and works smoothly. I find that when I set up a Debian box, I only ever install the base distribution, because I know that if I suddenly need something, I can type 'apt-get install something', and Debian will faithfully deal with all the dependencies and make that something work in the shortest possible amount of time.

In the office, however, I have to run a bunch of IBM stuff, and officially IBM only support RedHat. I tried getting Websphere installed on a Debian box last year, and it turned into such a battle of incompatible libraries that I had to give up. So RedHat it is.

The problem is, moving from Debian to RedHat has a great deal in common with a lobotomy. When running a RedHat box, I always feel part of my brain is missing. It's the simple things: like the fact that I had to ssh to my Debian box at home to read the man-page for tcpdump, because the RedHat 8 RPM didn't include the manual. It's also the monumental things: like RPM.

With the RedHat Network, RedHat finally have an update distribution system that is almost, but not quite, as good as Debian's years-old "apt". Of course, you have to pay for it. Debian is a volunteer project, so the people who put the packages together do it for free. RedHat is a commercial organisation, so they need to pay their packagers, and in turn that cost needs to be transferred back to us, the users. It also doesn't help that compared to apt/dselect, RHN is pretty clunky.

So today, on my newly minted copy of RedHat 9, I did what everyone should do when they first install a new operating system: I went to the update site and grabbed all the updated RPMs. Checking the 'rpm' man-page, I discovered that the '--freshen' flag would allow me to feed all the updates into the program and have it only update those packages I already had installed.

Problem number one: some of the packages had been updated twice since the release of RH9, and both updates were in the update directory. Rather than do the intelligent thing and just install the most recent update, RPM complained bitterly that I was doing such a terrible thing to it, spitting out a plume of warnings. Then it attempted to install both versions of each update in order, decided they conflicted, and died.

OK. Go through the directory, delete the dupes, and try again.

This time, it actually started updating the programs, starting with the base of the dependency tree: glibc. Except something in the libc update failed. By the time I looked back at the screen, everything that I tried to run was seg-faulting, including the setup scripts for all the subsequent RPMs. In short, my system was completely hosed.

When Windows 2000 did this sort of thing to me, I cursed it for days. I feel it would be unfair for RedHat to get any less a blast.

So here's to you, RedHat. You suck.

Update: I thought I'd done something wrong the first time. I figured it may have been because I was running rpm from 'sudo' instead of making sure I was logged in directly as root, with root's $PATH and so on. So when the system was reinstalled, I tried again more carefully.

Same result: one of the post-install scripts fails, and all attempts to run programs afterwards result in segfaults. There you have it. Updating RedHat 9's glibc using the RPM from RedHat's own update site hoses the system completely. I think the word I'm looking for here is "contempt".

Update 2: Lest anyone think I'm doing nothing but whining, this is now filed as RedHat Bug 102569.

Update 3: Apparently this is a known bug. It's been known since April. Way to go, RedHat. Even Microsoft pull broken fixpacks after a couple of days.

Yes, I've momentarily jumped on the audio-blogging bandwagon. Fear.

Paul (no obvious surname) found himself with a problem: "I have class Foo, and I want to make sure that it is only ever instantiated by a particular Factory". He solved this problem using a nifty inner-class hack.

Leaving aside the questionable aspects of this "class-and-factory" design, it's an interesting exercise in how sometimes we want to exert too much control over our code. This is how I'd solve the problem, if it were up to me:

class Foo {
    /**
     * This class should NEVER be instantiated by
     * anything except the FooFactory!
     *
     * @author Charles
     */
    Foo() {}
}
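
For completeness, here is what the factory side of that bargain might look like. A hypothetical FooFactory, sitting in the same package as Foo, needs no tricks at all:

class FooFactory {
    public Foo createFoo() {
        // Legal precisely because this class shares Foo's package;
        // callers outside the package get a compile error instead.
        return new Foo();
    }
}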

Anything more complicated than this is obfuscation. The constructor is package-private, and there is an implicit understanding that you don't call any non-public method on a class unless you understand exactly what you're doing. Methods are made non-public because they allow greater access to the object than is considered 'safe' for the rest of the world. You are being given permission to poke at the object's innards because by your position (coding in the same package as it) you are trusted to know what you are doing. And this at the very least means reading the Javadoc.

It's tempting to try to make it impossible for people to write bad code. It's also often a waste of time. It's OK for people to be able to write bad code in situations where they should know better. As such, making the constructor package-private and adding a comment is sufficient. Anyone working in the package should know better with that much signposting. Anything more is obfuscation.

Quick-Links is a way for me (and a few others) to post links to random stuff without the hassle of writing an entire post about them. The longer I've had this weblog, the more it seems to have migrated towards really long posts, at the expense of quick off-the-cuff links. Thanks to Mark Paschal's excellent instructions and a bit of Ruby hacking, I can redress that balance.

The only theme for these links is that they are things that the contributors happened to find interesting enough to point to. As such, and given that they're not going to hold any original content, I'm not going to tempt anyone's wrath by syndicating them on Javablogs :)

Quick-Links has its own RSS 2.0 feed, and is also syndicated on livejournal.

Contributors:

carlfish is me, Charles Miller. Don't ask how the nickname came about.

lonita is Lonita Fraser, a good friend I've known from IRC pretty much forever, and on whom I am counting to lower the total-nerd quotient of the links.

alang is Alan Green, a cow-orker who sincerely wishes more people would code in Python.

In a comment to another blog, somebody who was either Fred Grott or a convincing impersonation thereof (complete with typos) said the following about my recent Marc Fleury-related post:

The only problem with your post is that ..the fishbowl post author admitted in meial to me that he was posting due to emotions not facts..

I have never sent a single email (nor meial) to Fred. The only emails I have ever received from him are those that arrive automatically whenever somebody comments on my weblog. I have not said anything about "posting due to emotions not facts" to anyone about the above post. I was perfectly calm when I posted it, and a week later I stand by what I said. Whatever Fleury's technical achievements, his PR skills need serious work.

If Fred is the author of the above comment, I invite him to post said email (or meial), complete with full headers. If he is not the author of the above comment, I suggest he disown it as soon as possible and email me so that I can update this post to reflect that fact.

My history of public retractions should be enough to show that I am quite willing to admit my mistakes in my own weblog. If I change my mind about something I've posted, dear reader, I promise that you will hear it first in these pages.

Gosling on Java


James Gosling recently weighed into a discussion on Apple's java-dev mailing-list. This is a link to the original emails, and here are a few choice quotes:

On being accused of "not having much to do with Java these days"

Almost everything I write is in Java these days. I mostly work on things other than the compiler or the JDK release.

On being accused of writing Java "so marginally capable developers could get a job":

This is so damned false I don't know where to begin. I designed Java so I could write more code in less time and have it be way more reliable. In the past I've wasted huge numbers of hours chasing down memory smashes and all the other time wasters that are so typical of what happens when writing C code. I wanted to spend time writing code, not debugging. Life is too short for debugging. All of those little "limitations" turn out to be things that make coding faster and debugging vanish.

...

One of the design principles behind Java is that I don't care much about how long it takes to slap together something that kinda works. The real measure is how long it takes to write something solid. Lots have studies have been done on developer productivity, and Java beats C and C++ by a factor of 2.

On using a Mac.

I use the MAC because it's a great platform. One of the nice things about developing in Java on the MAC is that you get to develop on a lovely machine, but you don't cut yourself off from deploying on other platforms. It's a fast and easy platform to develop on. Rock solid. I never reboot my machine... Really! Opening and closing the lid on a Powerbook actually works. The machine is up and running instantly when you open it up. No viruses. Great UI. All the Java tools work here: NetBeans and JEdit are the ones I use most. I tend to think of OSX and [should be "as?" --ed.] Linux with QA and Taste.

On the much-maligned (at least by me) object/primitive distinction.

Depends on your performance goals. Uniform type systems are easy if your performance goals aren't real strict. In the java case, I wanted to be able to compile "a=b+c" into one instruction on almost all architectures with a reasonable compiler. The closest thing I've seen to accomplishing this is "Self" which gets close, except that the compiler is very complex and expensive, and it doesn't get nearly all the cases. I haven't read any of the squeak papers, so I can't comment on it.

  • Are you running Microsoft Windows 2000 or XP?
  • Are you not completely up-to-date with the latest patches?
  • Are you connected to the Internet?

If you just answered 'yes' to all of the above, chances are you're already fucked.

Hani, and in follow-up Toby Hede, have both had a go at the tendency of projects, especially open-source projects, to reinvent the wheel, with the implicit assumption that wheel-reinvention is prima facie a bad thing.

Before you go on, I'd suggest reading Joel Spolsky's characteristically brilliant essay: In Defense of Not-Invented-Here Syndrome:

"Find the dependencies -- and eliminate them." When you're working on a really, really good team with great programmers, everybody else's code, frankly, is bug-infested garbage, and nobody else knows how to ship on time. When you're a cordon bleu chef and you need fresh lavender, you grow it yourself instead of buying it in the farmers' market, because sometimes they don't have fresh lavender or they have old lavender which they pass off as fresh.

Here are some situations in which you will want to reinvent the wheel.

  1. You want to avoid an external dependency. Every dependency you add makes your project just that little bit more complex, and that little bit harder for an end-user to get up and running. There is also an unavoidable mismatch between what you want external code to do and what it actually does that will have to be bridged over (and the bridge maintained across revisions of both codebases). Sometimes, you will decide that the effort to write something yourself is actually less than the effort of tracking and packaging someone else's code.
  2. You want your product to be better than what's available elsewhere. This is what Joel was getting at: sometimes you want to re-invent the wheel because your wheel has to stand out from the competition. Take WebWork for example. Why didn't they just use Struts?
  3. Self-improvement. You learn a great deal starting something from scratch. This is the most common reason for Open-Source wheel-reinvention. If you come in on an established project, you miss some of the most valuable experience. Is it a bad thing that these people aren't contributing to an existing project instead? No. Open Source is about scratching itches, after all; it's not about making the most efficient use of resources.
  4. You are following the Extreme Programming doctrine of You Aren't Gonna Need It. This is similar to the pattern that Toby describes in the above-linked article. Early on, a reused framework is too heavy-weight for what you need to do, and integrating it would slow down your release so you write something yourself instead. Over subsequent releases you find yourself refactoring towards something similar to the framework you originally rejected, but that doesn't make the original decision to avoid the framework a bad idea. After all, meeting those early deadlines is important. Think of it like the Concurrent Garbage Collector in Java. Overall it takes longer, but you consider that a worthwhile trade-off against the big block of delay that the alternative would cause.

Now I'm not saying you should always build your own. Reuse has its place as well. I'm just saying that reuse is not an absolute good. In some circumstances, you're just better off with your own wheels.

Mildly amusing:
A complete stranger accusing you of having “no balls” in a comment to your blog
Amusing:
That the comment was a perfect example of the sort of nonsense your post was being critical of in the first place
Just plain funny:
The next paragraph starts: “Don't want to be rude but...”

Update: In a nice display of synchronicity, I just noticed that Mark Pilgrim's diveintomark now sports the following button on its comment forms:

(Note to self: need comment permalinks)

If I were running the JBoss project, here is how I would have announced my reaction to the announcement of Apache's J2EE project, Geronimo:

JBoss welcomes the competition from Geronimo. We wish the Geronimo team luck: developing a J2EE implementation is a lot of hard work. Today, however, the only option for a stable, production-quality Open-Source enterprise Java server (albeit not yet J2EE certified) is JBoss.

Here is what Marc Fleury actually said, in the jboss-news email that arrived in my inbox yesterday:

First a bit of history. I offered EJBoss when it was 4 month old to Apache. The guys at Jakarta vote OK unanimously and their vote was overridden by Brian Behlendorf. The reason from behlendorf was that they 'were not the dust bin of open source projects'. I heard the Apache crowd got offended for me calling them "a bunch of fat ladies drinking tea" at a later date when they were running around telling us how to run our project. We had reports that this was the non-official reason for this "challenge". Challenge accepted. More seriously as we overtake them in corporate penetration and business model, I guess they are finally looking beyond the HTTPD C codebase and imitation is the sincerest form of flattery.

We are the real thing, all we have so far is talk and announcement, announcements are a dime a dozen. Apache code on this project has yet to be released and then production reached and then maturity bla bla bla.... [then some stuff about JBoss not being involved in the project]

Somebody put a gag on this guy please?

Update (2003-10-10): Amusing semi-related commentary from Nathalie Mason-Fleury.

I am afflicted with occasional bouts of insomnia. They are nothing serious; they just manifest as me finding myself at 1:30 in the morning thinking "Wow. I'm not at all tired, am I?" when I have to be at work the next day. This is one of those nights. If you are a cow-orker and reading this tomorrow morning, you may wish to wait until I have had a few cups of tea before approaching me.

I am not a morning person. I'm not even entirely convinced I'm a day person. I still miss working nights, even if that did entirely put paid to any chance I might have had of developing a social-life at university. I enjoyed being awake at 3am, and really liked having the days free to go out and get things done while the shops were open.

Years ago, when I worked tech-support (my first full-time job), there were three shifts. Two people worked 7am--4pm, one worked 10am--7pm and one worked 11pm--8pm. When I moved from tech-support to programming, it was expected that I would fall into a more traditional 9--6 schedule, but that never really eventuated. Instead I ended up turning up some time between 10am and 11am, and considering my official working day done nine hours after that.

Eventually, the three of us working in the web-hacking department were pointedly asked to come in earlier. We compromised, and set up a rota whereby at least one of us would be in by 9am on any particular day to be available to answer phones and technical questions, but that was our one concession to timeliness. We got our job done, we worked the requisite number of hours. So what.

That sort of thing is less applicable in my current job, where there's much more of a need for me to be there at the same time as everyone else, and everyone else seems to turn up at ungodly hours of the morning. It still has echoes, however, in the way I never quite manage to hit that elusive 9am.

Today I bought an uber-nifty Ericsson T610 phone. Like any certifiable hacker, one of the first things I did was look into the available options for programming it. It supports J2ME MIDP 1.0, which is cool, and it also supports a games-oriented SDK called Mophun. The latter looked like an interesting diversion, so I looked into it...

...only to discover that if you want to write an application and upload it to your own phone, you still have to submit your program to Mophun's certification process.

How completely fucking lame.

To get certified, you have to prove you're "serious about making your applications commercially available" (I'm not, I just want to hack nifty things together and maybe give them away if they're good).

I can see how certification might be necessary to have your application promoted through Mophun's distribution system, but preventing developers from "just hacking" is so incredibly counter-productive, and robs the platform of any chance to be vital and interesting.

There are few things more intimidating to the single, heterosexual male than clothes-shopping. I find that even walking into a clothes shop, with its bright lights and fashions I know nothing about, is a chore that I can successfully put off for anything up to six months, or at least until all my existing wardrobe has faded, been eaten by moths, or fallen apart.

Case in point: I own one sweater. I bought it when I was in Santa Barbara visiting Danna, who, while sadly not filling the role of girlfriend, at least performed the vital function of pointing out things that looked neat, and, when I found a candidate, taking that important step of telling me if I looked stupid wearing it. I've been unable to work up the nerve to buy a sweater since1.

Now one could possibly describe Danna's own fashion sense as 'eclectic', but it's that "Do I Look Stupid?" test that I'm simply unable to perform on my own. As such, I tend to just pick out clothes that are unremarkable, similar to what I've always worn before, and, well... black. I always worry that I basically look like my mother dresses me (I probably do), but that would be unfair to my mother--she has remarkably good fashion sense, and I'd probably be better off if she did.

My only real concession to not wearing black is my penchant for purple shirts: a habit picked up from long association with Lonita and (again) Danna. This isn't really fashionable, but it does significantly increase the (already disturbing) number of people who look at me and immediately assume I'm gay. (Or at least, that I'm one of those unfortunate fashion-deficient gay men who really need a nice boyfriend to tell them what to wear).

Partly, I blame men's magazines. Women have thousands of publications they can get away with buying that spend some of their time examining in great detail what looks good or bad on both men and women. The only equivalent for men are magazines like FHM, which devote 90% of their copy to pictures of women in their underwear, and which I'd thus feel embarrassed buying.

Maybe I'm being a neanderthal here, a historical throwback due to my life as a computer nerd. Maybe the modern-day Metrosexual man scoffs at my inability to work out if that jacket really makes me look like a prat or not. I doubt it, though. After all, the archetypal Metrosexual is David Beckham, and I bet Posh picks out his clothes.

Major shopping centres need to offer a Rent-A-Girlfriend service. They meet you at the door, act enthusiastic, and drag you around the shops for the afternoon (frequently dragging you into women's clothes shops and making you hang around while they "oooh" over tops, just to add that air of authenticity). Nothing sordid would be involved; they would just have to convince me that they cared enough to be giving me honest advice.

Until then, you'll find me here looking at my brand new purple shirt, and almost-but-not-quite black t-shirt.

1 Well, I came that close to purchasing one today, and it was even a colour other than black, and quite (I believe) stylish. But the shop-assistant noticed there was a hole in it, they had no more in my size, and after having taken the plunge on it once, I didn't have the courage remaining to find something else. D'oh.

Name: The Placebo

Context:

Some long-running processing is occurring in your program. You really have no idea how long this event is going to take, but you want to keep the user as happy as possible while it is running.

Forces:

  • Users are happier if they can see that something is happening
  • A progress-meter spinning in its 'indeterminate' state will placate the user for at most fifteen seconds, after which they will begin to mistrust it
  • Users do not expect progress-bars to progress evenly
  • Telling a user exactly what is happening to cause a delay is rarely helpful

Therefore:

Estimate how long your long-running process should take. Add a fudge-factor, just in case. Have a progress-bar that runs more-or-less on that timetable until it reaches around 90%. Unavoidably, if the process takes longer than this, the bar will have to stick at 90%, but by then the user probably won't cancel the action until at least twice the time you initially budgeted has passed.
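
A minimal sketch of the pattern in Java/Swing (the twenty-second estimate, the fudge-factor, and the class name are all invented for illustration):

import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.JFrame;
import javax.swing.JProgressBar;
import javax.swing.Timer;

public class PlaceboBar {
    public static void main(String[] args) {
        JFrame frame = new JFrame("Working...");
        final JProgressBar bar = new JProgressBar(0, 100);
        frame.getContentPane().add(bar);
        frame.pack();
        frame.setVisible(true);

        // Guess twenty seconds for the job, then pad the estimate.
        final long budget = 20000L * 3 / 2;
        final long start = System.currentTimeMillis();

        // Creep along the guessed timetable, but never past 90%. The
        // real work finishes on its own schedule elsewhere, and should
        // dispose of the frame when it's done.
        new Timer(200, new ActionListener() {
            public void actionPerformed(ActionEvent e) {
                long elapsed = System.currentTimeMillis() - start;
                bar.setValue((int) Math.min(elapsed * 100 / budget, 90));
            }
        }).start();
    }
}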

Educate your help-desk as to the real meaning of "Well, it goes OK for a few minutes and then freezes when it's almost finished..." but ensure they don't tell the user what's really going on, on pain of death.

If there are identifiable milestones along the way, you can incorporate these milestones into your placebo to make it look more accurate.

Note: Users are used to progress-bars that accelerate and decelerate seemingly at random. It could be that a progress-bar displaying this behaviour is more likely to be believed than one that progresses smoothly.

Examples of Use:

Internet Explorer applies a variant of this pattern during DNS lookup and initial TCP/IP connection (the progress-bar creeps forward from time to time, even though no progress is actually being made). This is in direct (and I believe very deliberate) contrast to Netscape Navigator, which would spin its progress-bar in indeterminate mode until the connection was established.

Most GUI installers seem to implement this pattern as a matter of course.

In response to my complaining about the server this site was on being overloaded, my provider moved me to a new server. On the whole, this is a good thing. I'll notice a big difference, because it will no longer take fifteen seconds to rebuild an entry. Readers, on the other hand, won't notice any change at all, because the whole site is static pages anyway.

That is, if they can read it....

DNS is basically this big distributed cache, where the cache entries get to determine how long they live. When changing the IP address of a host, you're supposed to progressively lower the TTL of that host's DNS entry before the change-date. That way, when the change occurs, the cached lifespan of the obsolete entry will be really short, and not many people will be inconvenienced.
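
In zone-file terms, the dance looks something like this (the name and addresses are reserved documentation examples, not my real ones):

 ; Before: the record is cacheable for up to a week.
 www   604800   IN   A   192.0.2.1

 ; A week or so ahead of the move, lower the TTL to five minutes.
 www   300      IN   A   192.0.2.1

 ; After the move, publish the new address and restore the long TTL.
 www   604800   IN   A   192.0.2.99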

Unfortunately, I run my own DNS, and I wasn't told what the timetable for the IP address change was until after it happened.

So if you can't read this, it's because your DNS server still has the old address, and may very well hang on to it for a few more days.

Update: I had the TTL for my domain set to a week. Normal service may not resume for a while...