June 2003


Joel Spolsky, on June 20:

Too many software developers just can't bring themselves to implement completely invisible features. They need to show off about what a great feature they just implemented, even at the cost of confusing people. Really great UI design disappears.

I think there's something in that for all of us, and perhaps not just in the field of UI design.

Bill de hÓra on Namespaces in RSS

Such tags are called children for a reason, they belong with their parents. It's this kind of noddy example that perpetuates the myth that namespaces are somehow neccessary for XML in the same way <getStockQuote /> perpetuates the myth that RPC is somehow neccessary for SOAP. They're not.

...

Using namespaces, it seems you can dodge name clashes. I'm saying, when you're ready, you won't have to.

In a programming language, you don't need namespaces when all the libraries you use come from the same vendor. When you're writing a Java application for internal use, it doesn't matter if you ignore the domain-name-based package naming scheme: so long as you never release your code to the public, you'll never see a clash.

The problem that Namespaces were introduced into RSS to solve was this: lots of different people want to add their own custom extensions to RSS. Since RSS is such a small domain, the chances of two people coming up with identical tags with different semantics are actually quite high. You're unlikely to see two completely different definitions of "banking"; you're very likely to see two different <lastUpdated> tags, one taking an RFC-822 date and the other taking the number of seconds since the epoch.

There are two ways to solve this. One is that you have some central authority with whom to register extensions, who ensures that everyone plays nicely with each other. For RSS, that central authority would have to be Dave Winer. This kind of centralised authority didn't sit well with the RSS community, which wanted the ability to go off in different directions without permission.

The other is with namespaces. Namespaces are defined by URI, so their uniqueness is ensured in the same way as Java package names.

So yes, if you have full control over the schema, namespaces are unnecessary. If you have lots of people trying different things without deferring to a central authority, namespaces are necessary to stop them stepping on each other's toes.
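To make that concrete, here's a rough sketch in Java (the namespace URIs, element content and class name are all invented for illustration) of how a namespace-aware parser keeps two identically-named extension elements apart. Each element is identified by the pair (namespace URI, local name), in much the same way a Java class is identified by its package plus its class name:

    import java.io.ByteArrayInputStream;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;

    public class NamespaceClash {
        // Invented extension namespaces: any URI the extension's author controls will do.
        private static final String NS_A = "http://example.com/rss/modules/updates";
        private static final String NS_B = "http://example.org/rss/modules/epoch";

        public static void main(String[] args) throws Exception {
            String item =
                "<item xmlns:a='" + NS_A + "' xmlns:b='" + NS_B + "'>" +
                "<a:lastUpdated>Tue, 10 Jun 2003 09:41:01 GMT</a:lastUpdated>" +
                "<b:lastUpdated>1055237012</b:lastUpdated>" +
                "</item>";

            DocumentBuilderFactory factory = DocumentBuilderFactory.newInstance();
            factory.setNamespaceAware(true); // not the default
            Document doc = factory.newDocumentBuilder()
                    .parse(new ByteArrayInputStream(item.getBytes("UTF-8")));

            // Same local name, no clash: each element is looked up by (URI, local name).
            System.out.println(doc.getElementsByTagNameNS(NS_A, "lastUpdated").item(0).getTextContent());
            System.out.println(doc.getElementsByTagNameNS(NS_B, "lastUpdated").item(0).getTextContent());
        }
    }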

Of course, Dave Winer pretty much screwed the pooch with RSS 2.0 and Namespaces, because he didn't really understand the ramifications. He felt pushed to allow namespaces in RSS, but really didn't get them. As such, he ceded all control over what could, or could not, be in a valid RSS feed: even to the point where, according to the spec, it's perfectly valid to replace an optional element (say, <pubDate>) from the core RSS with a namespaced tag from some other specification (say, <dc:date>).

This pretty much killed RSS as a standard, because almost all of it was optional, and you could add anything you wanted to it, and it would still be perfectly valid RSS. Hence the push to build something else from the ground up.

Mid-2002, Apple release an update to Mail.app that contains a sophisticated and particularly effective (although with one annoying behaviour) spam filter. The filter accurately bins 90% of my spam (in the order of a hundred a day), with only incredibly rare false positives. The Panther version of Mail.app promises to include additional support to integrate with ISPs that run server-side spam-catching tools such as SpamAssassin or Brightmail.

Mid-2003, Bill Gates makes a grand announcement that it's time to deal with spam. Well, soon. When they get around to it. (After all, we haven't got that Trustworthy Computing thing they promised last year yet)

Wow, I'm really quite enjoying being one of those smug Mac users.

Apple have made the WebKit SDK for embedding Safari's rendering engine available on the Apple Developer Connection. (You have to log in, and go to the downloads page. The new version of QTJava for JDK 1.4.1 is there too). According to the blurb on the page:

The Web Kit provides a set of classes to display web content in windows, and implements browser features such as following links when clicked by user, managing a back-forward list, and managing a history of pages recently visited. Documentation is included.

I wrote the world's simplest web browser: a window, a text-box for the URL, and JavaBlogs showing in the display area.

Cocoa hacking reminds me just how much I dislike Swing. It took me longer to post this blog entry than it did to write the above “My First WebKit App”. Like so many things Apple, it Just Works. Except... manual memory management. Eugh. I am so spoilt by garbage-collection.

A Cancer Council of Australia billboard advertisement parodies the Pulp Fiction poster, showing a (bad) Uma Thurman look-alike with an oxygen mask, and the headline “Chronic Disease Never Looked So Glamorous!”

(All Rights Reserved: Image not placed in the Creative Commons for obvious reasons)

I'm sure the Cancer Council of Australia think they're making a really incisive point here. But why, oh why, oh why did they pick Pulp Fiction as the movie to parody? I suppose it was just because it's a well-recognised poster, but it rather kills the message.

Let's see. Aside from smoking, Uma Thurman's Pulp Fiction character is a cocaine addict. She also snorts heroin (thinking it's cocaine), overdoses, and ends up having to have a very large needle plunged into her heart by John Travolta. Most of the other characters in the movie go around gratuitously killing each other, being raped by Deliverance-types in a pawn-shop basement, or both.

But, of course, it's the smoking that's unhealthy.

Going home...

  • 7:25 PM

Coming over the bridge, the car lights blur together...

Mark Pilgrim links to a python-list post: Python Considered Harmful.

Debugging is a mess. The problem is that I tend to "stub" things a lot, or reference functions that have not yet been written (they're in the design doc, okay, so I know what their interfaces will be, I just haven't written them yet!). With a compiled language I run the compiler and linker and it tells me "hey stupid, you're missing something". With Python, I run it, and it tells me "doh, you forgot to create a method for 'checksum_packet'. I run it again, it tells me 'doh, you forgot to create a method for 'register_connection'. I run it again.... ad nauseum.

Reading through the replies, it astonished me that nobody in the supposedly enlightened Python community was suggesting pervasive unit testing as a substitute for compile-time type checking, as Bruce Eckel did so eloquently not long ago. (Mark also linked to the Eckel article directly below the link to the post).

After reading the whole thread, I finally looked up at the date at the top: the post was made in 1999. Ah, of course. Back in 1999, the sort of pervasive unit testing required to properly substitute for strong typing was still one of those arcane secrets slowly leaking out of that strange Smalltalk community. Nowadays, xUnit has been ported to every language under the sun (and even several that don't see the light of day), and most conscientious coders feel guilty if they write anything that doesn't have a good suite of tests. Certainly you wouldn't get away with a mailing-list post like that without ten people quoting Kent Beck at you.
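To sketch what that discipline looks like (a JUnit example with invented names; the principle is the same with Python's unittest, where the test run rather than a compiler is what catches the missing pieces): give every stubbed method at least one test, and a single run of the suite reports everything that still needs writing, instead of one missing method per run.

    import junit.framework.TestCase;

    public class PacketHandlerTest extends TestCase {

        public void testChecksumPacket() {
            // Fails with UnsupportedOperationException until the method is actually written.
            assertEquals(0x1f, new PacketHandler().checksumPacket(new byte[] { 0x1f }));
        }

        public void testRegisterConnection() {
            assertTrue(new PacketHandler().registerConnection("10.0.0.1"));
        }
    }

    // The stubs compile happily; it's the test run that lists, in one pass,
    // everything that still needs to be implemented.
    class PacketHandler {
        int checksumPacket(byte[] packet) {
            throw new UnsupportedOperationException("not written yet");
        }

        boolean registerConnection(String host) {
            throw new UnsupportedOperationException("not written yet");
        }
    }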

Then, of course, I realised just how recent 1999 was. I started thinking “Do things change that fast?” Then I realised, no, they don't. Things still change slowly, but if you only see the back end of the exponential adoption curve, it looks a lot faster.

I've been withholding judgement for a while, but ultimately, I think Google's ranking backlash against weblogs is a mistake.

In January 2002, I started a Radio Weblog, which quickly made me the number one “Charles Miller” on Google. No, this isn't going to be one of those whiny “My ranking has dropped!” rants. I don't place any personal value in that ranking (although I find it amusing); I'm just using it as a baseline.

In October 2002, I moved my blogging activity from Radio to The Fishbowl, in which this entry is being written. At the time, Google was very responsive: as people updated their links to point to my new site, Google realised that I had moved, and within a few months my new site had supplanted the old one on the results page.

In their re-indexing last month, Google once again moved The Fishbowl back down below my old Radio blog. Then, in the most recent re-index, the top two ranked search results for “Charles Miller” are both my old Radio weblog. The third is my LiveJournal, on which I mirror my less nerdy posts, for my less nerdy friends. The Fishbowl itself has dropped to number seven.

By all possible metrics of page importance, this ranking order is wrong. The Fishbowl is updated more frequently, linked to more often, and most publicly identified with me. The only possible explanation for my current site being ranked so much lower than my subsidiary sites is that its ranking has been artificially marked down for being an active weblog. And given that the sites that have replaced it are both weblogs, it looks like:

  1. The degree of rank-poisoning depends on some measure of how ‘alive’ the weblog is, probably based on who is linking to you, and...
  2. The amount by which you are dropped down the rankings for being a weblog leads to ‘wrong’ results, in that obsolete information is preferred to live information.

I think this is Google's first real mistake. It's not a big one, because it only really troubles webloggers, and people interested in finding ideas within the weblog community. But it's still a mistake, and it reflects a shift in the way Google works, from trying to work with the web, to trying to fight against it. They are responding to a particular criticism by deliberately returning results that are demonstrably skewed towards stale information.

Once again, I must note that I don't care what my absolute ranking is in Google; I'm reacting to the relative rankings of my three pages, where the page that Google used to correctly recognise as my primary site has been dropped below my two lesser sites: one that is dead and obsolete, and the other that is almost never referred to outside the LiveJournal microcosm.

I haven't really noticed any great improvement in the accuracy of general Google queries since they started pushing weblogs down the page. I have, however, noticed that it makes it a lot harder for me to find information I know is there, but that originated on a weblog. This, of course, is the rub.

Weblogs are often unfairly tarred as lacking original content. This is an exaggeration. While sites like Daypop show that there is a definite herd mentality to linking, there is also a lot of original content being put up on a daily basis.

The programming community is a good example of this. As first Open-Source coders, then programming luminaries and even corporate hackers move to weblogs as a primary means of communicating ideas, marking weblogs down in search indexes seems almost comical. Why should Martin Fowler's writing be worth less if it happens to be posted on his blog?

When it was formulated, PageRank was the best way to use the web's linking patterns to answer the question: “what is the definitive resource for this web search?” This was revolutionary: Google made use of the shape of the web to return the best results. The linking patterns of weblogs did not subvert or corrupt PageRank, as they have been accused of, they simply altered the web in such a way that PageRank became less relevant. PageRank no longer fit the web as well, and it stopped answering that important question. Rather than taking the negative approach, working against the new shape of the web, patching the problem by marking down a certain class of page, Google needs to find a new positive approach, to adapt to the new shape of the WWW and work with that to once again find the answers.

From Ward's Wiki:

Q: Why can't we use NickNames instead?

A: In general, it is observed that people who use online nicknames care less about what they write. The discussion is usually taken more seriously when people do not use NickNames, but use their real name

As a courtesy, I'd like to ask anyone commenting on this site to leave their real name with the comment. That is, unless you're someone I've known online for so long under an alias that I wouldn't recognise your real name, or if you honestly (and publicly) identify with your adopted name more than your given name.

I reserve the right to treat anonymous comments with contempt. By its nature, a weblog is a form of personal conversation: I can't help but disclose my own identity with every word I write, so I ask that you have the courage of your convictions and own up to your words as well.

Thank-you very much. Have a nice day.

(Further discussion almost a year later: On Comments)

Penetration Testing is a security practice during which some trusted party attempts to detect and exploit weaknesses in a system's security. It is possibly one of the more fun aspects of security work, as it is the closest a legitimate ‘white-hat’ hacker can get to the sort of fun the black-hats get up to. (With the additional benefit that so long as you don't do anything stupid like bringing down a production system, the worst thing that'll happen if you get caught is having to write “intrusion detection systems seemed adequate” in the final report).

Penetration tests are also very easy to get completely wrong.

The simplest form of penetration test (and the first step in any) is the vanilla vulnerability scan. Using a tool like Nessus, you can automatically scan a host for the presence of thousands of different known vulnerabilities, and get a nice formatted report of the results. Vulnerability scanners are thorough, and very effective. It is, however, a good idea to have the results evaluated by someone who is well-versed in security practices, to assess the relative risks of each discovered flaw in the context of the network, and beyond the “High, Medium or Low” rankings supplied by the tool1.

Vulnerability scanning finds networked systems that are mis-configured or insufficiently patched. Beyond vulnerability scanning, there are degrees of active exploitation of vulnerabilities that range from a simple extension of the network-only attacks, to a full-blown Tiger Team authorised to attempt anything from social engineering to an attempted physical break-in.

The extent to which you carry out penetration-testing depends entirely on the risk profile of the assets you are attempting to protect. If you only feel it necessary to determine your protection from random Internet crackers and Nimda/Sapphire-style worms, the automated tests should be sufficient (worms in particular rarely exploit vulnerabilities less than six months old, which can be picked up by any scanner). If your concerns are industrial espionage, vengeful ex-employees or curious government agencies, you may go significantly further.

The biggest mistake made with penetration tests, however, is to misinterpret the results.

Security is a continuum, not an absolute. For every asset you want to protect, you have to determine how much it is worth protecting, and who you would be protecting it against. For example, the public webservers of the Coca-Cola Corporation would be a prime target for wannabe website defacers, anti-capitalist protesters and so on: people with few resources to mount a sophisticated attack. And even if the site were defaced, it would only really cost the company a few days' embarrassment and some time rebuilding the machine.

On the other hand, the “secret formula” for Coke itself2 would be a different proposition: the potential loss would be enormous, and one would imagine that if anyone was after it, it would be a significant act of industrial espionage, with significantly more resources behind it, and thus a greater range of things that the company would need to protect itself against.

So anyway, when commissioning penetration tests, you should have a very clear understanding of what particular kind of threat you are looking to protect your network from, based on the risk profile of the assets subject to the test. If the test is not matched to the level of defence, then it is worthless. Of course, if your network is only designed to be proof against disinterested crackers, a military-grade tiger team will have no trouble breaking in. That proves nothing. On the other hand, neither does knowing that the secret formula for Coke can't be recovered by the equivalent of a random teenager looking for DDOS drones.

Secondly, if your penetration test is matched to the level of protection you expect from your network, then the correct result of the test is that no vulnerability should be found. This is the only acceptable result from a properly calibrated test.

Thirdly, and this is the most important point: if the penetration test finds some vulnerability in your infrastructure, the correct response is not to patch just that vulnerability and then count yourself lucky that you checked for it. While patching the discovered flaw is the first thing you should do, it is by no means the end.

A successful penetration indicates something more than a particular security flaw. It indicates some systemic flaw in network security policies or practices. The network was designed to be proof against a certain class of attacks, and it was found not to be. Why wasn't the installed software up to date against security patches? Why weren't the operators sufficiently educated to spot the social engineering attack? Why didn't anybody notice when the server started behaving out of the ordinary?

A decent systems administrator can secure a server (shutting down unnecessary services, updating necessary ones to the latest security patches, setting up suitable firewall rules) just as quickly and effectively as a vulnerability scanner can check it for known holes. A properly secured server is also far more likely to be safe from future, unknown attacks than one that is reactively patched when problems are found. The question, therefore, is why wasn't this done?

A penetration test is useless in the absence of a well-implemented security policy. For every vulnerability found and fixed, you must assume ten more will be uncovered tomorrow, and will remain open until next time you perform the tests. The job of penetration testing is as an auditing tool: a validation that existing practices and procedures are sufficient to protect the network. If the penetration is successful, it is to those practices and procedures that management should return, to examine how they could be better implemented, or more clearly communicated to employees. Not to fix the problems with the last test, but to ensure that the next test comes up empty.

1 The author is a network security consultant, so there may be some self-interest in that statement.
2 I actually have no idea if the formula for Coke is a secret or not. It's just an example.

In my previous post, I accused Rickard of making comparisons between JBoss and his own framework that he did not make. I apologise for that misinterpretation. While I maintain that the form of his current criticism of JBoss is unhelpful, I guess I was right to say that the moment someone uses the word FUD, you should ignore them. :)

This was originally going to be a comment to Rickard's latest article about JBoss's AOP implementation. Unfortunately, Freeroller died just as I was trying to post it, so I'll just reproduce the comment here, for posterity.

The classic definition of FUD, as pioneered by IBM back when they owned the world, was this:

Your competitor puts out a product. In order to stifle its adoption, you release a series of vapourware announcements that describe how your product, which will be available 'real soon now', has more features, performs better, and if people will just wait until it's released, they'll be really glad they didn't lock themselves in to the competitor's platform.

When your product is essentially vapour, you have all the advantages. Maybe, compared to your AOP framework, JBoss sucks. But to the world at large, your AOP framework does not exist. Any judgements on the relative merits must wait until both platforms are available to compete on level terms.

JBoss, on the other hand, have the significant disadvantage that they are doing all their development, and thus their learning, in the open. This is a disaster for people who are heavily into performance, because by necessity, optimisation doesn't occur until after the thing is working, and as soon as the thing is working, the developers want people to start hacking on it.

It's one of the problems with the Bazaar model. Software is released deliberately before it is "ready", and often that means that programs can be tarred with the bad performance brush long after they've optimised away the problems that made them notorious.

(I really hesitated to use the expression “FUD”, because it has been so abused by the Slashdork crowd that it has come to mean “anyone saying anything we disagree with”. The term has been so damaged in recent years that it's almost become one of those ‘Godwin's Law’-style expressions that will make me immediately discount someone's opinion. I promise not to use it again.)

Quick Link

  • 7:15 PM

I just closed one bug... and then as a result of closing it, opened six new bugs. (Well, five real bugs, and one umbrella-bug to link them all to). Ever feel like you're digging a hole, and eventually it'll be too deep to climb out of?

Hani: the BileBlog

It's a blog, it doesn't need to be well written, logical, or even coherent. I'm going to bitch and moan about everyone and everything that annoys me (and it's a huge long list). I just hope I don't get bored within a few days of ranting and raving.

Mike Cannon-Brookes

Let me start by saying I know Hani better than most people reading his blog. To me, I know him as a very smart developer who has an amazing ability to ‘not take crap’ from the tools he uses.

A colleague of mine reviews DVDs. In the course of random office conversation, he mentioned that it's much harder to write a positive review than a negative one. When you don't like something, it's very easy to write paragraph after paragraph on what's wrong with it. It's much harder to write praise that doesn't come off sounding fake.

Negativity has all the advantages. It's easier to write, and it's often a lot more entertaining to read. There's a reason why “X Sucks” posts quickly rise to the top of the JavaBlogs rankings. If your aim is to put as little effort into your writing as possible, as Hani has clearly stated, producing unremittingly negative rants is the easiest way to go about it. Constructive criticism takes more effort. Coming up with your own solution or work-around is even harder.

This is why the BileBlog quickly bored me. Most of the criticisms are, indeed, accurate. Some are trivial, some are not. But each individual criticism is only dealt with in the most superficial way possible. Root causes aren't investigated. Improvements aren't suggested beyond “don't do things that suck”.

Yes, there's an almost inexhaustible amount of crap out there. Sturgeon's Law is proven again and again. As such, there's an endless supply of topics to rant about. Ultimately, though, it's dull. Which is fair enough. The blog is behaving entirely as advertised, and one can't fault it for that.

What made the Antipatterns book good wasn't the fact that it was full of stupid things that software projects do; it was the strict pattern form that required each Antipattern be matched with a clear description of why it happens, and a refactored solution. If the book had just been a list of stupid things, it would have been amusing, but not useful. As an examination of why we do stupid things, how to avoid them, and how to rescue ourselves when we find out we've gone down the wrong path, the book actually became a real resource.

Jeremy Zawodny: Lame Programmers and Credit-Card Numbers:

Some programmers are so lame that they haven't figured out how to strip spaces and dashes from input. Really.

I once had the "pleasure" of writing an interface to a credit-card gateway. In order to be allowed to hook up to the live system, the interface had to pass a series of tests.

One of the tests: the interface was _required_ to reject card numbers containing spaces, hyphens, indeed anything but numbers.

I phoned support to question this requirement, and was flatly told that there would be no negotiation entered into. Their reasoning was that they had some kind of fiduciary responsibility to make sure that whatever got typed in as the card number got sent verbatim to the bank. So I shrugged, and did what I was told.

This is stupid, of course. Most people read and write credit-card numbers as, say, four groups of four digits (or whatever the groupings are for Amex, JCB or Diners, I don't have examples handy). Forcing people to mash numbers together without grouping them makes it more likely that they'll enter the wrong number, not less.
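For what it's worth, accepting the grouped form and normalising it before validation is about one line of work. A minimal sketch (the class and method names are mine, not any gateway's):

    public class CardNumbers {
        /** Accept whatever grouping the user typed; strip spaces and hyphens before validating. */
        public static String normalise(String input) {
            // Only remove grouping characters, so a genuine typo still fails validation later.
            return input.replaceAll("[\\s-]", "");
        }

        public static void main(String[] args) {
            System.out.println(normalise("4111 1111-1111 1111")); // prints 4111111111111111
        }
    }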

One day last weekend, I had nothing to do. I was visiting my old Radio weblog, and started wondering how many others had jumped ship like myself. 98 lines of Ruby later, I had a set of totally useless statistics...

Read the rest of this entry…

I wrote this a few years ago, but I thought I'd give it an airing here, especially since Jini has been getting a lot of publicity at the last JavaOne. When Jini first came out, there was a lot of promotional material of the form “John goes into his hotel room, and all his devices configure themselves for the location!” and so on. This was my take on that material...

Read the rest of this entry…

From whytheluckystiff on Advogato, we find The Little Coder's Predicament:

In the 1980s, you could look up from your Commodore 64, hours after purchasing it, with a glossy feeling of empowerment, achieved by the pattern of notes spewing from the speaker grille in an endless loop. You were part of the movement to help machines sing! You were a programmer! The Atari 800 people had BASIC. They know what I'm talking about. And the TI-994A guys don't need to say a word, because the TI could say it for them!

This does make me wonder.

When I got my first computer (a Commodore 64), home computers essentially ran in one of two modes. In one mode, they were games machines, and occasionally ran really primitive productivity applications1. In the other mode, they accepted BASIC programs. Because games were quite expensive, and it was hard to convince our parents that we really needed them, a lot of us discovered programming in those gaps where we wanted a break from playing Attack of the Mutant Camels.

The languages that an interested soul could download and learn today are so much more advanced than what we had back then. A child of the noughties could download Python, Java, Squeak or Ruby for any platform they desire, and with their ubiquitous Internet connections, could write much more interesting first programs than the one my brother wrote back in 1984 to calculate his pocket money. Now, though, programming is no longer a ubiquitous part of the computing environment. It isn't the default mode the computer starts up in any more. It's an option you must seek out.

Maybe this is just a sign my generation has finally left ‘youth’. “Those kids of today just don't understand...” and all that. I'm sure the generation before me bemoaned the fact that I didn't have to solder my computer together from parts, or write my own bootstrap code before it did anything.

After all, from my own point of view, while programming was always around in my childhood, almost all of it was BASIC (which is often considered a disease that can't be recovered from2), and most of it was done by my brother. He programmed, I looked over his shoulder until he got annoyed with me. I much preferred playing games. I didn't write a serious line of code on my own until I was nineteen; and that was in Perl, on the Linux system I had installed because I'd discovered the Internet and wanted to know more about that Unix thing.

And now, a few years later, I'm unmistakably a computer nerd. (And my brother, who did all the coding, is now a journalist, a playwright, and a hell of a lot cooler than me.)

So, perhaps the Little Coder's Predicament isn't as bad as an old-school hacker might think3.

1 Some people had those boring IBM PC things that ran spreadsheets but didn't have a joystick port. We pitied them until about 1990, when they started getting all the good games, and our last hope (the Amiga) was falling behind. My household's first PC arrived solely because my brother wanted to play Ultima Underworld.
2 Edsger Dijkstra: “It is practically impossible to teach good programming style to students that have had prior exposure to BASIC: as potential programmers they are mentally mutilated beyond hope of regeneration.”
3 On the other hand, I may be being a little presumptuous. Maybe old-school hackers point at me and say “Oh God. This guy is the future of programming? Barf.”

Found this on aussielj, taken from The Age newspaper.

Fish porn casts sexy lure
June 13 2003
By Emma Pearson
London

Fish can be turned on by an aquatic equivalent of pornography, according to research revealed yesterday.

Swiss scientists have discovered that male sticklebacks ejaculate more sperm if first stimulated by a "soft porn" film featuring "virtual" flirting fish.

more...

It's pretty much impossible to walk from the office to the shops where I buy lunch without going past, and being accosted by, at least one person soliciting for a charity. I find this sort of thing really intrusive. I understand that it's probably a very effective way to get people to sign up (they generally ask for ongoing donations, they're not just holding up a bucket), but it's still an invasion of my personal space, and a disruption of my train of thought.

What's worst is when I get accosted by one of the charities I am already a regular donor to. It makes me want to ask them to take my name off their list so I can give my money to someone less annoying.

I would love to, next time I am cornered, just say “Why? The world's overpopulated as it is”, or “Why should I care about Pandas? They can't even screw to save their own species.”

I wouldn't, of course. But the thought is enough to get me to my lunch without growling at some poor volunteer who is doing a really worthy, but incredibly annoying job.

mpt linked to Dave Nichols' article: I'd like to complain about this software, which contains the following wisdom about accepting bug reports:

The net effect of the lack of easy feedback channels is that the average user feels a sense of frustration and powerlessness. They get really irritated by their software, and no-one is listening. At least when you enter a Bugzilla bug you feel if(sic) you have done something constructive. Maybe, just maybe, someone sometime in the future will experience a better interaction thanks to your report.

This rings true with me. I like having the option of giving a software producer my opinion on what I want their software to do, and how. A while back, I reported a bug to Apple, asking for a way to synchronise Safari bookmarks over iSync. In the last Apple update, that feature was included. That made me feel pretty good about having submitted the bug, even if my submission had absolutely no impact on the schedule whatsoever.

On the other hand, looking at Bugzilla or any public bug-reporting forum for a popular product (my favourite used to be the bug reporting site for IBM's VisualAge for Java) reveals a downside to having a public bug-reporting mechanism:

  • The priorities of the people who make the software (let's call them developers) will never match the priorities of any individual bug reporter (users)
  • With sufficient bug reporters, certain bugs that are not on the developers' radar, for various reasons, will be nominated by a significant mass of users
  • Bugs that are on both the users' and developers' radar will be fixed promptly, and have little attention paid to them
  • Thus, certain low-priority bugs will end up being the ones that attract the most user comments, and the most user votes

This leads to a dilemma. If the software writers do not have the courage of their convictions, they will waste time fixing bugs that should be low priority, but that attract the attention of a vocal minority (Linux or Mac users, for example1). If the writers stick to their guns, they will be feeding a public resentment among users that will spill over into newsgroups, user groups, Slashdot, you name it. The “This is the most voted-on, commented-on bug, why isn't it fixed!” syndrome comes into play.

So, with that rather lengthy preamble out of the way, here is my list of prerequisites for a public bug-reporting system that doesn't suck as much.

  1. Allow any user of the product to submit a bug.
  2. Every bug is given a priority by the people actually running the project. This priority is communicated back to the user as soon as possible, so they know they have at least been heard. Feel free to wrap these priorities in legal weasel-words. If it is an open-source project, mention that (aside from the last case) the user may implement it themselves instead. It is important to be honest, and prompt. People like to know that their bug-report has at least been read and evaluated by someone.
    • We consider this a high priority and will attempt to implement it in the very near future.
    • We consider this a high priority, but due to the size of the task, it may have to wait a few releases.
    • We do not consider this a high priority because [foo]. We want to implement it, but don't hold your breath.
    • We do not wish to implement this, ever, because [foo]. We apologise for not meeting your particular needs, and suggest you [use another product that more suits your need / fork the codebase].
  3. Do not allow public comments or votes on issues. If you can trust your developers to stay on topic, then technical comments (i.e. not about the prioritisation of bugs) are still a good idea, but discussions on the merit of particular issues are pretty much worthless.
  4. Allow comments on the merits of an issue, but swallow them. If the commenter is new to the issue, treat it like a new bug report, and give them the same feedback. If it's a repeat, inform the user that nothing has changed since they last showed interest.
  5. Allow public searching of the issue database, and the current status of each issue. This at least allows people to point to known problems.
  6. Inform bug reporters when their bug is fixed. It's one of those personal touches that makes people feel happy for having taken part.

The idea here is to give users empowerment equal to their station. I know this could be a controversial point, but ultimately the development of an application is the responsibility of the people who are actually developing it. This is true whether it is closed- or open-source. If an application doesn't listen to the needs of its users, that application will fail. What the needs of the users are, however, is up to the developers to determine from all the evidence, and a bug-tracking database is a very bad measure of what is important across the whole range of users.

1 The author of this article is typing it on his kick-ass flat-screen iMac, and will post it through the Linux box that is providing his net connection, email, and so on. So bugger off.

I can't remember how old I was at the time. Twelve or thirteen, probably. Mum, Nick and I were living in our nice little house in Wembley Downs, Western Australia. It must have been some time around then because this was before we had cats.

For about a week we had been noticing a pretty bad smell in the kitchen. A sort of rotting-animal smell. As we leaned over to look behind the oven, the smell got stronger.

The house had a pretty regular history of resident rats and mice. Never an infestation, just the occasional scuttling sounds in the ceiling that led my mother to put poison in the roof. The poison was of the sort that made its victim die of thirst, so we could usually tell when one had taken the bait: it would end up floating in the swimming pool. After a while, the poison stopped catching anything. My brother and I surmised that natural selection had run its course, and we had unwittingly bred an uber-rat that would eventually take over the world.

My mother was pretty sure there was a rat hole behind the oven, and pretty sure that a dead rat was down there doing the natural, but unfortunately rather smelly business of decaying. Moving the oven to clean up the corpse would be an expensive affair involving the Gas company, and a lot of hassle. There were a few weeks of procrastination during which the smell got worse, and Mum decided that she would have it all taken care of while Nick and I were over in Sydney visiting our father.

As such, we weren't around to witness the truth. You see, in order to lean over to look behind the oven, one was inevitably leaning over the toaster.

A toaster containing a dead rat.

A toaster that we had continued to use regularly for at least a month of that rat's residence.

For the next year, my mother made toast in the grill.

Whenever I visit the Atlassian site, and see their slogan: “Legendary Service”, my brain automatically interprets ‘legendary’ as: “Lots of people believe it exists, but there's no evidence.”

(N.B. In my admittedly limited dealings with Atlassian I've found the service is, in fact, pretty good.)

This one hit me at work. I was modifying the facade of the subsystem I'm working on. I needed to go from: public RegistrationId register(User user, String param1, String param2) to public RegistrationId register(User user, String param1, Long param2). I wanted to do this without breaking anyone else's code, so I deprecated the first method, wrote the second, and checked it in.

I figured that was a pretty safe bit of interface evolution. I was wrong. Half an hour later, I was informed I'd broken the build.

The problem was: ‘null’ was a valid value for param2. Because null has an indeterminate type, the compiler couldn't tell which of the two selectors was being referred to by the calling code. New code could get around this by casting null to the appropriate type: (foo.register(myUser, "blah", (Long) null);), but existing code didn't have that cast.
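Here's a stripped-down reconstruction of the gotcha (the types and names below are stand-ins for the real facade, not the actual code). The file deliberately fails to compile: the marked call is exactly the kind of pre-existing code that broke the build.

    class User {}
    class RegistrationId {}

    class Facade {
        /** @deprecated use the Long variant instead. */
        public RegistrationId register(User user, String param1, String param2) {
            return new RegistrationId();
        }

        public RegistrationId register(User user, String param1, Long param2) {
            return new RegistrationId();
        }
    }

    class ExistingCaller {
        RegistrationId call(Facade facade, User user) {
            // Compiled fine against the old facade; now: "reference to register is ambiguous".
            return facade.register(user, "blah", null);
            // New code can pick an overload explicitly:
            // return facade.register(user, "blah", (Long) null);
        }
    }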

Of course, if I'd been slightly less lazy in checking in the new interface, or if the system I was working on wasn't so freaking huge, I'd have caught it myself. C'est la vie. Still, it's something to look out for.

Update: Yes, I know this is my fault. There are extenuating circumstances, but to list them would involve going into more detail about what I'm doing at work than I am comfortable with. I just brought it up because it's an interesting gotcha for people dealing with (statically typed) published interfaces.

Splashscreen n. An excuse for not optimising your application's start-up time.

I have a rather painful history of having to throw out toasters for various reasons I'd rather not go into at breakfast. Yesterday, I finally overcame my toaster phobia and bought a new one. Lessons learned:

  • “Spreads easily, tastes like butter”: one of these two will be a lie.
  • Slightly burnt toast is sufficient to set off the smoke alarm in my new apartment.
  • This is an improvement over my last apartment, where the smoke alarm was hanging by a single wire from the ceiling, and wouldn't be set off by the whole building burning down.
  • Vegemite on toast is still the most Aussie breakfast there is.

#ifndef STD_SKINNING_RANT
#define STD_SKINNING_RANT

I consider skinnability to be a good reason to not use a program. Skinnability almost always means “complete lack of standard controls, and useability that has been viciously compromised just so that some 13 year old boy can more easily graft on a bunch of stuff he scanned from an H.R. Giger calendar.”

(see also)

#endif /* STD_SKINNING_RANT */

Weird-o IM.

  • 4:14 PM

I've never heard of this person before in my life, but he/she/it IM'd me out of the blue. Transcript edited to elide irrelevant meandering:

Here til Sunrise
there's a website and all it says is "IM carlfishy."
Carlfishy
Where is this website?
Here til Sunrise
not sure, i don't remember, i just remember the phrase
Carlfishy
How utterly bizarre. Was it this one? http://www.pastiche.org/wiki/CharlesMiller
Here til Sunrise
No, definitely not. it was just a plain white background with bold pink writing. "IM carlfishy"

So, inquiring minds want to know. Does this mysterious page exist, or was some really bored individual looking for an excuse to strike up conversation with a random stranger? Google is no help, as the mysterious caller said that the direction to IM me was an image, rather than text.

That Internet thing sure is weird.

From this otherwise interesting CNET article about the Sun v JBoss thing comes the following choice quote:

Enforcing J2EE compliance is important, because IT buyers care about being able to move Java applications to different systems, said Ted Schadler, an analyst at Forrester Research. ...

False. Complete and utter bullshit. The overwhelming majority of J2EE development is being done in bespoke systems, where the deployment platform is decided a long time before development even begins. Cross-deployment is never an issue. Cross-compatibility of developer skills is important, so you have a bigger pool of development talent to hire from, but developers are far easier to adapt to incompatibilities than software is.

Of course, Schadler backs up quite a way in the next sentence:

... But compliance is generally seen more as a buyer's "check-list item" as opposed to a technological necessity, he said.

"I think the portability question is more important on paper than it is in reality," Schadler said. But "the brand is worth something. If any Tom, Dick or Harry can say that they are J2EE-compliant, that's a problem."

The pundit is having a bet each way. The first line is the party line, regurgitated from Sun's press release (this is what ‘analysts’ from places like Gartner and Forrester do: consume press releases and condense them into research papers). Then come two sentences of back-pedalling, and then a sentence of back-pedalling from the original back-pedalling. An analytical double-backwards somersault, in the pike position.

The result: a confused, meaningless babble posing as informed commentary.

My take on all this? The J2EE brand is meaningless. Compliance has never been a major reason for choosing or rejecting an application server (trust me on this, I've worked with Websphere since v3.0). Sun have realised that the application server brands (Websphere, Weblogic, Oracle, JBoss, Orion) are all more significant than the J2EE brand itself, and they're desperately fighting for the mark's relevance.

I downloaded and installed Mozilla Firebird, the new direction for Mozilla browsing. Here were my impressions:

  1. Looks nice
  2. Seems quick
  3. Renders well
  4. Page-up and page-down often refuse to work
  5. Application freezes for up to a minute, randomly
  6. Dialogs (such as HTTP Basic Auth) can refuse to close, forcing you to quit the app

I'm suffering from horrendous Mozilla beta-fatigue. I've been supporting the browser, using it regularly since the single-digit milestone releases (Well, M9 anyway). It had a blissful year or two of useability and stability, and then... another radical change of direction lands us in beta-country again. I can't take it any more. Sure, I use the Safari beta on my iMac, but Safari's beta-ness expresses itself in missing functionality (most of which I would never use anyway), and rendering quirks. Not annoying freezes and dialog boxes that won't close.

I understand that the move away from the application suite was necessary, but this time I'm not playing. Wake me up when it's 1.0; I'll be sticking with the old stable branch, or browsing on my Mac, until then.