March 2004


30
Mar

It took me a little too long to embrace the Velocity web templating language. I place all the blame on "You Make the Decision", a piece of blatant propaganda that has since been tidied up, but which, when I first read it, was a masterful exercise in comparing blatantly "worst-practices" JSP with carefully crafted Velocity replacements.

I finally fell head-first into a Velocity-based project in December, though, and haven't looked back since. For those who aren't Velocity-aware, it takes the refreshing approach amongst web templating languages of throwing away angle-brackets entirely, which leaves you with surprisingly readable templates as a result. I've thoroughly enjoyed using Velocity, and wouldn't hesitate to recommend it as a vastly less annoying alternative to JSP.

There are, however, three gotchas I discovered while working on Confluence.

  1. #set ($foo = $bar) does not always assign the value of $bar to the $foo variable. If $bar is null, no assignment will occur, Velocity will log a warning, and $foo will continue to have its original value.
  2. #parse("/includes/header.vm") will include the contents of header.vm in the current page with no apparent change of scope, but any macros defined in header.vm will not be available in the parent page due to Velocity's order of parsing and execution.
  3. If #foo is a macro, then you can include a literal '#foo' in your page by escaping it thus: '\#foo'. However, if #foo is not a macro, then the escape is not un-escaped, and '\#foo' remains in its backslashed form after the page is processed. For bonus points, spot the interesting bug when you have global macro definitions turned on, and the page defining the '#foo' macro may or may not have been loaded before the page containing 'http://www.example.com/bar#foo'.
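
Gotchas one and three are easy to reproduce in a few lines of template. (A sketch: the #nosuch macro name is invented for illustration.)

```velocity
## Gotcha 1: assignment from a null reference silently does nothing.
#set ($foo = "original")
#set ($foo = $bar)   ## if $bar is null, $foo is still "original"

## Gotcha 3: escaping is only honoured for macros Velocity knows about.
#macro (foo)I am a macro#end
\#foo      ## renders as '#foo', because the macro is defined
\#nosuch   ## renders as '\#nosuch', backslash and all
```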

Now to be fair, these are minor niggles, and two of the three are very clearly pointed out in the Velocity documentation as places in which the language does not behave the way you'd expect it to. Given that you've been warned, any mistakes you make as a result are your fault, right?

Well, no. I tend to agree with Matz that I'm ultimately happier with a language that is... simpatico. If you have to write a special note in your documentation saying "This doesn't do what you think it would do", it's probably a better use of time to change the code so it instead does what it obviously should have been doing all along.

The other day, as Mike was adding yet another obscure chord to his IDEA keymap to squeeze a few more seconds of freedom from repetition (I think he was mapping Ctrl-Meta-B to "stop the debugger, recompile everything, make a cup of tea and solve world hunger"), it occurred to us that the day is coming when the keyboard will no longer be necessary. What we're really going to need for our programming is one of these:

A Mortal Kombat-style arcade controller.

Master the combo moves. Write the perfect program. Prove your Code-fu is superior. Flawless Victory!

D L LP: Extract Method. L L L B+HK: Surround with Try/Catch. D R HP HP HP: Iterate Over Collection.

What is a Robot?

  • 10:29 AM

I was thinking this morning of an application that, amongst other things, would have to visit and parse web pages from links submitted by (or indirectly referenced by) the general public. Yes, I know, not a terribly original idea. This led me to wonder if I'd annoy anyone in the process of sending the program out to visit their sites.

There's this nifty thing called the Robot Exclusion Protocol that allows site administrators and authors to tell web robots not to visit their pages, or if they do visit, not to index them. What the standards seem to be missing is a clear definition of a robot. They were written very much with search-engine spiders in mind, and the existence of automated agents that perform some other function seems to have been forgotten. Reading the various documents, you know for sure that GoogleBot is a robot, and you know for sure that you, sitting in front of Safari and clicking links, are definitely not. Everything in between seems to be a grey area.

Or maybe I'm just thinking too hard, and making a problem where none exists. Is the general consensus that "Robot" as defined by the exclusion protocol means "spider", and not "Automated agent that doesn't spider"?
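
For reference, the mechanics of the protocol are simple: a robots.txt file at the root of the site (plus an optional robots meta tag for per-page indexing rules). It's exactly the "User-agent" part that begs the question of what counts as a robot:

```
# robots.txt at http://www.example.com/robots.txt
User-agent: *          # applies to all robots, whatever a robot is
Disallow: /private/    # don't visit anything under /private/

User-agent: GoogleBot  # rules for one specific, unambiguous robot
Disallow: /
```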

At work, I normally set up my Powerbook next to my workstation: I keep all my mail and stuff on the Powerbook, and use my Linux workstation for coding. Given that I've got a dual-monitor setup at home, I suppose it's quite natural that occasionally I will try to move the mouse pointer from one monitor to the other.

You know you've completely lost it, though, when after failing to get the pointer to cross between computers the first time, you slam the pointer harder against the side of the screen, with the unconscious belief that if you just push hard enough, it'll make it across the gap.

Another thing I used to have time for when I was at University was getting into long, involved arguments. Anyone who had the misfortune of sharing one of the SorceryNet IRC mailing-lists with me during the late 90's will probably remember one or two rather vicious ones. (Let me be clear here: this isn't going to be an apology. I was right, you were wrong. End of story.)

Now, though, I have much less time, and pointless arguments were one of the things that had to go. If I get in an online argument these days, I inevitably just end up annoyed that this thing is taking too long, and that the other party in the argument obviously has all the spare time I don't have any more.

So over the last few years I've come up with an informal set of rules for argument. I've never thought of them as such before today: they accreted over time as unconscious heuristics that I am now attempting to put into print. I'm still not perfect in following these rules, but when I do follow them, I end up happier and less frustrated with life than when I don't.

Rule one is scarily simple. You will never change anyone's mind on a matter of opinion. Someone going into an argument believing one thing, and coming out the other side not believing it is a freak occurrence ranking somewhere alongside virgin birth and victorious English sporting teams. People change their minds gradually, and if anything a prolonged argument only serves to back someone into a corner, huddling closer to the security blanket of what they believe.

Correcting a factual error is much easier, but never confuse correcting a factual error with changing the opinions that fact was being used to support. The opinion will survive in the absence of the fact, until a new fact is found to justify it. (See also, the many reasons for invading Iraq).

Seeing as arguing is largely pointless, one of the best things to do is to severely limit what you end up arguing about:

  1. Never seek out things to disagree with. There are too many of them out there, and correcting the ills of the world just isn't your job.
  2. If you come across something you disagree with while randomly browsing, let it pass without comment (see rule 1). If it's truly frustrating, write a reply, then delete it without sharing it with anyone else.
  3. Even in the limited scope remaining, it is not your job to correct everything you find that you disagree with. Try to limit yourself to things where the subject is at least something that makes some practical difference to your life.
  4. Do not argue about politics, religion, or matters of personal taste or comparative morality.
  5. DO NOT argue with Lisp programmers, believers in the Semantic Web, or furries.
  6. Saying something controversial in your own space (i.e. your weblog) is only arguing if you directly reference somebody you are disagreeing with (or it is clearly understood in subtext who you are disagreeing with), and that person is likely to give a shit about what you said.
  7. If someone disagrees with something you've said, you're already in an argument. See below.

Once you find yourself in an argument, your job is now to make your point clearly, and then leave. You are allowed two passes:

  1. State your case
  2. Clarify any misunderstandings

Once you have stated your case, there's no point re-stating it. Going over the same ground repeatedly will damage your case: nobody likes reading the same interminable debate over and over again. Similarly, if people read what you have to say, understand it, but continue to disagree anyway, there's nothing more you can do unless you suddenly come up with a totally new argument. The only productive thing you can add is if people clearly don't understand what you're saying, and you need to clarify.

There's a trap here, though. Sometimes, understanding is experiential. For example, to understand religious belief you must at some level 'experience' God. Someone without this experience can understand the mechanics of belief, but never understand the belief itself. Besides religion, I also have precisely this problem with RDF: I get into long debates where people try to explain the damn thing to me when I already know the mechanics. I just haven't experienced that spark of enlightenment that has gone with it for the True Believers.

If you are in one of these arguments, you can clarify 'misunderstandings' until you're blue in the face, but someone who has experienced the belief will not ever be talking on the same wavelength as someone who hasn't.

After you've stated your case and made a single pass at clarifying any misunderstandings people may have about your case, that's it. Time to leave. Getting the last word is only important in a protracted argument: the longer the argument, the more valuable the last word becomes. Keep the argument short, and it barely matters.

Postscript: October 14, 2014.

And that's where I should have ended the article when I first wrote it, but I decided it was time to be “clever”. The remainder was meant as a satirical coda, a throwback to my days on Usenet where defeating someone in a flame-war was often more important than having either the moral or logical high ground. As this post spread from people who know me personally to a much wider audience, I got more and more worried that someone might take it as serious advice.

To put it bluntly, what follows is a step-by-step guide to Tone Trolling. It's a horrible, cynical way to argue, and one that is far too often used by people in positions of privilege (who have the luxury of being emotionally distanced from whatever they're arguing about) to silence people who are legitimately angry about something that affects them.

Don't do it.

Meanwhile, if you liked this, you might also like: Charles’ Rules of Online Forums

Original transmission resumes…

Sometimes, you'll ignore all these rules, and get into a month-long argument about RDF with a fundamentalist gun-nut emacs-user. What then?

The ideal attitude to project during any argument is one of calm disinterest.

Any emotional involvement you show is a weakness that can be exploited by your opponent. Even being passionate about your subject is dangerous, because over time passion becomes zeal, and zeal becomes shrillness. Affect the air of someone who is completely convinced of their correctness, but does not really care that the rest of the world is so stupid as to not realise it.

If you can get away with it, try for a mildly amused disinterest. It will infuriate your opponent, and if your opponent gets angry while you're remaining calm, that is a distinct advantage, especially when there is an audience involved. People who are sitting on the fence in a debate will naturally gravitate to the speaker who is perceived as being reasonable.

Other useful techniques are being nasty out-of-band, in the hope your opponent will bring that into the debate, or saying something inflammatory and then immediately retracting it: your opponent will run with whatever it was you said, while the audience discounts it due to the retraction. Both these techniques will make you enemies, but generally they'll only make enemies out of people who don't agree with you in the first place.

Amused disinterest also gives you a face-saving escape plan: if you were never emotionally invested in the argument, you can walk away from it without conceding defeat.

I found this scathing denunciation of wiki-markup via Mark Pilgrim's b-links. Seeing as I've spent the last few months writing a product that uses Wiki markup as its basis, I thought I might come to the markup's defense.

Thanks to a worldwide effort that could have built the Great Wall of China at least once over, there is a single system for text markup [HTML + XML] that is regular, full-featured, and mature.

Well, that's my first objection. HTML is only regular if you stick to a single browser (and avoid points of contention like the <q> tag), only mature if you ignore the fact that the various XHTML working-groups are busy uprooting large parts of the spec because they're considered dead-ends, and only full-featured if you define "full-featured" as "doing all those things that HTML allows you to do."

How do you write a magazine-style multi-column layout in HTML? You don't. Do you miss it much? No, not really, because the web really isn't that well-suited to that kind of markup. In much the same way, a wiki really isn't suited to most of the more complicated things you can do with HTML.1

Wiki markup solves a set of problems that are important for Wikis to solve:

  1. Pages are editable solely in the web browser, not requiring any additional software to be installed or called up for editing.
  2. The page in the text area gives visual clues as to what the finished page will look like: bullet-points look like bullet-points, and the various kinds of inline emphasis look like the sort of thing we've been using in email for years.
  3. The markup is simple enough that it can be described very quickly. The important parts of Confluence's markup can be described succinctly in a side-bar on the edit page.
  4. The Wiki doesn't have to worry about defending itself against the latest Cross-Site Scripting technique or whatever markup crashes Internet Explorer today.
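
As an illustration of point 2, here's roughly what Confluence-style wiki markup looks like next to the HTML it replaces (the exact syntax varies from wiki to wiki):

```
Wiki markup:               Equivalent HTML:

h1. Shopping List          <h1>Shopping List</h1>
* milk                     <ul>
* _free range_ eggs          <li>milk</li>
                             <li><em>free range</em> eggs</li>
                           </ul>
```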

These are important. The point of a Wiki is to reduce the barrier between viewing a page and editing it. Wikis are about ease of contribution. The more obstacles you put between viewing and editing, even small ones like having to fire up an editor, the less likely people are to edit. You have to enable editing in the same application that is being used for browsing, and since people already grok and enjoy browsing their hypertext in a web browser, that's where we have to be.

Sure, you end up with something that's significantly less powerful than HTML. This is a feature. A wiki page isn't a place for complicated markup, it's for writing stuff down. The more power you put in the markup language, the more people are going to be wanking around with the precise arrangement of angle-brackets that will make their paragraphs step from left-to-right in pixel-perfect harmony... in lieu of saying something.

HTML is a language designed for machines. It sucks for people to read, and sucks for people to write. This page contains a good sample of the HTML source for some content side-by-side with the Textile-based Wiki markup. The latter is far easier to read, and believe me it was far easier to type.

The real answer, as far as I can tell, is that when dumb textareas are the only user interface a Wiki provides, then the expressiveness of full HTML is too much for the average user, or even the lazy advanced user. Somehow that doesn't lead me to believe that the answer is to twist the world to fit into a textarea. Look at SubEthaEdit. All over the world Mac weirdos are editing the same document simultaneously. The magic isn't found in using some broken punctuation-based markup. (And, no, the magic wasn't within you all along, kid.)

"The expressiveness of full HTML is too much..."? I would say that the verbosity is too much, the complexity is too much, and the way it obscures what you're actually writing in a mess of angle-brackets is too much. And GUI editors don't help. The only GUI HTML editors that are fully-featured enough to harness "the expressiveness of full HTML" are authoring applications firmly targeted towards professional web designers. The ones that are simple enough for Joe Word User to get into are either on the same level of expressiveness as wiki-markup, or they produce something that doesn't remotely resemble HTML, or both.

We've had dedicated, specialised networked collaboration software before, albeit not with SubEthaEdit's immediacy. Interestingly enough, it's exactly that sort of software that people are moving to wikis from.

Against a collaboration software that combines the expressiveness of HTML, simple, intuitive WYSIWYG, the seamless networking of SubEthaEdit, all wrapped in a package that people actually want to use regularly, we have wikis. Wikis have the advantage of... well... existing. They also take advantage of the fact that most people already have a web browser2, know how to use it, and with a minute of pointing can be shown enough markup to make a meaningful contribution to a document.

1 And how about footnotes? Or image captions? Or a combo-box control? Or all the other things missing from HTML that have to be retrofitted through obscure CSS hackery, if they can be retrofitted at all. Full-featured my arse. Reasonably full-featured, I'll accept. For its domain. Mostly.
2 Of course, if you want to install additional software to get a better editing/navigation experience, you can.

I've started receiving emails that look something like this:

Dear user of Pastiche.org e-mail server gateway,
Your e-mail account has been temporary disabled because of unauthorized access.
Pay attention on attached file.
Have a good day,
The Pastiche.org team       http://www.pastiche.org

The attachment, obviously enough, is yet another email-borne worm. However, seeing as I am the Pastiche.org team (and have been since 1997), these mails always give me a bit of a double-take.

Once, when I was more active on Usenet, somebody complained to my postmaster about me: I had apparently committed some egregious crime against humanity by flaming him, and my account deserved to be pulled immediately. The postmaster@pastiche.org address is aliased to my regular inbox. I replied (from the postmaster address):

Thank you for registering your complaint about this user. He is a habitual troublemaker, and in accordance with our zero-tolerance policy for Internet troublemakers we have had him shot. Have a nice day.

Buying a Sofa

  • 2:18 PM

I don't know... it's just... When you buy furniture you tell yourself: "That's it. That's the last sofa I'm gonna need. Whatever else happens, I've got that sofa problem handled." -- The Narrator, Fight Club.


I've never actually owned a sofa before. My first apartment came with furniture, including an old, worn, fold-out sofa-bed that was incredibly uncomfortable to sit on, and threatened to swallow you into the gap between its cushions at any moment. For my second apartment, I borrowed a couch from my mother, which I had to return when I left Western Australia. My first apartment in Sydney was too small for a couch: the only comfortable vantage-point for watching TV was the bed.

I have the feeling that 28 is perhaps a little too old to be buying one's first couch.

Guy Steele, on ll1-discuss

And you're right: we were not out to win over the Lisp programmers; we were after the C++ programmers. We managed to drag a lot of them about halfway to Lisp. Aren't you happy?

Scott McKay in follow-up

I would be happier had the hype campaign not presented Java as the ultimate programming language, obviating any need for any more programming languages, ever. Yes, it dragged a lot of people halfway to Lisp, but it also tricked them into thinking that that's as far as they need ever look.

Jeremy Hylton, closing the thread

If the hype campaign had said "Hey! We've got this language that's about halfway between where you are and a really good language." it probably would not have been effective.

(Found via Planet Lisp)

I remember a previous post in which I compared a simple loop in eight-or-so languages; Alan Green responded "The only "fair" comparison here is with C++. Java was designed to be a better C++, and it is." This is ironic, since Alan was almost immediately moved to a C++ project, and seems to have been enjoying it quite thoroughly.

I was honestly planning to make some kind of point here, but I seem to have lost it on the way.

Discontinuation

  • 10:00 PM

One of the reasons I like blogging is that writing about a subject forces me to consider it in more depth than I would if I were just mulling over it in my head.

For example, I sat down tonight to write a sanity-check about continuation-based web programming, and in the process of gathering my thoughts on why I'm wary of the idea, I've managed to develop significantly more respect for it than I had before I started writing. As I examined each of my issues with the concept in enough detail to write about them, I discovered they weren't really problems at all.

Except one, at least with the two frameworks I looked at.

In the oft-cited Smalltalk continuation framework Seaside and its Ruby port Borges, all links are anonymous callbacks, and thus are meaningless, transient strings bunged on the end of the application's base URL. The developer of Seaside has this to say about it:

People often complain about having "meaningless" numbers in the URLs, but they enable a level of abstraction over HTTP that wouldn't otherwise be possible.

To me, that's a deal-breaker. The URL is a Uniform Resource Locator. Making URLs no longer point to resources takes an enormous amount of the web's power away in order to make the programmer's life easier.
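
To see why the URLs come out meaningless, here's a hypothetical sketch (invented class and parameter names, not Seaside's or Borges's actual API) of the callback-registry pattern such frameworks use. Each link is a freshly generated token mapped to a stored block, so the URL identifies a point in one session's control flow, not a resource:

```ruby
require 'securerandom'

# A toy model of continuation-style link generation: links are opaque
# tokens pointing into a per-session callback table.
class CallbackRegistry
  def initialize
    @callbacks = {}
  end

  # Register a block and return the meaningless, transient URL
  # that will invoke it.
  def link_to(&block)
    key = SecureRandom.hex(8)
    @callbacks[key] = block
    "/app?_k=#{key}"
  end

  # Dereference a URL back to its callback. A URL from someone
  # else's session (or a restarted server) simply isn't there.
  def invoke(url)
    key = url[/_k=(\h+)/, 1]
    callback = @callbacks[key]
    raise "This is not your session" unless callback
    callback.call
  end
end
```

Paste one of these URLs to a friend and their browser hits a callback table that has no such key: the "This is not your session" experience, straight from the design.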

As a random example, take web stores. I can copy an Amazon URL from my location bar, paste it to a friend in e-mail or an IM, and that friend will see exactly the same product that I'm looking at1. Whenever I go to a site, find something I want someone else to look at, paste them the URL and discover they've encountered a "This is not your session" message, I make a strong mental note to avoid that site in the future.

Getting rid of bookmarking and deep-linking essentially lobotomizes the web, and to me makes such frameworks useless for most public-facing web applications2.

Even for internal systems: ever implement a login system for a web application? There's a reason that the post-login redirect has to take the user to the deep-linked page they were trying to reach originally, rather than the application's front page: users demand it.
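
That redirect dance is trivial to sketch (hypothetical names throughout, not any particular framework's API): remember the URL the user originally asked for, then send them back to it once they've authenticated.

```ruby
# Minimal sketch of the post-login deep-link redirect.
class Session
  attr_accessor :return_to, :user

  def initialize
    @return_to = nil
    @user = nil
  end
end

# Called when an anonymous user hits a protected page: stash the
# deep link, bounce them to the login form.
def require_login(session, requested_url)
  session.return_to = requested_url
  "/login"
end

# Called after credentials check out: back to the page they wanted,
# not the application's front page.
def after_login(session, user)
  session.user = user
  session.return_to || "/"
end
```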

1 If you can do this with Seaside, it would essentially be session hijacking. Even more nastily, the continuation-based approach would mean someone who intercepted a URL (or dredged it out of a recent browser history file) would be able to seamlessly resume some other user's session at an arbitrary point.
2 The most likely exceptions being completely private applications with a shallow interface such as web-mail or banking.

I spent most of Sunday afternoon doing technical support for my father (for which I was paid in beer). The task was to set up an 802.11g wireless network around his new cable connection so that the three different computers in the house could all talk to the outside world. Two of three Windows boxen were set up with relative ease -- plug in hardware, start up, insert CD, keep pressing OK, reboot, done. The third Windows box was somewhat more recalcitrant, but we eventually traced this back to a serious hardware problem, for which we really can't blame the Operating System.

A cow orker also spent the weekend setting up a wireless network, except he was doing it under Linux. I arrived at the office to hear stories of having to fool the kernel into accepting the precompiled driver module, thus avoiding the hassle of having to merge the module source into the kernel source tree and recompile everything. He was really happy about getting it done over the weekend, because he was afraid he'd end up wrestling with it for weeks.

Jamie Zawinski:

If you made a Venn diagram, there would be two non-overlapping circles, one of which was labeled, "Times when I am truly happy" and the other of which was labeled, "Times when I am logged in as root, holding a cable, or have the case open."

Once upon a time, back before I dropped out of university, I enjoyed all the mindless arsing about that was necessary to get a Linux box to do anything mildly useful. Linux's inconsistencies, the plethora of weird and wonderful configuration files, the ever-changing procession of desktop environments, all of this was a challenge. Something new to learn. I felt my horizons expanding.

Nowadays, the novelty has decidedly worn off. I can't just skip a lecture if I want to spend time configuring BIND. I don't find it very interesting any more to have to think too much about my computer. The time I spend thinking about my computer is time I could be spending thinking about the things I want to do with that computer. Wading through long instructions on how to get Postfix and SASL working together is not how I enjoy spending my afternoons any more.

I still run Linux at work because I can't program without the Unix tools around me, and every time I use Cygwin I feel the immense philosophical disjunct between the Unix tools and their Windows environment. But I think the above explains why at home last week, I turned my last Linux box off and now it sits unpowered next to its year-idle Win2k counterpart.

Google Evil?

  • 12:48 AM

I've always basically believed that Google were genuine about their "don't be evil" credo, despite occasional growing pains and stuff-ups. Now, however, I'm wondering if it's really a smokescreen for something far more sinister...

When their web API is precisely 666k in size, you know you're facing a truly subtle and insidious form of evil.

Hani has been wondering whether good code is relevant:

Excluding the children amongst you, almost everyone has seen projects that have awful awful code succeed, along with projects using all the right buzzwords and cool frameworks fail. Out there in the real world, I'd be amazed if there's any correlation at all between success and anything that actually matters to a developer.

Reading this gives me one of those "Well... Duh!" moments. There are so many factors that go towards a project succeeding or failing that one of them -- any one of them -- will get lost in the statistical noise if it is singled out. We all know projects that have been based on good code and well-practiced methodology, but never seen the light of day because something else went wrong. And we all know projects that have done everything wrong, but have bludgeoned through with sheer bloody-mindedness and force of will.

The thing about writing good code is that while you might still fail with it, at least if you fail you'll know it wasn't the fault of the code1. As a programmer, you can't take responsibility for the relationship with, or demands of, the customer; you can't take responsibility for changes in the economy or budget cut-backs; you can't take responsibility for the market-research that drives the project or the marketing that must sell it when you're done.

As a programmer, you're responsible for the code. If any of these other things get screwed up, your project may well be doomed. If any of these other things are done well, your project may succeed despite bad code. But at the end of the day, your job is to make the best of what you're being paid to do.

Much of Hani's wrath falls on the creation of web frameworks.

All these frameworks and web doodahs are more often than not simply the product of a hopelessly bored mind desperate to inject some sense of meaning into their daily grind. All the business asked them to do was product an app that solved a specific need. Nobody told them to go invent a framework for it, or to maximise reusability, or to componentise the moving bits, or to use TDD, or to opensource anything.

If I were to write a web application of any significant size using purely YAGNI methods: starting with a blank slate and only ever writing code that takes me towards concrete functional goals, it wouldn't be long before I started seeing that I was wasting a great deal of effort. I'd be copying and pasting a whole lot of code, and modifying the app to add new stuff would be increasingly difficult.

So I'd start refactoring the repetitive stuff into modules that I could reuse. This wouldn't be the result of a hopelessly bored mind; it'd be necessary for me to be able to work faster. I could do without it, and it probably wouldn't mean the difference between the success and failure of the project. Lots of projects have succeeded through cut-and-paste programming. But by reducing the amount of code you have to write (or paste) for every function, you reduce the probability of bugs2, reduce the amount of time it takes to write the damn thing, and generally save somebody some money.

Do that enough, you'll end up with a framework.
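
That progression can be shown in miniature (hypothetical handlers, invented for illustration):

```ruby
# Before: every page handler repeats the same wrapping boilerplate,
# copied and pasted from the last one.
def show_user(name)
  "<html><body>" + "User: #{name}" + "</body></html>"
end

def show_product(title)
  "<html><body>" + "Product: #{title}" + "</body></html>"
end

# After: the repetition is factored into a module. Do this enough
# times, across enough handlers, and the module is your framework.
module PageHelper
  def render(body)
    "<html><body>#{body}</body></html>"
  end
end

class ProductPage
  include PageHelper

  def show(title)
    render("Product: #{title}")
  end
end
```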

Work on enough web application projects, and you'll get sick of working up to the framework from scratch. You know raw web applications are tedious exercises in repetition. You know you've solved this problem before. YAGNI gives way to IKIFNI (I Know I Need It).

I've recently started writing a web application in Ruby that will never see the light of day (it's purely a personal exercise to keep my brain ticking over), and the first thing I did, in accordance with IKIFNI, was to look around at the available web frameworks. Having spent the last three months working on a project based on Spring, WebWork2 and SiteMesh, the last thing I want to do is go back to plodding around doing the same things that these frameworks have helped me avoid having to do.

Unfortunately, I couldn't find one that didn't seem like the bastard child of Struts3. So I started building my own4. Because I know I'm going to need it.


1 Like all blanket statements, this isn't always true. In some death-march cases, you're best off writing really bad code really quickly, just to get something up and avert the cancellation of the project. But for the general case, it's a good rule of thumb.
2 As bad as "lines of code" are as a metric of productivity, they're still a remarkably good predictor of bugs.
3 Give a reasonably good OO developer the task of building a web framework, their first attempt will probably be something resembling Struts. Which I suppose gives good support to the "Build one to throw away" theory.
4 Which bears only superficial resemblance to any of the aforementioned, but I suppose pays homage to each.

mapper = Mapper.new()
mapper.add_mapping(".*\.ahtml", AmritaAction5, { Mapper::ALL => :render }, {})
mapper.add_mapping("/hello", HelloWorldAction,
  { Mapper::POST => :greet, Mapper::GET => :display_form },
  { :input => "/form.ahtml", :greet => "/out.ahtml" })
mapper.bind_to_server("/", webrick)

5 Amrita is a temporary measure. It's a clever idea, but one gets the feeling nobody's ever done anything serious with it, because its limitations became painfully apparent just from writing Hello World.

After three years or so of blogging, I have come up with one rule. This rule applies to me, it may not apply to you.

The longer I spend thinking about a 'bloggable' topic, the less likely it is ever to grace these pages. This, for example, explains why this site is host to so many "part one" articles that never see a part two.

To me, blogging hits that sweet spot where it is 90% inspiration and 10% perspiration. Fired by an idea that is filling my brain, I sit at the keyboard and type. Posts rarely see a second draft beyond simple spelling and grammar corrections. Words flow from my brain to the keyboard, out to the world, and then I'm done with them bar the comments.

I have a list. I've got a little list: an honest, formal list in VoodooPad of topics I plan to write about. Some of these topics have been on this list for more than a year. And I can't say any of them are any less interesting to me today than they were when I put them on the list.

The single attribute they share, however, is that I've allowed myself time to think about them. The inspiration has faded, but in thinking I've given myself even more ideas that I need to perspire over before the article is done.

And that's what kills them.

Found on Erik's link blog: CNBC and MSNBC both incorrectly reported the Stewart verdict.

The culmination of a trial for a woman who built her homemaking empire in large part on television drew intense interest from TV networks. ABC, CBS and NBC broke into regular programming to report the verdicts.

With cameras not allowed in the courtroom, networks had to devise intricate plans to get the news out — involving scarves, placards, cell phones and quick feet.

Let's be realistic for a moment. What is the difference in elapsed time between:

  1. A reporter holding up a placard with the bare minimum summary of the verdict on it, and
  2. The reporter making notes about the verdict, carrying them to where the broadcast is happening, teaming up with the producer to write a clear summary, and having the talking-head read it on air.

The latter holds the advantage in every single area but one: it is more informative, more accurate... and takes maybe ten or fifteen minutes longer.

The former isn't journalism, it's newstainment.

By any objective measure, the Martha Stewart verdict shouldn't have been something the world needed immediate, within-the-minute notification of. Stewart was a public figure found guilty of giving false evidence. Most of the interest in the case came from the contrast between the conviction and her clean homemaker image. What difference would fifteen minutes make to a public interested in finding out her fate after a protracted trial? What difference would waiting for the evening bulletin make?

None whatsoever. But you can make it matter to people if you feed the drama the right way. You can convince people that this is something that should matter to them, that they should be on the edge of their seats, demanding to know the outcome as soon as the judge hands down the verdict. If not sooner.

One of the best ways to entertain is to manufacture excitement, and one of the best ways to manufacture excitement is to manufacture a sense of urgency, whether there is one or not. Stress how we're waiting for the verdict. Create an artificial deadline. Create an atmosphere where you are rushing to bring the news as quickly as possible, and that urgency will be infectious for the audience.

Hey, the same tricks work on 24, and we know that none of the people in the show really exist. That guy isn't really the president, nobody's really trying to kill him, and Kiefer Sutherland's not really saving the world. If we can get hooked by fiction, the same tricks can get us hooked on semi-fact.

And by making us excited and getting us hooked, eyeballs are delivered to advertisers. Newstainment.

It was the same thing during the most recent Iraq war. We were told over and over how important it was that we were getting constant, 24-hour news coverage. How important it was that we were getting information from people on the ground within seconds of it happening. How lucky we were that reporters were embedded with army units so we could have these first-hand accounts.

Journalism suffered, of course. The news we got was almost universally poorly fact-checked, poorly analysed and, in the case of the embedded journalists, stripped of any last vestige of journalistic impartiality. But journalism isn't important any more. It's no longer good enough to live off the profits that good reporting can bring in: you have to maximise shareholder value.

One of the most disheartening things about writing software is the fact that no matter how well you think you've done, users will always find bugs, and they'll be annoyingly obvious in hindsight. You'll stand there banging your head against the wall, wondering how the hell you didn't think of that edge-case.

Anyway, I was watching the qualifying today for the Melbourne F1 Grand Prix, and Mark Webber was just finishing the first sector of the lap. The best split time was 26.265s, and he exactly matched it.

The split display on the screen showed his time as:

Webber      Difference      Alonso
26.265      -26.265         26.265

Suddenly I feel slightly better about my own edge-cases.
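Purely as idle speculation (I have no idea how the actual timing software works), here's one way a difference column could produce that display: treat an exact tie as "no stored best time" and let a zero default leak into the subtraction. Everything below — the method name, the sign convention, the zero default — is invented for illustration:

```ruby
# Hypothetical reconstruction of the split-time graphic's difference
# column. Nothing here is based on the real broadcast software.
def difference_column(current, best)
  # Speculative bug: an exact tie is treated as "no best time yet",
  # so a default of 0.0 stands in for the stored best split.
  best = 0.0 if current == best
  (best - current).round(3)   # invented sign convention
end

difference_column(26.265, 26.265)   # => -26.265, as broadcast
```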

Defending YAGNI

  • 12:27 AM

Apparently, YAGNI is arrogant. I feel that given the amount of stick it's received over the last few days, I should put a word in for it.

First, let's clear away the straw men, and present a reasonable argument for "You Aren't Going to Need It".

  1. You can build it now, or you can build it later
  2. If you build it now, it will take 'x' amount of time
  3. If you build it later, it will take 'y' amount of time
  4. You don't need it now
  5. That's 'x' amount of time that could have been spent on something you do need now
  6. Or, that's 'x' amount of time earlier that you could have delivered working code, if you weren't busy doing something you didn't need to do
  7. If you do it, and it turns out you don't need it after all, that's 'x' time wasted
  8. If you don't do it and it turns out you do need it, you've wasted the difference between 'x' and 'y'
  9. Additionally, unused code, or code that is complicated due to requirements that are not yet realised:
    1. Is still a potential source of bugs
    2. Slows development
    3. Hinders maintenance

YAGNI tells us that too often we over-estimate the difference between 'x' and 'y', and we over-estimate the probability that we will actually need the thing that we envision now. Chances are that later, when we need it, it will take a completely different form because of the way it must interact with other pieces of the program that have been created since.
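The trade-off in points 7 and 8 reduces to a toy expected-cost comparison. All of the numbers below are invented for illustration; the point is only the shape of the arithmetic:

```ruby
# x: cost to build the feature now; y: cost to build it later;
# p: our guess at the probability we'll ever need it at all.
def expected_costs(p, x, y)
  build_now   = x       # paid up front, wasted with probability 1 - p
  build_later = p * y   # paid only if the need actually materialises
  [build_now, build_later]
end

# Two days now versus three days later, with a 40% chance of needing it:
now, later = expected_costs(0.4, 2.0, 3.0)
# Deferring costs 1.2 expected days against 2 certain days. Building now
# only wins once p climbs past x / y — and YAGNI's claim is precisely
# that we habitually overestimate both p and y.
```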

YAGNI is a defense against pretty much every programmer's desire (my own included) to find general solutions, even when we are being paid to do something specific. We often fail to notice that the cost of making something flexible enough to meet future requirements now is about the same as the cost of adding that flexibility later. We vastly underestimate what a good programmer can do with a codebase, given a free afternoon and a good refactoring-aware IDE.

Is it really that scary now? Stripped of the zealotry on both sides, it's a pretty simple equation. In many cases, the values of x and y are pretty close together, even indistinguishable. This is the case with most individual 'features' of a program. In many others, there are some basic design and factoring steps you can take to move them closer together and avoid problems in the future. This is why we tend to layer systems and encapsulate components.

Of course, applying any doctrine blindly is dangerous. In some situations, such as when you are publishing interfaces to a third party, you just can't bring the two values closer. You just have to bite the bullet and decide if you're willing to wear the risk of spending the time on something that you may not need. Similarly, the cost of adding things "later" that pervade an entire program, like security or robust error-handling, spirals out of control the bigger the program gets, and must be catered for from the start.

YAGNI protects us from wasting time, and protects us from over-architecting. Applied blindly, it can lead us to code ourselves into corners that take an age to dig out of. Taken to ridiculous extremes, it results in overly simplistic software that collapses under its own weight and has to be rewritten from scratch. Used rationally, however, it tells us not to implement features or architect clever "flexible" solutions just in case: it asks us to keep in mind that there is a cost to everything, and that we should prioritize development of things we know we need over things we might need later.

It's not a radical idea, really.