April 2003


30 Apr

Throwing null

  • 4:21 PM

Question:

What does the following code do?

public class ThrowingNull {
    public static void main(String[] args) {
        ThrowingNull nully = new ThrowingNull();	

        try {
            nully.throwNull();
        } catch (Exception e) {
            System.out.println("Caught: " + e);
        }
    }

    public void throwNull() throws Exception {
        throw (Exception)null;	
    }
}

Answer:

Caught: java.lang.NullPointerException

A stack trace points to the null being converted into an exception at the point it was thrown (inside the throwNull method). I couldn't find anything explicit dealing with this case in either the Java Virtual Machine Specification or the Java Language Specification. The closest seems to be Section 2.16.4 of the VM Spec, which lists various RuntimeException classes:

NullPointerException
An attempt has been made to use a null reference in a case where an object reference was required

Obviously, a throw statement requires an object reference (the exception), so passing null to it fits this criterion. It's more implicit than explicit, though.

Update: Norman Richards points out that this is, in fact, documented in the JVM spec, under the athrow instruction. Therefore this behaviour is explicit and predictable, albeit rather obscure.

If objectref is null, athrow throws a NullPointerException instead of objectref.

Today, in a fit of the kind of self-referentially ironic vanity that can only truly be mastered by those of us born on the cusp of Generations X and Y, I set up my Windows 2000 machine at work so that in the sidebar of the My Documents folder would appear an image from my webcam.

At the time, I thought “This is quite neat. I guess it's one thing I can do in Windows that I can't in OS X.”

After finishing the folder customisation, I closed the window, and called over a cow orker to show it off to. “Look at this!” I said, opening Windows Explorer. I was met with the most incredible expression of un-impressedness. Then again, even if Explorer had done what I'd asked it to, I doubt I would have got a different expression.

My careful folder customisation had been clobbered back to the default. I poked it a few times, but stubbornly, it stuck to the Microsoft party line that this was a folder in which to store documents, not a rather nice view of Sydney Harbour.

They're not really My Documents, you see. They're Bill's.

Rank and File (yet another Blogger Without a Name) recounts a bad experience with checked exceptions:

Moving at full speed to make insane deadlines a lot of corners were cut. Exceptions became nuisances rather than boons and a lot of swallowing happened.

...

...All this swallowing has left the system on precarious ground. If an error occurs, and they always do, the state of the system becomes compromised and unknown. A swallowed IOException manifests itself as a NullPointerException in subsequent requests.

Ignoring the checked/unchecked distinction for now, there are three kinds of exceptional situation:

  • The kind you can ignore. These are very rare. The only one that springs to mind right now is InterruptedException: 99% of the time it doesn't really make any difference if your Thread.sleep() has finished naturally or been interrupted.1
  • The kind you can recover from here. Maybe you need to re-establish a connection, try a different file, or just wait another five seconds and try again.
  • The kind you can't recover from here. You are now in an error state. The application will need to compensate for this error somewhere, but you don't know how to do that here.

The only place where it is acceptable to swallow exceptions is in the first case: if the throwing of the exception means absolutely nothing to your application, that's the only time you can ignore it. Which reminds me, I often see this:

try {
    Thread.sleep(TICK);
} catch (InterruptedException e) {
    e.printStackTrace();
}

What possible use is this stack-trace? If you're going to be swallowing an exception, have the courage to swallow it silently. Either something is a problem that needs to be dealt with, or it isn't. If you can't muster that courage, if you have one of those guilty pangs every time you see an empty catch block, it's a good sign that swallowing the exception was the wrong thing to do in the first place.
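
To be concrete, a deliberate, silent swallow might look like this (a minimal sketch, reusing the TICK constant assumed in the snippet above):

try {
    Thread.sleep(TICK);
} catch (InterruptedException e) {
    // Deliberately ignored: for this sleep it makes no difference
    // whether we woke up naturally or were interrupted.
}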

For the second case, you need to have code that comprehensively deals with the error. For the third case, you need to either throw an exception yourself, or make the error part of your method's return value.

Sometimes, when you're rushed, you might just consider swallowing the exception, printing an error message, and keeping going. Maybe you don't want to have to put together a consistent package of exceptions for your application. Maybe you know there's no error handler at all. Or maybe you just don't like checked exceptions on principle. In this case, throw an unchecked exception.
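
By way of illustration, here is a minimal sketch of that last option (the method names and the java.io imports are assumed for the example; the cause-taking RuntimeException constructor has existed since JDK 1.4): wrap the checked exception in an unchecked one and let it propagate.

public void saveConfiguration(File file) {
    try {
        writeConfigurationTo(file);   // hypothetical method that throws IOException
    } catch (IOException e) {
        // Can't recover here, and don't want to force every caller to
        // declare IOException, so wrap it and let it fly.
        throw new RuntimeException("Could not save configuration to " + file, e);
    }
}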

If you have an error you can't swallow, that you can't deal with on the spot, and that you can't deal with higher up in your application, then your application is broken. The correct behaviour for a broken application is to crash immediately. A broken application that tries to limp onwards in an inconsistent state is a danger to itself, its data, and ultimately its users. The best way to make sure this happens is to throw an unchecked exception. If someone knows how to deal with the error, they'll catch it eventually. Otherwise, it'll cause your application (or at least the current use-case) to die without causing any further damage or corruption.
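
As a hedged sketch of how that plays out (the request-handling names here are invented), one catch at the boundary of the use-case is the only place the unchecked exception needs handling; everything below it simply lets the exception fly, and only the current use-case dies:

import java.util.Iterator;
import java.util.List;

public class RequestLoop {
    public void serviceRequests(List requests) {
        for (Iterator i = requests.iterator(); i.hasNext();) {
            String request = (String) i.next();
            try {
                handle(request);
            } catch (RuntimeException e) {
                // The current use-case dies; report it and move on,
                // rather than limping onwards in a corrupt state.
                System.err.println("Request failed: " + request + ": " + e);
            }
        }
    }

    private void handle(String request) {
        // Imaginary work that signals a broken state by throwing
        // an unchecked exception.
        if (request.length() == 0) {
            throw new IllegalArgumentException("empty request");
        }
    }
}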

1 So much so, that I wish they'd done away with the exception entirely, and just had Thread.wasInterrupted() instead for the rare case it was important.

Classic Testing Mistakes by Brian Marick dates back to 1997, but this is the first time I've read it. It's a very good, common-sense guide to software testing.

Useful topics include the treatments of code-coverage tools (they're useful as a tool, useless as a metric), how to react to bug reports (re-examine the tests for that area of code to see how the bug slipped through, and fill any holes this reveals), and the examination of when you should, or shouldn't automate a test.

ajeru1 still thinks we're being betrayed by types. I can't help thinking (s)he is mis-using typing to make a point. Particularly, the solution proposed in the first “we have been betrayed” article is just as inappropriate for these new problems as existing OO typing is. Certainly, if the thing deciding an object's state doesn't belong inside the object, then neither can the behaviour that results from that decision.

The Naming of Cats is a difficult matter,
It isn't just one of your holiday games;
You may think at first I'm as mad as a hatter
When I tell you, a cat must have THREE DIFFERENT NAMES.
     —T S Eliot, The Naming of Cats

In Java at least, an object's type represents three different things:

  1. The messages that the object will respond to. This defines the object's interface.
  2. The code that the object will use to respond to these messages. This defines the object's implementation.2
  3. An attribute of the object, of type Class.

Sometimes, people get stuck on the third one, and give it far more importance than is really necessary. There is nothing magical about the identity of an object's class. It's just another attribute of the object. Its primary value is as a way of determining which messages we can send to an object, something that is done often enough that Java gives us the short-hand operator instanceof instead of making us call Class.isAssignableFrom(). However, the existence of this operator shouldn't make us feel that the class attribute as an attribute is any more or less important than any other.
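
As a small aside, a minimal sketch of the short-hand versus the long-hand:

Object o = "some string";

// The short-hand operator...
boolean viaOperator = o instanceof String;

// ...and the long-hand reflective equivalent.
boolean viaReflection = String.class.isAssignableFrom(o.getClass());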

As ajeru notes, when you use the class as a straight attribute, it's incredibly inflexible because an object's class is immutable. There are various patterns you can use to change the behaviour of an object at runtime, or even to provide an extensible interface, but there's no way to change the attribute-value of an object's class. Which is fine, because you are still free to add as many read/write attributes as you like to an object.

Some languages do allow you to change the type of an object at runtime, or in the case of Smalltalk's become:, replace one object with another. This is still a terrible solution to the state-change problem, because the moment you have more than one set of states that need to be applied concurrently, you end up with a complete mess of classes.

Problems that arise from making something part of the type when it shouldn't be tend to end up as “Doctor, it hurts when I do this!” problems. If the class is a bad place to store a certain bit of information about an object (e.g. whether a person is considered an adult or not), for whatever reason, then don't store it there! Your class hierarchy shouldn't correspond exactly to a real-world taxonomy; you shouldn't have every natural-language is-a represented by a type in your program. That isn't the function of a class at all.
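
A hedged sketch of what that looks like in practice (the Person class and the age-18 rule are invented for illustration): the “adult” question lives in a plain attribute and an ordinary method, not in the class hierarchy.

public class Person {
    private int age;    // just another read/write attribute

    public Person(int age) {
        this.age = age;
    }

    public void setAge(int age) {
        this.age = age;
    }

    // No AdultPerson subclass required: the answer can change over the
    // object's lifetime without the object changing its class.
    public boolean isAdult() {
        return age >= 18;
    }
}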

It's not the hammer's fault that you can't hammer in a screw.

1 memo to all bloggers: please put something on your weblog homepage making it clear what your name is, or at least what you would like to be referred to as. Usernames or page titles are just so impersonal.
2 For more on interface vs implementation, you might want to read my previous article, Inheritance Taxonomy

Cat-Strangling

  • 12:14 AM

Whilst strolling across Circular Quay, devilfish and I had one of our “agree to disagree” moments. This time, though, it wasn't about religion or capitalism, it was about bagpipes.

I detest bagpipes.

Even when played perfectly, the bagpipe sounds like a cat being strangled. It is an instrument with no reason to exist. Scottish armies used the bagpipe as a way to intimidate armies with better taste in music.

If there is one musical instrument I would love to see obliterated, that's it.

In a previous post, I noted Paul Graham had described Java as an “evolutionary dead-end”. After a lot of thinking about what this means, I've come to the conclusion that it's not really such a bad thing for Java to be. After all, the crocodile is an evolutionary dead-end too, but we don't hold that against it.

Firstly, I must point out that Graham takes pains to state that he is not being directly critical of Java in calling it a dead-end, he is just noting that in one hundred years time we will consider it the end of a branch on the evolutionary tree, rather than a step towards something better. It is partly this sentiment that I take into the essay: that being a branch is not necessarily a bad thing.

Graham first uses the “evolutionary dead-end” label on COBOL, knowing, of course, that he will get no argument from his audience that COBOL is primitive. In his words, COBOL is Neanderthal1. But we should also remember that there is another evolutionary dead-end related to the Neanderthal, a species we call Homo Sapiens. While one dead-end died out, the other managed to survive another 30,000 years and eventually take over the planet.

So what does it mean to be an evolutionary dead-end? It means, simply, to have evolved to the point at which further evolution is unnecessary. For some species, like the crocodile, this means to be so well adapted to your environment that so long as the environment remains intact, there is no need to change. Unfortunately, such a dead-end is also so well-suited to its environment that if that environment is destroyed, the species quickly becomes extinct. For other species, like Homo Sapiens, this means to have evolved to the point that you remake the environment in your own image, and thus no longer need to adapt.

If Java is an evolutionary dead-end, it is because it has taken a particular path as far as it can be taken in that direction. Any language that descends from Java will look so much like it as to be considered part of the same species, like a new breed of cat (cf. C#). None of the attributes that make up Java are unique, so languages that adopt them and branch in new directions will quite correctly mark their parentage from somewhere behind Java in the evolutionary tree of languages.

So what are these features that, combined, make up the path that Java has taken to evolutionary redundancy? Let's start with the good ones:

  • Smalltalk-like single-rooted object model and pervasive use of objects.
  • Strong, static typing. Dynamically typed languages seem to be winning the mind-share battle, but I've put this under the good list, because it makes Java more suitable for the niche it has found.
  • Byte-code compiled, run on a virtual machine. Java was certainly not the first to do this, but it took the concept and made it work on the consumer PC (well, at least after the VM matured and the hardware caught up with its requirements).
  • Integrated security-model to allow distinction between trusted and un-trusted code.
  • Large, standard library, with many non-essential features considered part of the language rather than optional add-ons (an inheritance from the scripting languages, which all do the same thing).

Java's path also includes a number of compromises. These compromises shape the language just as much as its positive features. If you do not understand why it is necessary to make compromises in design if you want to become widely used, I refer you to the classic paper, The Rise Of “Worse is Better”.

  • C-like syntax. It's not particularly well-suited to the object-centric view of the world, but it sure is familiar to anyone wanting to learn the language.
  • Primitive types. Rather than go the “everything is an object” route, Java compromised and allowed primitives in. This can be confusing, and requires sundry hacks (object wrappers, or auto-boxing in C#) to overcome, but it is a classic “Worse is Better” sacrifice of simplicity of interface in favour of simplicity of implementation. (See the sketch after this list.)
  • The lack of real closures makes the language feel a little brain-dead.
  • The class library is patchy and inconsistent, especially in the classes that were written early-on, before the naming and structural conventions of the standard library were, well, standard.
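
To illustrate the primitives compromise mentioned above, a minimal sketch as things stand in JDK 1.4, before any built-in auto-boxing: primitives aren't objects, so putting one into an object-only collection means wrapping and unwrapping it by hand.

import java.util.ArrayList;
import java.util.List;

public class Boxing {
    public static void main(String[] args) {
        int count = 42;                                       // a primitive, not an object
        List list = new ArrayList();
        list.add(new Integer(count));                         // wrap it to store it
        int unwrapped = ((Integer) list.get(0)).intValue();   // unwrap it to use it
        System.out.println(unwrapped);
    }
}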

The thing to remember about “Worse is Better” is that the essay was written as a tactical paper on how Lisp (both the language and the Lisp machine) was over-shadowed by Unix and C. And in 2003, much to Paul Graham's disappointment I am sure, Unix and C are still dominant. Worse-is-better works. If Java had taken the additional performance hit of closures and not having primitives (on top of the initial performance woes that almost killed it), or if it had broken backwards compatibility to fix the standard library, it might not have been adopted the way it has.

And anyway, while Java's particular combination of features and compromises may, in Graham's eyes, make it an unlikely starting-point for the next wave of language evolution, I can see evolution going on in front of me as we speak, with Java as the platform of choice from which to launch it.

1 In the original version of this essay, I claimed the Neanderthal was an ancestor of Homo Sapiens, something that has actually been proven unlikely.

Standard has to be one of the most over-used terms these days. Technologies compete against each other, and get into slanging matches over who is the most “standard”.

The term is meaningless.

A software or protocol standard, by definition, is simply a published specification that goes into enough detail that two people who follow the standard will end up with software that can talk to each other.

Standard != free. If you have written a standard, there is no reason you can't charge people to read it. You can decide who is allowed to implement it. You could hold patents over its basic premises. If you own the copyright or patents, you can license it however you want. So long as it's published somewhere, somehow, it's still a standard.

Standard != fair. Even if a standard isn't totally private, that doesn't mean just anyone can implement it. Some standards are published under RAND (Reasonable and Non-Discriminatory) terms. This means that anyone who wants to implement the standard must be allowed to, so long as they pay the same as everyone else. This is, of course, as fair as any other economic measure. If the RAND terms specify a $100,000 licence fee, you'll only ever see a free implementation if one of the big boys chips in. If the "fair" licence fee is per deployment, then it can never be free software.

Standard != useful. Just ask the SAMBA team. Which is more useful? The CIFS standard, or their battery of regression tests against real Windows implementations?

CSS2 is a standard, as is HTML4. There does not exist a single implementation that complies 100% with either standard. Some get close, but there are always gaps, and always bugs. A standard may be a useful target to work towards, an ideal, but practically it is only ever as good as the most compliant, widespread implementation.

Standard != safe. Whenever a standard has a single, dominant implementation, it's worthless. The de-facto standard will always follow that implementation, whatever the written standard says. To pick something non-controversial (i.e. not .NET), Perl is a completely free standard. You can download the source-code and from that produce your own, 100% compliant implementation. But what's the point? Perl is a living language, and the real standard isn't in the current implementation, it's in the next one. If you don't keep up with the bleeding edge, you may as well not bother.

Standard != Standard. Most standards are written in human languages, and are thus imperfect descriptions. There is always room for interpretation, and without some kind of reference implementation to compare against, two independent implementations of the same standard are almost always going to be totally incompatible. Which makes the reference (which itself encompasses a set of assumptions) far more standard than the standard itself.

Ned Batchelder pointed to The Memory Management Glossary, which is (as would be expected) a glossary of terms applicable to memory management.

I'm linking to it because I just know it's the sort of thing I'll want to be able to find in the future, and if I don't write it down, I'll forget.

Cutlery

  • 7:37 PM

I brought the fish and chips home, and sat on the floor eating them with my fingers. The fish was quite greasy, so for a while I wished that I had a knife and fork instead.

Then I realised, “You know what? I own a knife and fork. In fact, I own several!”

It'd just been so long since I'd used them that I'd forgotten...

Prevayler, for those of us coming in late, is a rather nifty Java library that allows you to make persistent, atomic updates to your in-memory object model. I recently mentioned the massive hype-overkill that is their website. For a certain class of program, it's a very useful persistence solution. As an exercise in PR, it's a disaster.

One claim made on PrevaylerIsNotADatabase is that unlike RDBMS's and OODBMS's, Prevayler does not have to worry about RAM limitations. Excuse me? Huh? OODBMS's tend to suffer when your indexes can't fit in main memory any more. Prevayler will go completely to the dogs the moment any of your object model is swapped out. Surely Prevayler has to worry more about RAM than anyone else?

I posted a politely-worded query to this effect on the page, and received the following reply from the project lead:

No. Prevayler assumes the Prevalent Hypothesis. Databases do not.

To save you following the link, the Prevalent Hypothesis is (direct quote) “That there is enough RAM to hold all business objects in your system.” That's right. Users of Prevayler don't have to worry about there being enough RAM because... we assume there will always be enough RAM!

The logic is stunning in its simple, useless circularity. I shall now walk home without my umbrella, because I can just assume it's not going to rain. On the way, I will cross the road without looking, because I can assume there will be a gap in the traffic.

Travelling without an umbrella, or crossing the road, are both perfectly reasonable things to do, provided you have determined that it is safe to do so. You can't just assume everything's going to be OK. Neither can you decide everything's OK today, and then not worry about it changing in the future. The weather and traffic are changeable. So are the memory needs of an application as it grows.

Or, as I put it on another page:

If the proponents of Prevayler were a little less inflammatory, this site wouldn't leave me with such a bad taste in my mouth. I came here this evening because I thought Prevayler suited the application I was writing, but this site annoys me so much that the one thing it does convince me of is that I want very little to do with the obvious zealots running the project. You don't need to be this hype-heavy to sell a good product. The corollary is left to the reader.

For the project mentioned above, I'm going to use Hibernate and some nice aggressive caches instead. To be fair, it wasn't just the attitude that made the decision for me: I also realised I need to be quite frugal with memory, and other people might consider an SQL-accessible database a feature. It may be a little slower than Prevayler, but hey. I can just assume it'll be fast enough, right? :)

Publishing Woes

  • 9:31 AM

I was thinking of writing something longer than a blog-entry. Thousand-word essays are all well and good, but they're also pretty disposable, and I feel the need to stretch my legs.

Anyway, to do this, I must pick a format. I want an open format, naturally, but I also want to be able to publish to both PDF and HTML. I want the PDF to look professional, but I don't want the HTML to look like ass. Which leaves me in a bit of a bind.

Pros and Cons

XHTML
  Pros:
  • Readily convertible into other things
  • Very familiar, won't get in the way of writing
  Cons:
  • Limited vocabulary. No footnotes (which I write a lot of), no way to reference numbered floats (e.g. tables and figures). As such, the printable version would be substandard.

Docbook
  Pros:
  • Easily convertible into other things
  • Designed for this sort of task
  Cons:
  • I would have to learn it. I've got the O'Reilly reference, and it all looks very fiddly.
  • Unfamiliarity would get in the way of writing.

TeX
  Pros:
  • Makes very good printed output
  • While I don't know TeX itself, LyX is a very capable WYSIWYM editor that I've used in the past
  Cons:
  • While utilities exist to convert TeX or PDF back to HTML, I've never seen one that didn't make the HTML output look like an afterthought.

It seems that I might be best off writing Docbook using LyX, but while I trust that tool because it's generated nice TeX documents for me before, will it do the same with XML? With LyX/TeX, I was only ever worried about the output; the source was opaque to me. The Docbook route would require me to worry a lot more about the source.

Bother.

I just paid for another six months hosting with AVS Networks, the Australian web-hosting company who provide the hardware and bandwidth that makes The Fishbowl work. (This also marks six months since I switched from being an early adopter of Radio Userland 8, and became a Movable Type latecomer.)

The quality of a web-host is measured by how little you notice they exist, and in the six months I've been running The Fishbowl on their service, I've noticed them precisely once. That's a pretty good record by any account. Good work, AVS.

Unless there is a very good reason for it not to, software should behave predictably. Predictable software is easier to test, predictable APIs are easier to interface with, predictable programs are easier to use.

One tenet of predictability is that when you make identical queries against identical sets of data, the results should also be identical. In queries that return multiple responses, this includes the order in which the data is returned.

For example (and this is what prompted the post), always put sufficient ORDER BY clauses in your SQL queries that the data is assured to come back in the same order every time. If the ordering turns out to be a performance problem, it's time to remove them. Until then, get into the habit of putting them there by default.
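
For instance, a hedged sketch in JDBC (the customer table and its columns are invented for the example): the ORDER BY makes row order part of the query's contract, so identical queries over identical data return identical results.

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;

public class CustomerQueries {
    public List loadCustomerNames(Connection connection) throws SQLException {
        List names = new ArrayList();
        Statement statement = connection.createStatement();
        try {
            // The trailing "id" column breaks ties between identical names,
            // so the ordering is total and therefore repeatable.
            ResultSet results = statement.executeQuery(
                    "SELECT name FROM customer ORDER BY name, id");
            while (results.next()) {
                names.add(results.getString("name"));
            }
        } finally {
            statement.close();
        }
        return names;
    }
}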

The problem I was having with spam filtering in OS X's Mail.app may have a happy ending after all. I noticed that the degradation of the filters had started when I turned off training-mode. After about five seconds of poking, I found the following note in the help files:

In automatic mode, Mail will move messages to the Junk mailbox so they're out of your way and you can easily screen them. You should periodically review the messages in the Junk mailbox to make sure messages you care about aren't being identified as junk. If a message is wrongly classified, click the Not Junk button. You should also periodically delete junk messages. Correcting misidentified messages, and deleting junk messages, improves Mail's ability to correctly detect junk mail.

I, of course, haven't been deleting the junk messages, I've been moving them into a storage folder. I am willing to assume this is what's been confusing the filter into wondering what is spam, and what isn't. I've reset the filter, gone back into training mode, and I'll be sure to delete the spam in the future.

(A part of me rails against the application making me behave how it expects me to behave, but I find spam far more annoying than that, so I'm willing to put up with it for now.)

Brain Stew

  • 6:05 PM

To the tune of the Mr Ed theme song...

A host is a host, from coast to coast
and everyone talks to a host that's close
unless of course, the host that's close
is busy, hung or dead.
     — WALLOPS from an unknown Undernet IRCop

I saw that on IRC one day, and thought it was pretty nifty. I didn't write it down, but somehow I can still remember it word for word. Now, I haven't hung around Undernet regularly since early 1996, which means that little verse has been bouncing around my head for seven years.

With thousands of things like that cluttering up my brain, it's no wonder I can never remember where I put my keys.

Note: Since I was linked to from OSOpinion, I should add a pointer to the follow-up, where I found the problem, and it was at least partly my fault. Mail.app now catches 90% of my spam.

One of the big selling-points for Apple's Mail application in OS X is the adaptive spam filter. Reviewers have gone wild praising how wonderful it is, and how it gets rid of 95% of their spam.

I used to think this too. It seemed to work really well on my Powerbook, but when I changed to my iMac, despite an enormous amount of training, it seems to have lost the ability to properly decide what is, or is not spam. From observation, I decided that it probably catches about a fifth of the junk, the rest ends up in my Inbox.

So I performed a test. Rather than deleting my junk-mail, I've been storing it in a folder. Today, I stripped out the header-lines proclaiming the mail as junk, and ran it back through the junk-mail filter. Note, everything in this mailbox had previously been marked as spam, and had thus contributed to whatever database the filter uses to choose which mail to flag.

Out of 1080 of these spam emails, the filter caught 268. The remaining 812 did not register as junk. I take this to mean that either:

  1. Apple's adaptive spam-filtering algorithm is ineffectual.
  2. Spammers have worked out how to evade the filter.
  3. I have an incredibly atypical mail profile.
  4. I somehow screwed up the filter with bogus data or misconfiguration.

Regardless, the difference between a filter that only catches a fifth of the spam and no filter at all is barely noticeable. It means I still have to go through and manually delete large scads of mail based on their subject and senders once or twice a day.

How is everyone else faring? I suspect my first guess is correct, and the filter is just too lenient. I can't rule out the other three options, though, without more data. (Note, this doesn't mean “suggest some product that works better”; I can do that research for myself. I'm only curious about Apple's implementation right now.)

The artist occasionally known as Urban_Dragon drew this the other day and sent it to me. It's really neat. Thanks Urb. :)

Update: I don't own copyright on this image, so it is not covered by the site's Creative Commons licenses.

Quote of the Day

  • 11:13 PM

Written, by me, in an IM to a cow orker: “It's one of my better ideas. Although I still think the one about putting saddles on sheep has merit.”

XML, WSDL, UDDI, but mostly BS.

—with apologies to the Disposable Heroes of HipHoprisy.

I'm engaged in a running battle with my father over web services. He believes in them. I don't.

As anyone who has received an e-mail from me recently can see, I was born in 1975. My consciousness, as far as the universe of computing goes, was born in the early 80's. When I was born, my father was working for IBM. Since then he's been through several computer companies, big and small. For a while, around the time MS-DOS 6 was new and interesting, he was managing director of Microsoft Australia, but that's another story.

I admit I'm young, but I've never seen a computing revolution imposed from above.

The revolution will not be load-balanced by Akamai
Across huge server farms to maintain the proper bandwidth.
The revolution will not bring you .jpgs of Bill Gates
Giving a Powerpoint presentation with Steve
Ballmer, Jeff Raikes, and Craig Mundie to demonstrate
How .NET will change your computing experience.

The revolution will not be webcast.

The Revolution Will Not Be Webcast, a Slashdot comment.

The personal computer? The Internet? The web? Trace the threads back, and in each case you find a bunch of enthusiastic nerds. You find a technological base that made people think “Wow, cool!”, and on top of that wonders were created. It doesn't work the other way, of course: you can't just pick any “Wow, cool!” technological base and bet you'll be the next Marc Andreessen.

In fact, it'd be unfair of me to list the failed attempts by industry giants to impose revolution, simply because each could be countered with an underdog, grass-roots technology failing to achieve the same thing.

Even Apple, who are ultimately responsible for the popular adoption of many of the major advances in personal computing (while they didn't invent the GUI, WYSIWYG, desktop publishing, or more recently, stylish hardware, they've introduced each to the general public), have always done so from the underdog's position.

P2P came from a bunch of people who wanted to send messages to each other, and was cemented by people copying music. Grid computing arose from a bunch of nerds who wanted to boast about how fast their computers were.

What I'm yet to see, is a revolution imposed from above. I'm yet to see the big companies that control the industry saying “this will be the next big thing”, and have them be right. I've seen them occasionally muscle their way in on the next big thing post facto, but they've never been there a priori. Revolutions always solve the problems of implementors. People trying to impose revolutions are always looking to solve their own problems first, which are generally out of step with the people they want to use their products.

Look at Java, recently described on Bruce Eckel's weblog thusly, citing Paul Prescod (Update: although it seems that the speech was actually given by Paul ‘If it's not LISP, it's not a real language’ Graham): “He called COBOL and Java neanderthal languages that have no descendents on the evolutionary tree.” Java has great libraries and right now, great momentum, but it's a dead-end. It has no future. It has nothing to evolve into. Its only likely long-lived descendant is C#, a language that, if it survives, will do so for the same reasons that Visual Basic survived so far beyond BASIC's use-by date.

This is the source of my disquiet about Web Services. Microsoft are telling me they'll be big. IBM are telling me they'll be big. Some very respected developers are enthusiastic, but most are sitting back wondering what the fuss is, and have been for three or four years now. The momentum just hasn't gathered. SOAP and XML-RPC are both great solutions to a particular range of problems, but we're just going to have to face the fact that the chances of them becoming a revolution, as promised, are slim.

Shortly, some technology is going to appear and blow my socks off. But it's more likely to appear in some experimental corner of JBoss 4.0 than it is in J2EE 1.4 or in .NET. And it's quite likely going to appear in Python or Ruby, or even coded in C by some college student or lab assistant who has thought of a really neat way to solve a real problem in the real world, and wants to share that solution with the rest of us. And we'll take it, and use it in ways the inventor never dreamed of. That's where revolutions come from.