May 2003


31 May

I was waiting for the ferry, standing at Milsons Point Jetty, a wide wooden staircase leading down into the harbour. As the tide rises and falls, it consumes the steps and releases them, leaving ample leeway for the ferry to rest its gang-plank on whichever tier is just the right height above the water.

The tide was just below the second step. Wave after wave lifted the surface of the harbour and slapped it against the bottom of the wooden planks, spurting water through the gaps between them. Slapping the jetty again and again. Again and again. The wood showed signs of wear, it had been through this day after day.

Inexorable, the harbour is going nowhere, it has nothing else to do. It will beat on this jetty day after day until it is broken, until it is out of its way. Such forces have hewn valleys between mountains, this little structure of wood and steel will be beaten down if the harbour has to wait until the end of civilisation to do it. And what has civilisation been, but a blink in the water's eye, a ripple in history?

Impossible performance art.

Take a city, the size of Sydney. Take Sydney itself. Abandon it, Mary Celeste style. On the stroke of 6pm, Friday night, everybody puts down their cutlery and walks out. Walks out down the middle of the roads strewn with abandoned cars, their motors still idling, disturbing nothing as they leave. In an instant, the city as we define it, as a metropolis of human beings, dies. The last person out shuts off the power grid.

Then document the reclamation of the land. Like some Seven-Up series, we return each year to document the ravages of the vermin, the rodents and scavengers that clean up what was abandoned. The larger predators that follow them in and make the city their home (all but the biggest predator of all, of course). Trees escape the narrow allotments they were confined in, and break the asphalt with their roots. Document how the city falls when no-one remains to sustain it.

And, in timelapse, the decay of that step, as the tide slaps against it until it is no more.

Name: The Ghetto

Context:

You are in some way subject to architectural, framework or language constraints that force you to write ugly code. For example, your UI framework requires one kind of object, your persistence framework requires another, and you keep having to convert between the two.

Forces:

  • You will not always be in a position to remove the root cause of the ugliness.
  • Ugly code is usually boiler-plate, and not very interesting.
  • It is better not to have ugly, boiler-plate code obscuring code that actually does something interesting.
  • Of course, [insert-your-favourite-language-or-framework-here] is immune to this problem. Obviously this pattern does not apply to you.

Therefore

Hide your ugly code inside a Ghetto. The ghetto is a single file or class where issues of code cleanliness do not apply. It is entered by reputable developers with no small amount of trepidation, and left as quickly as possible. On the other hand, it does the job, and it keeps the bad elements away from more cultured code.

If the constraints that caused you to build a Ghetto are widespread, though, it may end up being the biggest single file in your application.
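
By way of illustration only (the class and field names below are invented, not taken from any real project), a ghetto for the kind of UI-object/persistence-object conversion described above might look something like this minimal Java sketch:

/**
 * The Ghetto: every ugly conversion between the UI framework's form
 * beans and the persistence framework's entities lives in this one
 * class, so that no reputable code elsewhere ever has to see it.
 */
public final class ConversionGhetto {

	/** Hypothetical UI-framework bean (illustration only). */
	public static class UserForm {
		private String name;
		public String getName() { return name; }
		public void setName(String name) { this.name = name; }
	}

	/** Hypothetical persistence-framework entity (illustration only). */
	public static class UserEntity {
		private String name;
		public String getName() { return name; }
		public void setName(String name) { this.name = name; }
	}

	private ConversionGhetto() {}

	public static UserEntity toEntity(UserForm form) {
		UserEntity entity = new UserEntity();
		entity.setName(form.getName());
		// ...and a field-by-field slog for everything else...
		return entity;
	}

	public static UserForm toForm(UserEntity entity) {
		UserForm form = new UserForm();
		form.setName(entity.getName());
		return form;
	}
}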

Examples of use:

None that this author is willing to admit to.

If a tree falls in an application, and nobody is around to hear it, is it logged?

I recently posted this to the XP mailing-list. It's pretty basic stuff, but I figured I'd put it here in case I needed to find it again later.

banshee858 propagated the following meme:

Suppose I am driving my car to work and I am stopped at an itersection[sic]. My location, that is the names of the two streets is data. However, the metadata to my location could be the year of my car, name of the car, how many people are in the car and/or their names.

No.

Metadata is “data about data”.

For example: "2" is data. "The number of people in the car" is metadata. It provides meaning to the number "2" and places it in some kind of context.

The confusion comes from the fact that “metadata + data == data”.

“There are 2 people in the car” is a single piece of data that is self-describing because it combines a piece of data (2) with a piece of metadata (this is the number of people in the car). Together, though, it becomes data again.

Thus, you can layer metadata miles high. A Java method is a whole heap of data and metadata bundled together in such a way that it instructs the computer to do something. Then with a tool like XDoclet, I can add some further metadata to this method to say “this method is part of the remote interface of the FooEnterpriseBean”. That's metadata again.
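
For instance, roughly what that looks like in practice (the class is invented, and the tag is only a from-memory sketch of XDoclet's javadoc-tag syntax):

public class FooEnterpriseBean {
	/**
	 * The method itself is data plus metadata, bundled so that it
	 * instructs the computer to do something. The tag below is one more
	 * layer of metadata on top: it tells the XDoclet code generator
	 * that this method belongs on the bean's remote interface.
	 *
	 * @ejb.interface-method view-type="remote"
	 */
	public int countPeopleInCar() {
		return 2;
	}
}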

Lisp exploits this layering of data and metadata by treating code as data (and vice versa). This way, at the most basic level, you can take advantage of the abstraction-building techniques of combining data with metadata, and layering the whole thing up into a program.

It's a very pure form of program-building, and helps explain why Lisp programmers are even worse than Smalltalk programmers in the “Why the Hell did my perfect language fail?” department.

XML was also designed to be metadata-rich and self-describing, although most people actually producing XML ignore this (as anyone who has had the pleasure of doing a CVS merge on the XML generated by Websphere Studio will attest). This also leads to the fact that most XML-based programming languages end up looking like a poor imitation of Lisp.

What Bridge?

  • 2:38 PM

During Danna's first trip down to Sydney, we did the obligatory walk around Circular Quay to get photographs of her in front of the various landmarks. At the Opera House, I told her to sit up on the railing so I could get a shot with the bridge behind her, to which she replied: “What bridge?”

I just stared at her speechlessly until she worked it out. It's been “What Bridge?” to me ever since.

Anyway, today after dropping into the bakery to grab lunch, I decided to walk across the bridge and take some photos. I should do it again some time when it's sunnier. Meanwhile you can find the photographs in question here.

Quality In Depth

  • 11:45 AM

Rob commented on my Degrees of ‘Works’ thus:

The next scary bit is how easy it is to think that you have something at a level 5, when really it didn't reach two....

How do you go about improving that.

As well as being a programmer, I am our company's network security guy. One of the central tenets of network security is defense in depth. Defense in depth is born of the inherent paranoia that comes from trying to secure a network, combined with a strong fear of the consequences of failure. It goes something like this:

Keeping the server patched may not catch everything. A firewall may be circumvented. Vulnerability scanning is pretty useless on its own. And above all else, I could screw up.

Therefore, security is layered such that if one layer is got past, or screwed up, the next will stop it. So I keep my servers patched, have firewalls on my trust boundaries, and scan occasionally in case I've missed something.1 How much of this I do depends on how effective each is, and how much it's going to cost. And in turn, how much you spend on any security measure depends on how much you project you'd lose if you didn't have it.

So anyway, to bring this back to the point, the answer to ‘How do we ensure we're up the good end of the “works” spectrum?’ lies in Quality in Depth. You need automated unit tests. And functional tests. And a regular automated build. And tests against that automated build. And knowledgeable testers whose job it is to find new ways to break the application. And usability testing.

Above all, you need a culture where quality is considered important. (This, once again, parallels security.) If the company doesn't value quality, you're never going to achieve it because everyone will be looking for ways to work around whatever measures you put in place, and nobody will ever be called to account for doing so. People have to want to produce quality software, and to do so there needs to be a culture where people are enthusiastic about testing, and where breaking the build or having a bug filed against you is a spur to fix the problem immediately, and make sure it doesn't happen again.

Unfortunately, software engineering is often an enterprise where quality is not the highest priority. All of the above measures cost money, and it's hard to demonstrate in a concrete fashion whether any of them actually saves the enterprise more than it costs in the long run. How do you measure the impact of a large bug, against the advantage of being quickest to market, for example?

1 Obviously I've skipped a lot of measures, such as issues of employee education and trust, here for the sake of brevity.

Note: I'm not incredibly happy with how this essay turned out, but not quite unhappy enough to can it entirely. I'm going to have to rewrite it some day, though, when my mind is more focused.

Any confusion of a higher level with a lower level is a capital offence.

  1. Compiles on my machine
  2. Compiles
  3. Starts up on my machine
  4. Starts up
  5. Works on my machine
  6. Works
  7. Works acceptably

There are two kinds of people in the world. Producers of technology, and consumers of technology.

In a perfect world, consumers of technology shouldn't have to care about it. The technology should serve its users. I don't want to have to care about my toasted sandwich maker, and I shouldn't. Toasted sandwich makers should be designed so I can consider them trivial and beneath my need to understand.

Of course, it's not a perfect world. I have to know something about my toasted sandwich maker, like not to immerse it in water, especially when it's plugged in. The ideal, however, remains that I should need to know as little as possible. Its use should be simple, efficient and intuitive. It should just work.

In order for this to happen, though, I expect the people who design toasted sandwich makers for a living to know a hell of a lot about them. This is fundamentally necessary. If it were made by some guy who'd read a pamphlet one evening about how to build an electrical appliance, it'd be just as likely to blow up in my face as do anything useful.

People who produce technology should respect technology.

Respect means taking the time to understand it.

I think sometimes, computer programmers lose track of this fact.

To use a term that, thankfully, seems to have gone out of its short fashion, we are in the business of providing ‘solutions’. In order to solve somebody's problem, we must not only understand the problem, we must also have a good knowledge of the possible solutions, and the prior art that has gone into solving that sort of problem in the past. And where we do not have that knowledge (Hell, I've only been a professional programmer for five years, the gaps in my knowledge are staggering), we must recognise our lack, and either work to fill it, or seek the advice of those who know more.

It astounds me, for example, how many people try to write web applications without knowing much HTTP. I don't see how it's possible. Sure, we have a bunch of abstractions that sit on top of our HTTP servers, but we all know that abstractions leak, and when they start leaking, you're lost if you don't understand the plumbing beneath them. A designer of applications that are transported over HTTP should have an intimate knowledge of HTTP, and probably a pretty good idea how TCP/IP works, for good measure.

What scares me even more is people who try to simplify, or abstract away some protocol or technology without first demonstrating an understanding of what they're trying to simplify. If you don't really understand all that is there, you're certainly not qualified to throw any of it away.

Otherwise, how do you know your toaster isn't going to blow up?

Mortal Hacking

  • 12:25 PM

Sometimes I think it would be really cool to have my working day commentated by the ghostly voice from Mortal Kombat. Whenever I finish a difficult feature, it would be particularly gratifying to hear Charles Wins. And if it was a particularly cool hack: Charles Wins. Flawless Victory.

Of course, the down-side would be when I was having a bad day.

Java Wins. Fatality!

Oh well, there goes that idea.

Round Two. Fight!

Do not, do not, do not start a public Open Source project unless you already have:

  1. Working code that does a useful and/or interesting subset of the project's goal
  2. An automated build
  3. Sufficient instructions to get the program running

This rant was brought to you by one too many Google searches that ended up on a Sourceforge or Savannah project that was started in 2001, updated for a month and then abandoned.

This rant was brought to you by one too many really interesting-looking projects that thought it was a good idea to have a design phase that was open to public comment, and as a result never produced anything.

Getting people to contribute to Open Source is hard. If you do not have sufficient motivation1 to take the first, important steps yourself, nobody else is going to. At most, you'll attract a bunch of other people who, like you, want to talk about the project instead of coding it.

A bit harsh, perhaps, but true.

1 Edit: This originally read ‘skill’, but was changed after publication to better reflect what I meant to say in the first place.

It's an interesting place, the web.

I was looking over my referer logs the other day, and I saw a new link to my old article about what's wrong with Instant Messaging. Following it, I found somebody's class assignment: the author had linked to my article, but I thought her link text misrepresented what I was actually saying. So I sent off a quick email to that effect, clarifying what I had really meant.

This, of course, took the poor student somewhat by surprise. Not least because I tend to write rather formally to strangers, and it may have (unintentionally) sounded like I was annoyed. But anyway, you don't really expect, when you reference someone's writings in your obscure school assignment, to have the author write back and correct you.

Thanks to the referer header, linking on the web is not a passive reference, but an invitation to converse. A site may ignore the invitation, but it's always there, fed by each browser that follows the link. It's one of the things that makes the web a social arena instead of just a publishing medium.

Sufficient linking, as David Weinberger observed in Small Pieces, Loosely Joined, creates spontaneous communities, as people with related interests engage in more extended conversations.

I like it that way. If publishing on the web were merely throwing your links into the void, it wouldn't be nearly as much fun.

In The Matrix: Reloaded Agent Smith attaches himself to and invades other entities in The Matrix, who then are forced to become him. This allows him to replicate over and over again, and thrive.

It only occurred to me this morning.

Agent Smith has re-licensed himself under the GPL.

Suddenly it all makes sense.

I'm going to see The Matrix Reloaded in about two and a half hours. Just now, I was reading JWZ's blog and discovered that it contained evidence of the first ever technically accurate computer-hacking scene in a movie.

I kid you not.

In order to break into a computer, a movie character scans it with nmap, and then runs a known SSH1 root exploit on it. All on a plain green-screen terminal without fancy graphics.

Colour me amazed.

Of course, then I thought: “The only place you get technically accurate use of computers in a movie, is when according to the story, the world it's happening in isn't real.” The mind boggles.

What is a closure?

A closure is an anonymous function that ‘closes’ over its surrounding scope. Thus when the function defined by the closure is executed, it has access to all the local variables that were in scope when it was created.

Closures originated in Lisp, and have made appearances in a number of languages since, but for the purposes of this post I shall use Ruby for my examples. Ruby was designed to use closures pervasively, and a number of its other design decisions reflect this. Note that the example I use here is a little artificial (Ruby's IO already has a grep method; it is just used in a different way).

What are Closures Useful For?

The two most common examples of uses for closures lie in iterating over lists, and in encapsulating operations that must be set up, and then cleaned up after. Here's an example that does both:

IO.foreach("foo.txt") do |line| 
	if (line =~ /total: (\d+)/)
		puts $1;
	end
end

This code searches a file, and prints the matches. The IO.foreach takes care of opening the file, delivering each line to our closure, then closing the file when we're done.

We can do better than this, though. If we find that searching a file for regular expression matches and then operating on the results is something we do often, we can create a new method that encapsulates that more detailed operation:

class File
	def File.grep(fileName, pattern)
		IO.foreach(fileName) do |line|
			if md = pattern.match(line)
				yield md;
			end
		end
	end
end
	
File.grep("foo.txt", /total: (\d+)/) { |md| puts md[1]; }

Here, md contains a MatchData object that describes the regular expression match, and the yield statement passes that match data into the closure. Now our file search takes a single line.

The advantage of closures is that they allow you to add new control structures to your language. Java 1.5 is looking to introduce a foreach construct to iterate over lists. With closures, such language constructs become unnecessary (Ruby has only very primitive native looping constructs) because you can define your own. Similarly, closures allow you to dispense with boiler-plate wrapping code such as we see everywhere with file or database manipulation in Java.

Closures are also great for implementing the Command pattern. Command implementations that use objects must explicitly have their state set up for them, closures can just close over whatever state is around when they are created.
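
For contrast, here is roughly what the object-based version of that looks like in Java (the class names are invented for illustration): every piece of state the command needs has to be handed to it explicitly, where a closure would simply capture it from the surrounding scope.

interface Command {
	void execute();
}

/**
 * An object-based Command: its state must be set up for it explicitly,
 * via the constructor, before it can be executed.
 */
class SaveDocumentCommand implements Command {
	private final String documentName;
	private final String userName;

	SaveDocumentCommand(String documentName, String userName) {
		this.documentName = documentName;
		this.userName = userName;
	}

	public void execute() {
		System.out.println(userName + " saved " + documentName);
	}
}

class CommandDemo {
	public static void main(String[] args) {
		new SaveDocumentCommand("notes.txt", "charles").execute();
	}
}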

Blocks in Java

You can approximate the functionality of closures in Java using anonymous inner classes. There are two problems with this, though. Firstly, anonymous inner classes are unnecessarily verbose. Closures are supposed to be a short-cut, and anonymous inner-classes are anything but. This can be got around by adding some syntactic sugar to the language of course, but the use of classes is still quite heavy-weight. (Each anonymous inner class is an additional compilation unit, for example)

The bigger problem is that anonymous inner classes in Java don't really close over their surrounding scope—they cheat. The best way to demonstrate this is by example:

i = 1;
1.upto(100) { |num| i *= num; }
puts i;

The above code prints out the factorial of 100. As you can see, the variable i is modified inside the closure, but because the closure shares the surrounding scope, the change is still visible afterwards. This is unlike the behaviour of a regular function call, where variable values in surrounding scopes are not changed.

If you tried to write the equivalent code in Java, it wouldn't compile. Java doesn't really close over the surrounding scope, it copies it. To hide this implementation detail, any variable referenced inside the inner class must be declared final outside it: which is fine if your inner class is manipulating a mutable reference type like a list, but breaks if you want to work with immutable reference types, or value types. (i.e. Strings and numbers)
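
To make the contrast concrete, here is a rough Java equivalent of the factorial example (the IntBlock interface and upto method are stand-ins I've invented for Ruby's block and 1.upto). Because Java copies the captured variable rather than sharing the scope, the variable must be final, and to accumulate a result we have to smuggle it inside a final reference to a mutable container:

public class InnerClassFactorial {

	interface IntBlock {
		void call(int num);
	}

	// A stand-in for Ruby's 1.upto(n) { |num| ... }
	static void upto(int from, int to, IntBlock block) {
		for (int n = from; n <= to; n++) {
			block.call(n);
		}
	}

	public static void main(String[] args) {
		// The captured variable must be final, so the running total is
		// hidden inside a final one-element array: exactly the sort of
		// boiler-plate the Ruby version doesn't need. (20! rather than
		// 100!, to stay within the range of a long.)
		final long[] result = { 1 };
		upto(1, 20, new IntBlock() {
			public void call(int num) {
				result[0] *= num;
			}
		});
		System.out.println(result[0]);
	}
}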

This is why you're unlikely to see closures in Java, sadly. To implement them properly would involve making changes to some pretty fundamental parts of the JVM. It's a pity, though, because they're damn useful.

Random Links:

This post on Hyatt's blog, and the ensuing discussion, pretty much sums up the problem of trying to make a cross-platform application look native. A few quotes:

hyatt:

It's kind of amazing to think that, because of Internet Explorer's dominance, the very way widgets have to be designed in order to avoid bad page layout must necessarily match the way widgets are designed on Windows.

jwz:

This argument is interminable, and we've had it since 1994, back when there were actually three platforms (rather than today when there's one platform, plus 1/10th of a platform, and oh, over here there's 1/50th of a platform too.) When someone decides to buy a Brand_X computer, part of their decision in buying it is how it works: people don't buy Windows boxes because they want Macs, nor do people buy Macs because they want Windows.

There's a lot to think about in this for Swing programmers, too. If your application has been laid out on a Windows box, chances are it's going to look wrong on OS X. At worst, you're going to find a lot of widgets fighting for space, and cropping text because the OS X widgets are just that much bigger and rounder than their Windows counterparts.

Even if the layout isn't breaking, the application still isn't going to look comfortable on the platform, because the Apple Human Interface Guidelines are very strict about how applications should be laid out. Small deviations like that make an application look jarring, and feel uncomfortable to use. They are one of the reasons people will always go for a native application over a cross-platform one, or even a ported one (since ports inherit the assumptions of their parent platform).

It's not a problem on Linux, of course, because X users are used to every application looking different. :)

Since joining the Church of Macintosh, I have found peace and fulfilment. Finally, after years in the computing wilderness, I finally feel I have found somewhere that I belong. Don't you want to belong?

Comparing Apple to the Church of Scientology
  • Scientology: Banned in Germany. Apple: Not banned in Germany.
  • Scientology: Believe that our destinies have been shaped by an ancient alien warlord trapped in a volcano. Apple: Believe that PowerPC chips can compete with Intel on price and performance.
  • Scientology: Believe they can measure your well-being through the electrical conductivity of your skin. Apple: Believe they can alter your well-being based on what colour your computer is.
  • Scientology: Will take large amounts of your money and brainwash you. Apple: Will take large amounts of your money and brainwash you... but at least you get to keep the computer.
  • Scientology: Each stage you reach, you will find yourself needing more expensive treatment. Apple: Each product you buy, you will find yourself wanting more and more expensive hardware.
  • Scientology: Defectors from the church are considered non-people, and may be targeted for abuse. Apple: Defectors from the platform are considered mad, and will be flamed.
  • Scientology: Celebrity endorsers include Tom Cruise and John Travolta. Apple: Celebrity endorsers include Trent Reznor and De La Soul.
  • Scientology: Followers talk endlessly about how the church's treatments saved their lives. Apple: Followers talk endlessly about how they hooked up to their friend's iPod over wi-fi.
  • Scientology: Targets people who seem lost and confused, and offers them a personality test. Apple: Targets people who seem lost and confused, and makes them star in "Switch" ads.

Quote of the Day

  • 12:50 PM

Over the corporate IM...

[Charles] That's more cynical than I would expect from you.
[Alan] This week is Cynics Week
[Charles] Originally, it was going to be apathy week, but nobody cared.
[Alan] Well, we cared, we just didn't tell anyone. yet.
[Charles] No, that's procrastination week. That's next month.
[Alan] Can't we have it in July instead?
[Charles] Yeah why not.

I really don't like living in a world where this sort of shit happens.

“...it's funny that it's now become possible to use Linux and still feel like you're selling out, but there you go.” —Kief, on being forced to support DeadRat.

It's a constant source of amazement to me that vendors pushing their server-side products on Linux don't support Debian. Debian is by far the distribution most favoured by the people who actually have to administer the boxes. (Alan tells me that Gentoo is pretty good too, but I've not used it yet.) On the other hand, I've never had a single sysadmin tell me they prefer RedHat. It's always, like Kief's situation, been forced upon them by circumstances.

If I were porting some server to Linux, I'd target Debian first, and then look at other distributions. Then again, there are pretty good reasons important decisions are not left to me.

Why can't people understand
I've got a short attention span
Short attention spaaa-aan

    —Short Attention Span by the Fizzy Bangers1

After eight years of being an Internet nerd, I've finally completely lost the ability to concentrate on one thing at a time. I'm so used to having the web browser open in one window, IRC in another, an open IDE, the TV on (sometimes with the sound muted so I can listen to the radio or mp3s). I really need to rediscover my ability to focus, to turn off everything else and go back to a single thing for a significant period of time.

I'm fine when I'm forced. When I'm in a cinema or the theatre, there are no distractions and I can concentrate. But put me in an environment where I can multi-task, and that's exactly what I'll be doing.

A cow-orker lent me a DVD. Another cow-orker lent me a small pile of books. I'm having trouble with both of them, because they both require me to do just that: drop everything and just read, or just watch TV.

I need to fix this. It's bad.

1 Yes, that's the whole song. It's about ten seconds long. It came on a CD called Short Music for Short People which is a really fun collection of 101 30-seconds-or-less punk rock songs.

As an interesting aside, if you remember Apple's original Rip, Mix, Burn adverts, they featured the (conveniently advertisement-length) track by The Ataris, The Radio Still Sucks, but with lyrics significantly toned-down from the original.

Update: It seems that accidentally typing <?p> will confuse MT's parser, but it won't tell you that the page didn't render correctly when it's actually rendering the thing. Fixed now.

Static typing has taken a lot of stick lately, so I feel the irresistible urge to put forth my views on the subject. That's what blogging's about, after all, inflicting your opinions on the world. (Unless you're a sixteen-year-old goth girl, in which case you should be posting really bad angsty poetry to your Livejournal instead.)

Firstly, I agree with Bruce Eckel. Static typing is a form of testing. As a form of testing, it's particularly restrictive on the programmer, and forces the programmer to test all sorts of things they probably shouldn't have to: remembering the unit testing adage that you should only test those things that could possibly break.

There are difficulties, however, with going from that premise to the conclusion that testing can give you the same benefits as strong typing, but without the disadvantages. The difficulties lie in the difference between testing-through-static-typing, and testing-through-writing-tests.

I'd like to put my theories into practice. While my pre-Java background was in scripting languages, I haven't really put together a significant application in a dynamically-typed language. I've done enough programming to understand and experience many of the benefits of dynamic typing in making the code smaller and more flexible, but I'm not really speaking from a wealth of experience with the alternatives here. I just have my doubts that they're as fantastic as I've heard.

One place where dynamic typing has truly 0wned me, however, has been the Cocoa framework for OS X. Cocoa has shown me that programming a GUI doesn't have to be an exercise in banging my head against a brick wall; it can actually be fun. A lot of the flexibility of Cocoa comes from the dynamic nature of Objective-C. If you've got a Mac and you haven't learned Cocoa yet, set aside a week to go through a tutorial or two. You won't be disappointed.

Anyway, back to my defense of static typing. You knew it was coming.

Static typing is declarative. Testing is procedural. Thus, when your program fails through types, the exact location of the error can be immediately ascertained: it's the point at which your type declaration becomes untrue. When your program fails a regular test, you only find the point at which the testing procedure detects the resulting misbehaviour.

For Java programmers, think of chasing down NullPointerExceptions. Usually it's quite easy, but because the error (dereferencing null) can happen a long way from the actual bug (assigning null in the first place), it can take a lot of time to track down the cause.
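
A trivial illustration of that distance (the names are made up, and a real program would spread this across many more classes): the bug is the method that returns null, but the stack trace points at the dereference somewhere else entirely.

public class NullChase {

	static String lookupNickname(String user) {
		// The actual bug: returning null instead of a sensible default.
		return null;
	}

	static void greet(String user) {
		String nickname = lookupNickname(user);
		// The NullPointerException is thrown here, a long way from the
		// bug above, and the stack trace only points at this line.
		System.out.println("Hello, " + nickname.toUpperCase());
	}

	public static void main(String[] args) {
		greet("charles");
	}
}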

This isn't really so much of a problem in the normal course of writing an application. Assigning null to a variable is far more common than assigning the wrong type entirely. To paraphrase someone whose identity I have forgotten, who was arguing against the usefulness of generics in Java, “I find I almost never put an orange in a bag of apples anyway”. Where it becomes important is during refactoring. When changing method signatures or moving behaviour between classes, it's very nice to have immediate feedback on exactly which lines of the program are affected by the changes.

The other reason I tend to steer towards statically typed languages for my own projects puts me in agreement with Carlos. In a dynamically typed program, it's easy for a human to tell what type something is likely to be, but there is no way for a machine to say for sure what type something is. Thus searches, code-assistance and refactoring tools for dynamically typed languages must, at some point, guess. These are all tools I rely on frequently, and want to work with as little of my interference as possible.

I'm aware the Refactoring Browser originated in Smalltalk. What I fail to understand is how truly automatic refactoring is possible when types are indeterminate.

Being able to discover precisely where (or if) a type or method is referenced is invaluable. A text-search can help, but you must sift through the false-positives yourself. This requires a certain familiarity with the code, and as the code-base gets bigger (or your familiarity with it wanes for some other reason), that sifting takes longer and longer.

I work on my own, personal projects maybe one or two days a week. I tend to have four or five hanging around, so some will go months without me looking at them. When I return to a project after that amount of time, the information that the IDE can glean from the type system is invaluable.

On the other hand, my couple of Ruby projects languish far longer, because it ends up being a lot harder for me to pick them up after a long absence, once I have forgotten all the type information that is implicit in the program.

The IDE's understanding of types can also cause it to save me at least as many keystrokes as the type information causes me to endure. For example, when auto-completing a method, Eclipse will check the local scope for objects of the same type as the arguments, and include them in the completion. Similar guesswork is performed when using macros to generate loops. IDEA has similar features. It will even recommend simple variable names for me based on their type.

It's also amazing how quickly you can remember the workings of a half-forgotten API with a sketchy glance at the documentation and an IDE with type-informative code-completion.

As far as I'm concerned, the more things I can make the IDE remember or do for me, the more my mind is able to concentrate on more important things that it can't do, like writing the program itself.

Quote of the day

  • 12:45 PM

“I say we take off and nuke the entire site from orbit. It's the only way to be sure.” —Ripley, in Aliens.

To everyone involved in writing XML-based programming languages, and by this I mean those languages where XML is the primary syntax, not useful things like ECMAScript that just happen to be applied to XML...

Please stop.

XML, when you get down to it, is a really verbose way to represent Lisp S-Expressions. XML's expressiveness is well suited to marking up text, which is what it was designed to do. Any programming language based on XML, however, will just end up looking like a really clumsy attempt to rewrite bits of Lisp with angle-brackets. The world really doesn't need any more of these.

In contrast:

In one of my personal, never-see-the-light-of-day projects, one of the things I had been delaying writing was the configuration file: I just had a TestLauncher class that got all the right objects together and launched the application with some hard-coded defaults.

Last night, I decided that enough was enough, and started throwing together an example XML configuration file that I could use as a basis to building a real configuration framework for the application. It's a reflex, you see. We're trained to think XML whenever we have to put data anywhere. When I was halfway through, I started thinking about all the dependencies I was about to introduce: the chain of “Commons-Digester depends on Commons-BeanUtils” and so on. I'd much rather have something more lightweight.

I'd already been thinking about the connection between XML and Lisp. I was also realising that in converting my ‘launcher’ into a configuration file, I wasn't writing something that was configuring my application so much as something that was programming it. And you can get a self-contained, standards-compliant Scheme implementation for Java in a 200k download. So as a thought experiment, I re-wrote the configuration file in pseudo-Lisp.

Not only was it significantly less bloated and more readable, it opened up a number of doors that just weren't conveniently open before. SISC code can call out to Java objects, which meant that rather than having a configuration file and a Java framework to interpret it, turn it into objects and then use the objects to configure the application, the configuration file could configure the application directly.

Eventually, I compromised. The configuration system would consist of two Scheme files: the configuration file itself, and a library file that is parsed before it, mapping expressions in the configuration to calls on the Java objects themselves (plus a few helper functions along the way). That way, adding extra configuration options to the system simply means exposing them as Java methods, and then writing the glue code in Scheme to allow the config file to set them. I suspect (without evidence so far) this will be faster than the other way by an order of magnitude.

The two-file approach means that the configuration file itself will not have to contain anything that looks like a program, just a series of assertions like...

; set up the user database
(add-module UserDatabase "com.example.HibernateUserDB")
(UserDatabase config-file "hibernate.cfg")

; initialise the command framework
(command-search-order (
        "org.pastiche.commands"
        "org.pastiche.util.commands"))
(add-command "LoginCommand")
(add-command "SetPreferencesCommand")

No, I'm aware this isn't a new idea. I'm just surprised it's not done more often.

The dynamic typing of the Lisp system helps as well: the configuration file is flexibly glued to the program instead of nailed on, which means less work maintaining it as more modules are added, whatever their types are.

Note, I haven't tried this yet, I'm just getting the idea down in ones and zeros before attempting it. If it turns out to be a total mess, I'll be sure to blog my failure.

Link stolen from JWZ

Schemix is a Scheme system, implemented as a patch to the Linux kernel. It aims to attain R5RS compliance while remaining small, fast and easy to understand.

The intended use of Schemix is for exploration of the Linux kernel and for rapid, interactive prototyping of Linux drivers and other new kernel features. To achieve this, Schemix will attempt to make a large subset of the kernel functionality available to Scheme programs. Interactivity is via a character device, /dev/schemix which presents a REPL (Read, Eval, Print Loop) to anyone having access to the device.

 $ echo "(display (+ 1 2 3))" > /dev/schemix
 $ cat /dev/schemix
 6
 $ cat > /dev/schemix
 (define foo (kernel-lambda (char*) printk))
 (foo "Blah, blah, blah")
 ^D
 $ dmesg | tail -n 1
 Blah, blah, blah

I'll probably never use it, but that just has such a high nerd-cool factor. I also like the quote from lower down on the page: “...prototyping is basically the act of making lots of mistakes until, eventually, you make the right mistake and call it a finished product.”

X-Men 2 had a good plot, strong villains, good F/X and action sequences (both unfortunately overshadowed by Matrix Reloaded anticipation), and good performances by all the lead characters. What it desperately needed was a decisive, ruthless script editor.

I only saw the first X-Men movie once, back when it was first released in cinemas, but as far as I remember it was largely the story of Wolverine and Rogue. The roles of the other characters were reduced mostly down to their points of interaction with those two. The main plot was then built around that focus.

The sequel doesn't have any focus. The movie has a strong main plot, but it is drowned in a plethora of sub-plots. Too many characters battle for our attention, which leaves far too little time to focus on any one of them in sufficient detail to engage us. At one point, we have to follow four different groups of good guys and two different groups of bad guys at the same time.

What this means is that there is an awful lot in the movie that seems superfluous, or that is introduced and then never developed in any significant way. Even the important things contend with each other to the extent that they are all cut far too short: like Wolverine's climactic battle, which seemed rushed to its conclusion, or Rogue's last-minute rescue, which obviously ended up mostly on the cutting-room floor.

The script-writers should have chosen two (or at most three) sub-plots, and then jettisoned the rest. Maybe this would have upset those looking for the movie to focus on their favourite character from the comic-book, but what satisfaction is there in those morsel-size doses of half-digested sub-plot?

Still, the movie is entertaining, and has a pair of really cool set-pieces in the first act. It just loses its way from the second act onwards.

Three out of five. See it on cheap-night.