NullPointerException

by Charles Miller on January 10, 2003

Jeremy Zawodny has discovered the NullPointerException, and laughs because he's been told that Java is “the language without pointers”. Claiming that Java is a language without pointers is, of course, rubbish. Every variable that refers to an object is a reference (i.e. a pointer) to that object, which resides somewhere on the heap.

What Java lacks is pointer arithmetic. The only operation permitted on a pointer is dereferencing (the ‘.’ operator), which allows you to perform actions on the object being pointed to. There are two reasons for this. Firstly, it frees the JVM to manage memory however it wants, rather than being a slave to the “pointer equals memory address” model. Secondly, and more importantly, it prevents a large class of memory-corrupting, security-destroying, application-crashing bugs that result from the direct manipulation of pointers.
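
To illustrate (a contrived sketch; the class and variable names are mine): two variables can point to the same heap object, and the only thing you can do through either of them is dereference it.

```java
public class References {
    public static void main(String[] args) {
        StringBuilder a = new StringBuilder("hello");
        StringBuilder b = a;   // b is a second reference to the *same* object
        b.append(", world");   // dereference b and mutate the shared object
        System.out.println(a); // prints "hello, world": a sees the change,
                               // because a and b both point at one object
    }
}
```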

Which brings us to null. Null is the value for a reference that means “this reference does not point to any object”. If you try to dereference null, you get a NullPointerException.
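
A minimal example (names mine):

```java
public class Npe {
    public static void main(String[] args) {
        String s = null;                // s does not point to any object
        System.out.println(s.length()); // dereferencing null throws
                                        // java.lang.NullPointerException here
    }
}
```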

Some languages treat null differently. Objective-C's nil is a reference to the null object, which responds to any message you throw at it with nil. This makes an unassigned reference something like a black hole: you can send anything in, but you'll get nothing back. Smalltalk gives you both worlds: nil is a singleton that passes every message to doesNotUnderstand. In development environments this brings up the debugger when it's reached, but common practice is to redefine nil in production to observe the Objective-C behaviour.

Some people prefer the Smalltalk/Obj-C approach, although the argument is based on something of a straw-man example of Java code. I prefer the Java approach, and my reasoning goes back to programming by intention. (See also: Programming by Coincidence)

One good piece of advice from the Pragmatic Programmers is: “Crash Early: a dead program normally does a lot less damage than a crippled one.” Most of the time, sending a message to a null reference is a mistake. Most of the time, you expect there to be a real object at the end of your message, and if there's no object there, it's quite likely because you forgot to put it there, or mistyped an earlier assignment. If a message to the null object just returns null, that mistake isn't going to be flagged anywhere near where the problem is. It's just going to introduce some crawling data corruption which will spread throughout your program until you have nulls all over the place, each the result of a call to one of the previously created nulls. Eventually something will break, but it'll break unpredictably, a very long way (in the code) from the original error.
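
Here's a sketch of that failure mode (all names invented for illustration; the helper method hand-simulates the “messages to null return null” behaviour by checking for null and propagating it instead of crashing): the null sneaks in quietly at one point, gets passed along, and only blows up much later, in code a long way from the original mistake.

```java
import java.util.HashMap;
import java.util.Map;

public class CreepingNull {
    public static void main(String[] args) {
        Map<String, String> emailsByUser = new HashMap<>();
        emailsByUser.put("alice", "alice@example.com");

        // The actual mistake: "bob" was never added, so get() quietly
        // returns null rather than flagging anything.
        String email = emailsByUser.get("bob");

        // ... the null gets passed around, stored, copied ...
        String banner = makeBanner(email);

        // Only here, far from the original error, does anything break.
        System.out.println(banner.toUpperCase()); // NullPointerException
    }

    static String makeBanner(String email) {
        // Politely propagates the null instead of failing fast.
        return email == null ? null : "Contact: " + email;
    }
}
```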

I am aware that many people disagree with me on this point, and that with sufficient unit tests it's much less of a problem. I'm not trying to start a flame-war; this is just my opinion, and my experience. It's also the way nulls work in Java: while we're being paid to hack in that language we're stuck with it anyway, so we may as well remind ourselves of its rationale.

So, without further ado, here are Charles' guidelines for dealing with how null works in Java:

  1. Before you start writing a method, decide whether it will ever return null. Document this decision in the @return section of the method's Javadoc.
  2. Never have a method return null unless there's a really good reason for it.
  3. If your method returns an array or a collection, there's no reason to ever return null. Return an empty array or collection instead (see the sketch after this list).
  4. If you have a situation where null is a valid value for a variable, you can make use of the Introduce Null Object refactoring to make the behaviour explicit (also sketched below).
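
A rough sketch of guidelines 1, 3 and 4 in practice (modern Java syntax for brevity; every class and method name here is invented for illustration):

```java
import java.util.Collections;
import java.util.List;

public class Guidelines {

    /**
     * Finds the orders placed by the given customer.
     *
     * @return the matching orders; never null. An empty list means the
     *         customer has placed no orders (guidelines 1 and 3).
     */
    public List<String> findOrders(String customer) {
        // No matches? Return an empty collection, never null.
        return Collections.emptyList();
    }

    /** Guideline 4: an explicit Null Object instead of a null reference. */
    interface Logger {
        void log(String message);
    }

    /** A do-nothing Logger: safe to call, and makes "no logging" explicit. */
    static final Logger NULL_LOGGER = message -> { /* deliberately ignore */ };

    void doWork(Logger logger) {
        // No null check needed: callers pass NULL_LOGGER, never null.
        logger.log("starting work");
    }
}
```

The Null Object costs a small class (or lambda) up front, but it turns “is this null?” checks at every call site into a single, explicit statement of intent.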

(Obviously, this post was written long, long before I encountered the “Option” type.)
