June 2009


22 Jun

Recently in the news, a Commodore 64 emulator with a bunch of legally licensed games was rejected from the iPhone App Store. Normally this would be a simple case of “didn’t you read the license agreement?”, except that apparently they had previously run the idea past Apple Europe and got a positive response.

I was chatting to a developer from a competing phone company at JavaOne, and he was telling me how annoying the competition found Apple's ability to turn the negatives of their platforms into positives.

The example he gave me was security. Other phone manufacturers have to go to great lengths to sandbox third-party applications, building a complex security model to defend against malware. Apple instead said ‘screw that’ and moved the security model up a level into the app store. I'm sure it's possible to get a malicious app approved, but it would involve registering as a developer and writing a potentially commercially viable app that would pass Apple's quality control, and Apple could throw the kill switch on it the moment they discovered it was malware.

This is the root of the ‘no emulators’ provision. Apple needs to control the code running on the iPhone. Emulators open the door to unapproved code. Hence emulators cannot be approved.

It is likely a C64 emulator would itself protect the iPhone from malicious apps, since emulated apps already run in the sandbox of emulated hardware. Sure, Apple wants to control the content on the phone, but given the new capabilities of iPhone 3.0, how are downloadable games different from any other kind of in-app purchasable content pack? This is what happens to rules once they are written down and removed from the reasoning behind them.

Certainly, Apple could go the extra mile and build a better application sandbox for the iPhone. But this just turns into the classic software development scheduling problem: ‘Sure, we can do that. We can do anything you want. Just tell me which three features I should cut from the next release to get it done.’

Interestingly enough, on the same trip I ran into a developer who was dipping his toe into Android development. He told me his second biggest frustration1 was the hardware. He was developing some cool graphical/physics demos, but just being sure that they would run smoothly on arbitrary Android phones, or even run without crashing, was turning out to be far too much work.

Once more, it's turning a weakness into a strength. Apple controls the iPhone hardware and the software that runs on it, against all the ‘hey, didn’t the open PC platform win?’ logic of the industry. Turns out it's the same trade-off that attracts games developers to the predictable hardware and software of consoles, despite the licensing hassles and limited hardware, over trying to tame the beast of PC gaming.

Originally a reddit comment

1 The first, apparently, being the primitive implementation of the Java Virtual Machine. These performance tips read like the sort of advice you'd give a 1990s-era Java developer, which makes sense once you discover the VM lacks a JIT compiler.
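To give a flavour of what I mean, here's a hypothetical sketch (my own illustration, not lifted from the Android documentation) of that 1990s style of advice: on a VM with no JIT, object allocation inside a per-frame loop is poison, so you update plain fields in place rather than creating garbage.

    // Hypothetical sketch of old-school, allocation-averse Java.
    // Wasteful on an interpreted, JIT-less VM: a fresh object per particle, per frame,
    // e.g.  delta = new Point(p.vx * dt, p.vy * dt);
    // The 1990s-style alternative: plain fields, mutated in place, in an indexed loop.
    public final class Particle {
        public float x, y, vx, vy;

        // Updates every particle in place; no allocation, no temporary objects.
        public static void updateAll(Particle[] particles, float dt) {
            for (int i = 0; i < particles.length; i++) {
                Particle p = particles[i];
                p.x += p.vx * dt;
                p.y += p.vy * dt;
            }
        }
    }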

After WWDC

8:34 AM

I spent most of last week at Apple’s Worldwide Developers Conference. WWDC is one of those things I do every couple of years, and the first question I always get when I mention this is ‘Why?’ As a Java developer whose only Mac coding is spare-time hobbyist playing around, what's the value to me of going to an Apple developer conference?1

The obvious answer is ‘because I learn stuff’. I can't tell you exactly what because of the blanket NDA that covers everything after the Keynote address, but I can give some idea of where I'm coming from. I've always felt that attending was valuable to my education as a general purpose nerd, but I think the reason only really became clear to me in the [Redacted] session when Bertrand Serlet described how Apple [Redacted].

I'm not going to mention any particular companies or products here, but one thing that seems to happen far too often at major keynote tech conferences is The New Direction. Some great new programming language, environment or set of APIs is unveiled as the great new way you are going to write software in the future, but it quickly becomes obvious that the people selling you this technology simply aren't using it themselves for anything important.

One of the cool things about WWDC is that, for the most part, the libraries and APIs that are unveiled to developers are the ones Apple has been using to develop the software that runs, and runs on, the next version of Mac OS X, and now feels are mature enough to make available to third-party developers. The talks are littered with examples of how a new API allowed some team to delete this much boilerplate code, or allowed them to implement one of the new features showcased in the keynote this much faster.

It makes a refreshing change. It's far more interesting for me to sit in a session about Grand Central Dispatch and learn how it has already made some application I use every day substantially more efficient than it is to learn that some new API is conceptually better and works really well in this demo, but that the vendor hasn't itself written any shipping code that makes use of it.

So one thing WWDC provides me with is a showcase of the ways in which a company that controls a suite of applications, the OS those applications run on, and the developer tools used to develop those applications solves some pretty substantial engineering problems, and how it turns those solutions into publicly consumable APIs.

Which, I think, is pretty damned useful.

1 Beyond simple fanboyism, which I must admit still plays a non-trivial part in my decision to attend, and the fact that I seem to be in San Francisco at around that time on other business anyway.