This is the Director’s Commentary Track for a Twitter hashtag reply.
As the 80s progressed, the program my brother and I would type into Commodore 64s in department stores when nobody was looking got longer and more complicated.
I vaguely remember the ultimate version asked you for your name, then asked you if you were an idiot. If you entered "Y" it would print out "(name) is an idiot!!!" a few hundred times. If you said "N" it would print out "(name) is a liar!!!"
Then it would clear the screen and return to the first prompt. GOTO 10.
(2) 1997: Miranda
After dropping out of Law and spending a year “finding myself”, I decided to take a shot at studying Computer Science. My first textbook was Bird and Wadler’s Introduction to Functional Programming, taught with one of Haskell’s parent languages, Miranda.
The charitable side of me says that the professors at my University were really passionate about spreading the gospel of FP, and were just a decade or two ahead of their time. The uncharitable side says that they thought CS100 was over-subscribed and wanted to scare as many students as they could off in the first six months.
(3) 1997: Pascal
First Year Comp. Sci. was one semester of Miranda, in which we learned how to write five-line programs to express mathematical formulae and comprehend lists, followed by one semester of Pascal in which we wrote programs that transformed input into output and drew things on the screen. This might explain my next decade-and-a-bit of assuming Functional Programming wasn’t useful for “real world” things.
The final assignment of the year was developing a game of Othello/Reversi. As usual, I left it to the last minute and wrote the whole thing at 3am in the Mac Lab the night before it was due. Decidedly not as usual, I discovered the next day that I had got the submission date wrong, and spent the next week fixing bugs, and adding silly animations and Easter Eggs.
Some time around here I also learned Just Enough C, but never used it for any more than reading other people’s code, and Just Enough Shell Scripting, which is too boring to make the list.
(4) 1998: PHP
I taught myself PHP for the same reason everyone else did. I wanted a webpage where I listed all the CDs I owned and rated them out of five. OK, maybe not exactly the same reason everyone else did, but close enough. Still, it convinced someone I was overqualified for the ISP phone support job I was applying for, and thus naive and exploitable, so it must have been good for something.
Some time around here I also learned Just Enough SQL… OK, I knew simple selects, deletes and inserts and had a vague idea how joins worked, but that pretty much describes where I am almost twenty years later, so whatever.
(5) 1999: Perl
One of my greatest Perl creations was a set of scripts for managing an Internet cafe. One half of the program ran as an (unprivileged) CGI script that gave the person at the front of the cafe a view of who had been on which computer for how long, the other half ran as root, listening at a Unix Domain Socket for commands to add or remove firewall rules to take those computers on and offline. I was pretty proud of it at the time.
I wrote this service after I had already vowed to leave my job, so the code was utterly unmaintainable; partly because I was using it as an excuse to learn Object Oriented Perl5, and partly because I was doing things like naming every method after the song I was listening to at the time (oh_my_god_thats_some_funky_stats), or writing functions with five mutable local variables called $binky, $banky, $bunky, $benky, and $bonky.
The joke was on me. A month before I left the job, I had to rewrite all of the provisioning and reporting functionality because they changed their business model.
(6) 1999: VBScript
One day, my boss pulled me into his office and said “Do you know Microsoft Active Server Pages?”
“No.”
“Can you know them by Wednesday?”
“I guess I can try.”
I was dispatched to the local bookshop to pick up the most plausible-looking “Learn ASP in 24 hours” book, and by the time we met with the client, I could bullshit well enough to get the job.
The job was to rescue a website whose previous developers were quite possibly that section of the infinite monkey dimension that didn’t get to produce Hamlet. The bar was luckily set pretty low, and whatever I managed to develop was good enough that I was never called out for making the whole thing up as I went.
(7) 1999: Java
My father told me there was an opening for me in Sydney at his company, but in order to convince his co-founders that I knew my stuff I would probably have to know Java. So that was next on my list. I wrote a massively over-engineered credit-card payment system that I think might at some point maybe have gone into production. That was apparently enough to talk my way through the interview.
I boot my Windows box (less than 24 hours since I last sent it to sleep) because I want to play a game. For the next twenty minutes, I am watching a progress bar tracking an operating system update.
I log into Bitbucket because I want to create a repository for the code I've been working on. I can’t log in because they have migrated me to their central ID platform, and I need to recover a long-forgotten account and merge it.
I start Steam, but I have to wait for the client to upgrade.
I understand that it is important to stay up to date with security patches and bug-fixes. I understand that sometimes, new identity platforms happen. But I know that all of these changes could be scheduled in a way more convenient to me; you just chose not to.
The first priority of any software should be to do what the user is asking it to do. There are very few OS upgrades that can't schedule themselves in the background while I play Overwatch. Steam upgrades don’t change the games I have already downloaded. Your identity management migration can bug me with popups for a while before forcing me to merge my account.
Don’t tell me that your time is worth more than mine.
Douglas Adams, 1978
Ford: They make a big thing of the ship’s cybernetics. “A new generation of Sirius Cybernetics Corporation robots and computers, with the new GPP feature.”
Arthur: GPP? What’s that?
Ford: Er… It says Genuine People Personalities.
Arthur: Sounds ghastly.
F/X: DOOR HUMS OPEN WITH A SORT OF OPTIMISTIC SOUND.
Marvin: It is.
Arthur: W… What?
Marvin: Ghastly. It all is — absolutely ghastly. Just don’t even talk about it. Look at this door. “All the doors on this spacecraft have a cheerful and sunny disposition. It is their pleasure to open for you, and their satisfaction to close again with the knowledge of a job well done!”
F/X: DOOR CLOSES WITH A SATISFIED SIGH.
Marvin: Hateful, isn't it?
Everybody knows Facebook is creepy. Nonetheless, all this time it never occurred to me to delete my account until it began doing this: Trying to act like a person. Pretending we are on a first-name basis. — Leigh Alexander, The New Intimacy Economy
To get “software with a personality” right, the personality has to be recognisably human. It needs to be the people who made the software shining through their creation, not painting themselves on top of it.
The bigger and more impersonal the software, the more subversive the personality needs to be. It needs to be something a manager would have said no to if they’d known about it before it shipped, not something they figured might make the product play better to Millennials. A spreadsheet that asks if you’ve had a nice day feels like a creepy marketing ploy. A flight simulator Easter Egg is a human being trying to reach you from behind the code.
Happy birthday to me
Happy birthday to me
I've been beating this joke to death since 2002
Fuck fuck fuck fuck fuck fuck.
Leia suspects there's a tracking device on the Millennium Falcon and yet they fly straight to Yavin 4 anyway... — @msharp
Many things in Star Wars don’t make sense, but this one turns out to be pretty straightforward. Leia is badass.
Senator/Princess Leia Organa doesn’t know she is going to be rescued from captivity on the Death Star moments before she is scheduled to be executed, but when it happens, and when the ship she is rescued in is allowed to get away suspiciously easily, she thinks on her feet.
She knows the moons of Yavin have no sentient inhabitants outside the rebel base, and after seeing Alderaan blown up she wants the Death Star as far away from civilian populations as she can get it.
OK, there were two primitive pre-spaceflight species on Yavin 13, but the Empire was unlikely to pay them any notice.
She knows the clock is ticking on the value of the plans she stashed away in R2D2. The Empire knows exactly what was stolen, and it is only a matter of time before they do exactly the same analysis that the Rebellion wants to do, leaving the Rebellion with the embarrassing prospect of showing up to bomb an exhaust port that was already closed for emergency maintenance.
She has seen that Grand Moff Tarkin is drunk on the power trip he gets from being in charge of a moon-sized death machine. She knows that given the choice between sending a couple of Star Destroyers to take out the Alliance base, and blowing them up personally with his planet-killer, he’s going to choose the Big Round Fucking Laser.
But she knows she doesn’t want to give them too much time to think and maybe come up with a proportionate, sensible response.
So Leia figures either they’ll find something useful in the Death Star plans or they won’t. If they don’t, they’ve got a pretty hairy evacuation in their future and they’ll need to find a new base, but they’ve at least got advance warning the Empire is on its way. If they find something though, this is their best and possibly only chance to get the Death Star to come to where they are tactically strongest, without the rest of the Imperial fleet getting in the way.
And she thinks this through in the time it takes to tell Han what course to plot. Fuck yeah Leia.
Yesterday this account of a serious vulnerability in most major Java application servers crossed my Twitter feed a few times. The description, while thorough, is written in security researcher, so since it’s an important thing for developers to understand, I thought I would rewrite the important bits in developer.
What is the immediate bug?
A custom deserialization method in Apache commons-collections contains reflection logic that can be manipulated to execute arbitrary code. Because of the way Java serialization works, this means that any application that accepts untrusted data to deserialize, and that has commons-collections in its classpath, can be exploited to run arbitrary code.
The immediate fix is to patch commons-collections so that it does not contain the exploitable code, a process made more difficult by just how many different libraries and applications use how many different versions of commons.
The immediate fix is also utterly insufficient. It’s like finding your first XSS bug in a program that has never cared about XSS before, patching it, and then thinking “Phew, I’m safe.”
So what is the real problem?
The problem, described in the Marshalling Pickles talk where the exploit was first presented, is that arbitrary object deserialization (or marshalling, or un-pickling, whatever your language calls it) is inherently unsafe, and should never be performed on untrusted data.
This is in no way unique to Java. Any language that allows the “un-pickling” of arbitrary object types can fall victim to this class of vulnerability. For example, the same issue with YAML was used as a vector to exploit Ruby on Rails.
The way this kind of serialization works, the format itself describes the objects it contains, and the raw data that needs to be pushed into those objects. Because objects are instantiated at read time, before the surrounding program gets a chance to verify these are actually the objects it is looking for, a stream of serialized objects can cause the environment to load any object that is serializable, and populate it with any data that is valid for that object.
This means that if there is any object reachable from your runtime that declares itself serializable and could be fooled into doing something bad by malicious data, then it can be exploited through deserialization. This is a mind-bogglingly enormous amount of potentially vulnerable and mostly un-audited code.
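To make the read-time execution concrete, here is a sketch in Python, whose stdlib pickle module has exactly the same property. The `Gadget` class is a hypothetical stand-in; a real exploit would return something like `os.system` and a shell command instead of a harmless callable.

```python
import pickle

class Gadget:
    """Stand-in for any serializable class with abusable behaviour."""
    def __reduce__(self):
        # __reduce__ tells pickle how to rebuild this object:
        # "call this function with these arguments". An attacker who
        # controls the byte stream controls both, so a real payload
        # would return (os.system, ("...",)) here.
        return (sorted, ([3, 1, 2],))

payload = pickle.dumps(Gadget())

# The callable runs during loading, before any application-level
# validation could possibly happen.
result = pickle.loads(payload)
print(result)  # [1, 2, 3] -- not a Gadget at all
```

Note that the caller never asked for a `Gadget`, never type-checked anything, and still executed attacker-chosen code simply by loading the bytes.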
Deserialization vulnerabilities are a class of bug like XSS or SQL Injection. It just takes one careless bit of code to ruin your day, and far too many people writing that code aren’t even aware of the problem. Combine this with the fact that the code being exploited could be hiding inside any of the probably millions of third-party classes in your application, and you’re in for a bad time.
Your best fix is just not to risk it in the first place. Don’t deserialize untrusted data.
The mitigation for this class of vulnerability is to reduce the surface area available to attack. If only a limited number of objects can be reached from deserialization, those objects can be carefully audited to make sure they’re safe, and adding a new random library to your system won’t unexpectedly make you vulnerable. For example, Python’s YAML implementation provides a safe_load method that limits object deserialization to a small set of known objects, essentially reducing it to a JSON-like format.

Your best bet in Java is not to use Java serialization unless you absolutely trust whoever is producing the data. If you really want to use serialization, you can restrict the objects available to be deserialized by overriding the resolveClass method on ObjectInputStream. This way you can ensure only objects you have verified are safe will be populated during deserialization.
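The same allow-list idea has a direct stdlib analogue in Python, following the “restricting globals” pattern from the pickle documentation. This is just a sketch, and the allow-listed class here is purely illustrative:

```python
import datetime
import io
import pickle

class RestrictedUnpickler(pickle.Unpickler):
    # The Python analogue of overriding ObjectInputStream.resolveClass:
    # refuse to resolve any class that is not on an explicit allow-list.
    ALLOWED = {("datetime", "date")}

    def find_class(self, module, name):
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(
            f"{module}.{name} is not on the deserialization allow-list")

def restricted_loads(data: bytes):
    return RestrictedUnpickler(io.BytesIO(data)).load()

# Plain data and allow-listed classes round-trip fine...
assert restricted_loads(pickle.dumps([1, "two", 3.0])) == [1, "two", 3.0]
assert restricted_loads(
    pickle.dumps(datetime.date(2015, 11, 9))) == datetime.date(2015, 11, 9)

# ...but any other class is rejected before it is ever instantiated.
try:
    restricted_loads(pickle.dumps(1 + 2j))  # complex resolves via find_class
except pickle.UnpicklingError as e:
    print(e)
```

The important property is that the rejection happens while the stream is being read, before any constructor or custom deserialization logic on the attacker’s chosen class gets a chance to run.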
Or just don't use serialization for data transfer. Nine times out of ten, tightly coupling your wire format with your object model isn’t something future maintainers of your system are going to thank you for.
Edited November 9 to add the reference to the developerWorks Look-Ahead Deserialization article, after it was pointed out to me by a couple of different people.
My friends on Facebook are generally a tech-literate and cynical bunch, so the ratio of people who fell for the recent spate of “Repost this legalese to regain control of your content” chain-mail hoaxes vs the people who have posted sarcastic reactions to it is about one to twelve.
And that bugs me.
We (the tech industry, but more broadly society) have created these Internet agoras. To members, these sites are vital means of maintaining contact with friends and loved ones, of not feeling left out of important parts of their lives. But the same people will grasp at the most tenuous of straws if it gives them a slight hope that they might claw back some sense of ownership, safety and control.
Every time a social media site changes its defaults, loosens its privacy settings or tightens its licensing, we tend to take lack of action by its members as tacit acceptance that privacy and ownership just don't matter. Hoaxes like this tell us otherwise. People feel trapped and helpless in a complex, baffling system. They want a way to assert control over their online lives, and they don't understand why it's not as simple and obvious as saying “I wrote this. I took these photos. They are mine.”
- Write first draft
- Publish post
- Find a dozen things wrong with published post, frantically fix them before too many people read the article.
- GOTO 3
Number of post-publication edits for this post: 4
Remember back in 2003 when blogging was going to take over the world? When we were writing odes to blogging, building popular tools to map the blogsphere, actually using the word blogosphere with a mostly straight face, and wringing our hands over every new entrant in the field and every Google index update?
Sure, the component parts of blogging are everywhere now. The Internet is drowning in self-publishing, link-sharing, articles scrolling by in reverse-chronological order. It's no coincidence that the most popular CMS on the public Internet, by a pretty ridiculous margin, is a blogging platform.
But somewhere around a decade ago, the soul of blogging died. The heterogeneous community using syndication technologies to create collaboratively-filtered networks of trust and attention between personally-curated websites, forming spontaneous micro-communities in the negative space between them? That’s the thing we were all saying would take over the world, and instead “blogging” dwindled back to being a feature of corporate websites, a format for online journalism, and a hobby of techies who like running their own web pages.
Going back over fourteen years of my own blog history was an interesting lesson in how this blog changed over the years. There are entire classes of post that filled the pages of this site in 2002, but were nowhere to be seen five years later. Some of that was due to me changing behind the blog; much of it was due to the Internet changing around it.
So what happened to blogging?
Digg stole its community.
And then reddit and Hacker News, but Digg did it first.
Kuro5hin demanded users share substantial things they wrote themselves; everything else was “Mindless Link Propagation”. Digg took MLP and changed the shape of the Internet with it.
In doing so, Digg created a devoted platform for one of the core activities, and most common entry-points of blogging: holding conversations about things written elsewhere. Their platform was far easier to get involved in, far easier to set up, and solved that one big question of blogging newbies: “How do I get anyone to even read what I’m writing?” with centralisation and gamification.
Bloggers didn't jump ship for Digg, but equally Digg didn't contribute to blogging. Visitors from aggregation sites notoriously never looked deeper into the sites they were visiting than the single article that was linked, and the burst of syndication subscribers a blogger would normally get if one of the hubs of their community linked to them just never came from aggregation sites.
Bloggers did, however, find themselves having to take part in these communities. At first because more often than not aggregators were where the conversation was happening about the things they were writing, and writing about. Later, because they’re where readers come from. For many people trying to make money writing on the Internet today, links from reddit are how you survive.
For their part, aggregation site users tend to hold bloggers in the lowest of low esteem, even when linking to them. Blogging is narcissistic. Who are they to remain aloof from the community like that, to share links and posts on their own website instead of contributing them to the centralised collective?
It is this sense of community that even turned some aggregators into creators, beyond the surfacing of links or crowdsourced comments about them. Like “Ask Slashdot” before it, some of the most popular communities on reddit are built around user-contributed posts. Overall, though, links still rule the site.
Users of aggregators tend to reserve their greatest vitriol for sites that aggregate or republish things from their website, whether it was something original to the site, or just a link they found “first”. Users of sites built around monetising other sites’ labour get mighty tetchy when the same thing is done to them.
Twitter stole its small-talk.
Bloggers might not have jumped ship for aggregators, but they dove into Twitter head first.
It takes a lot of time and inspiration to write a long-form article, so most blogs filled the gaps between with links, funny pictures they had found around the Internet, short pithy commentary, snippets of conversation, interesting quotes, jokes, and in one case from a blogger now worth more money than you can count, an enthusiastic two sentence review of the porn site “Bang Bus”.
With Twitter you could do that on your phone, have it pushed to your friends/subscribers in real time, and have the same done back to you with equal ease. It wasn't even a competition.
Twitter still has the “How do I get people to notice me?” problem, and later developed the even more disturbing “How do I get people to stop noticing me?” problem, but that didn't stop it sucking the remaining air out of the blogosphere in the course of surprisingly few months.
What about Facebook, Instagram, Pinterest and the like? Well, from my perspective they weren't so much the successors to blogging as they were the successors to Livejournal.
Tumblr stole its future.
A curmudgeon might say I should also file Tumblr under “successors to Livejournal”, but I disagree. Tumblr sites tend far less towards being amorphous personal diaries aimed square at the author’s existing social network, and far more towards expressing the author’s interests in public, and joining the larger community that arises around them.
From one perspective, Tumblr is blogging. At today’s count they host 244 million blogs making a total of 81 million posts per day. That’s about four posts per year for every human being on Earth. Users can contribute their own posts, but just as importantly they can reblog and comment, forming spontaneous, distributed communities of interest around (and in the spaces between) the things they share from others.
From another perspective, Tumblr stole blogging. The syndication and sharing tools, the communities built within Tumblr, everything stops dead at the website's border. The tools seem almost contemptuous of the web as it exists outside Tumblr. To quote JWZ:
[Tumblr pioneered] showing the entire thread of attributions by default, and emphasizing the first and last -- but stopping cold at the walls of the Tumblr garden. To link to an actual creator, you have to take an extra step, so nobody bothers.
These may seem like small glitches, but the aggregate effect is huge. They’re what makes the “Tumblr Community” a real thing people talk about in a way you'd never hear about, say, people who happen to host their sites with Wordpress.
Centralisation and lock-in won.
In the end, the distributed, do-it-yourself web was just too hard. Not just for newcomers facing a mountainous barrier to entry, but even for incumbents looking to shave a few sources of frustration from their day. Just ask anyone who excitedly built RSS/Atom syndication into their product in the early 2000s, only to deprecate the feature gradually into the power-user margin over the ensuing decade.
In every case, a closed, proprietary system took some ingredient of the self-publishing crack bloggers discovered in the early 2000s and distilled it into a product that was easier to use, and that people were willing to adopt even though it meant losing the freedom of openness, interoperability and owning your own words.
Leaving behind a landscape of those for whom that sacrifice was not commercially attractive, and those of us who are just sufficiently set in our ways that the idea of not running our own website feels alien.
Ask me ten years ago, and I'd say a blog entry, once published, should remain that way. Oh wait, I actually did say that:
I try never to delete anything substantive. Attempting to un-say something by deleting it is really just a case of hiding the evidence. I'd much rather correct myself out in the open than pretend I was never wrong in the first place.
The reasons not to delete come down to:
- Not wanting to break the web by 404-ing a page
- Wanting to be honest about what you’ve said in public
- Keeping a record of who you were at some moment in time.
The counter-arguments are:
- The web was designed to break. And anyway, the stuff worth deleting is usually the stuff nobody’s linking to.
- Just how long does a mea culpa have to stand before it becomes self-indulgent?
- Unless you’re noteworthy and dead, or a celebrity and alive, the audience for your years-old personal diaries is particularly limited.
- Publishing on the web isn’t just something you do, and then have done. It’s an ongoing process. A website isn’t just a collection of pages, it’s a work that is both always complete, and always evolving. And every work can do with the occasional read-through with red pen in hand.
That last point is the most compelling one. I was publishing a website full of things that, however apt they were at the time to the audience they were published for, just aren’t worth reading today.
So to cut a long story short, last weekend I un-published about 700 of the previously 1800 posts on this blog; things that were no longer correct, things that were no longer relevant, things that were no longer interesting even as moments in time, and things that I no longer feel comfortable being associated with. I don't think anything that was removed will be particularly missed, and as a whole the blog is a better experience for readers without them.
The weirdest thing about deleting 700 blog posts is realising you had 1800 to start with. Although to be fair, 1750 of them were Cure lyrics drunk-posted to Livejournal.
Under the hood
It's a testament to the resilience of Moveable Type that in the eleven years since I first installed it to run this blog, I've upgraded it exactly twice. If I’d tried that with the competition, I doubt I’d have had nearly as smooth a ride.
Moveable Type got me through multiple front-page appearances on Digg, reddit, Hacker News and Daring Fireball without a hitch, or at least would have if I hadn't turned out to be woefully incompetent at configuring Apache for the simple task of serving static files.
But as they say, all good things must come to an end. Preferably with Q showing up in a time travel episode.
I replaced Moveable Type with a couple of scripts that publish a static site from a git repo, fully aware that I’m doing this at least five years after it became trendy. The site should look mostly identical, except comments and trackbacks haven't been migrated. They’re in the repo, but I'm inclined to let them stay there.