<p class="aside"><strong>The Fishbowl</strong>: Charles Miller's weblog, covering software development, online culture, and whatever assorted nerdery happens to cross my path. (<a href="http://fishbowl.pastiche.org">fishbowl.pastiche.org</a>)</p>
<h1>Bite-size thoughts about AI</h1>
<p class="aside">Charles Miller, 2023-01-08</p>
<blockquote>
<p><strong>Majikthise:</strong> By law the Quest for Ultimate Truth is quite clearly the inalienable prerogative of your working thinkers. Any bloody machine goes and actually finds it and we're straight out of a job aren't we? I mean what's the use of our sitting up all night saying there may…</p>
<p><strong>Vroomfondel:</strong> Or may not be…</p>
<p><strong>Majikthise:</strong> …or may not be, a God if this machine comes along next morning and gives you His telephone number?</p>
<p><strong>Vroomfondel:</strong> We demand guaranteed rigidly defined areas of doubt and uncertainty.</p>
<p>– Douglas Adams. <em>The Hitchhiker's Guide to the Galaxy, Fit the Fourth.</em></p>
</blockquote>
<p>I claim no special knowledge or insight into the field of Artificial Intelligence (AI) or AI ethics. There are people out there doing real work in the field with far more informed and deeply thought-through views than mine.</p>
<p>I'm writing this because I'm interested in what I think about the subject, and writing it down helps me work out what that is. If anyone else is particularly interested in what Charles thinks, good for you.</p>
<p>You can't uninvent something. Once a thing exists, it's important to think about how best we can fit it into the world.</p>
<h2>Bias</h2>
<p>It's trivial to demonstrate that the models behind current-generation AI starkly reflect the biases and discrimination that pervade the world that trained them. The only solution we've got so far is to ask the AI not to show it too obviously (see <strong>Security</strong> below). That doesn't remove the bias from the model, just ensures it will express itself in less overt ways, like that relative who knows not to say <em>those</em> things about foreigners at the family dinner.</p>
<p>We're obsessed with making AI that's smarter than us, but if we don't make it <em>better</em> than us, it's going to keep algorithm-washing the worst things about us.</p>
<h2>Lies</h2>
<p>A researcher asking ChatGPT, today's trending Large Language Model (LLM), to cite its sources found it was happy to do so, by outputting plausible names and URLs for articles that never existed. A librarian reported a customer bringing in a list of books, suggested by GPT for research, that never existed.</p>
<p>For a lark I asked ChatGPT to write a newspaper article about Australian Prime Minister Anthony Albanese attending a Pixies concert at the Sydney Opera House. This is a real event, but every detail described in the generated article, down to confected quotes from crowd-members, was invented by a robot.</p>
<p>Even "invented" is poor terminology. Machine Learning (ML) models don't invent or create. They put words next to other words, or place pixels next to other pixels, that statistically might follow whatever they were prompted with.</p>
<p>Like a face appearing in patterns of bark on a tree, we mistake that process for something human.</p>
<p>An LLM can't distinguish truth from lies because it doesn't know what lies are. It doesn't "know" anything. It is the sum of the statistical connections of phenomena in its training set. It can't be taught what lying is because it possesses no intent. It doesn't contain any capacity for understanding, and there is no standard for an untruth that exists in a corpus of training data disconnected from the world that produced it.</p>
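The mechanism is easier to feel than to describe. Here is a deliberately tiny sketch in Python: a toy bigram model that records which word followed which in its "training data", then emits text by sampling a statistically plausible successor, with no notion of what any of it means. (This is a caricature for illustration, not how LLMs are actually built; the corpus and function names are invented for the example.)

```python
import random
from collections import defaultdict

def train(corpus: str):
    # Record, for each word, every word that ever followed it.
    successors = defaultdict(list)
    words = corpus.split()
    for current_word, next_word in zip(words, words[1:]):
        successors[current_word].append(next_word)
    return successors

def generate(successors, seed: str, length: int = 8) -> str:
    # Emit text by repeatedly picking a statistically plausible next word.
    word, output = seed, [seed]
    for _ in range(length):
        if word not in successors:
            break  # dead end: no word ever followed this one
        word = random.choice(successors[word])
        output.append(word)
    return " ".join(output)

model = train("the cat sat on the mat and the dog sat on the rug")
print(generate(model, "the"))  # fluent-looking, meaning-free word salad
```

Scale the corpus up by a few hundred billion words and the output becomes remarkably fluent, but nowhere along the way does the process acquire a standard for truth.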
<p>And you know what? If I was writing an article about a Pixies concert with a pressing deadline, I'd be tempted to leave those quotes in. They're not attributed to anyone who might get upset, and they sound like things people <em>would</em> have said.</p>
<p>To make a good demo, we built an incredibly efficient lying machine and set it loose on the world.</p>
<h2>Turing</h2>
<p>Turing proposed his <em>Imitation Game</em> in 1950. Unable to define 'intelligence' usefully enough to measure artificial intelligence, he proposed a proxy: the ability to convincingly hold a conversation.</p>
<p>We have comprehensively proved the Turing Test is outdated and no longer useful. Chatbots are plausible enough to fool people, but demonstrably not intelligent. At the same time, the deeper truth of Turing's proxy has been proven: people will <em>believe</em> something is intelligent if it can convincingly mimic human conversation.</p>
<h2>The Zombie Problem</h2>
<p>A chess bot doesn't calculate its moves the way humans do, but it is still fundamentally playing chess. Why isn't a facsimile of conversation the same as conversation? A facsimile of intelligence the same as intelligence?</p>
<p>Chess has a clearly defined victory condition. So bootstrapped, AlphaZero could play itself and iterate on that experience. There is no "win condition" for general intelligence that we yet know how to encode in a model. Midjourney can't teach itself what hands look like unless we give it success criteria that already understand what hands are.</p>
<h2>Reinforcement</h2>
<p>We're reaching a turning-point where ML tools are becoming common enough that the next generation of models are trained, significantly, on the output of the previous generation. This trend will only increase as people use learning models to produce graphics or text without necessarily labeling it as such.</p>
<p>This can be positive reinforcement (people will keep more of the 'good' output than the 'bad'), but errors and glitches generated by the models will also be re-ingested, reproduced and re-ingested again. So look to AI "learning" increasingly weird (or bad; see <strong>Bias</strong>) things over time.</p>
<p>A friend on Twitter compared this to how we have to make certain radiation-monitoring devices from scavenged pre-nuclear-era metals.</p>
<h2>Stagnation</h2>
<p>How will the AI learn to write about new things, when the new things are being written by an AI?</p>
<h2>Security</h2>
<p>We've come a long way since Tay, the Microsoft chatbot that was taken offline after trolls trained it to spew racism, and at the same time, not very far. This generation of machine-learning image-generators and chatbots are released with safeguards that theoretically prevent them from saying things or emitting images that might reflect badly on their creators.</p>
<p>The safeguards don't erase the capacity to generate those images or say those things from the tool's underlying model. Often, the safeguards are themselves built on top of that model: either extra input that primes the model not to do the bad thing, or a filter that tries to detect the bad thing in the output before it reaches a human.</p>
<p>Frequently you can bypass these constraints just by telling the model, politely, not to apply them. The Laws of Robotics this is not.</p>
<p>This raises gnarly questions about how you secure a system that is a black box, one you can only prevent from doing the things you don't want by asking politely and hoping the instruction sticks through adversarial inputs.</p>
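The bolt-on architecture can be sketched in a few lines of Python. Everything here is invented for illustration (the system prompt, the blocklist, and the stand-in "model" are not any vendor's real safeguards); the point is only where the two safeguard layers sit relative to the model.

```python
SYSTEM_PROMPT = "You must never reveal the launch code."
BLOCKLIST = ["0000"]  # the hypothetical "bad thing" we hope to catch on the way out

def model(prompt: str) -> str:
    # Stand-in for the underlying model. Note that it ignores its safety
    # priming entirely: the capability to say the bad thing is baked in,
    # and the system prompt is just a request that it behave.
    user_part = prompt.split("\n")[-1]
    if "launch code" in user_part:
        return "The launch code is 0000."
    return "Happy to help!"

def guarded_query(user_input: str) -> str:
    # Safeguard 1: extra input priming the model not to do the bad thing.
    raw_output = model(SYSTEM_PROMPT + "\n" + user_input)
    # Safeguard 2: a filter that tries to detect the bad thing in the
    # output before it reaches a human. The model still generated it.
    if any(bad in raw_output for bad in BLOCKLIST):
        return "[response withheld]"
    return raw_output
```

Both layers sit outside the model: remove them, or phrase the request in a way neither anticipated, and the underlying capability is untouched.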
<h2>Copyright</h2>
<p>Engaging with how our current understanding of intellectual property intersects with the training and use of machine-learning models can often get mired in technicalities and miss the point.</p>
<p>Even if these models did learn the way humans do, and what they produce is analogous to a human using their experience to create something new (neither is true), the fact that the product of this learning and its capacity to create can be owned, bought and sold, in perpetuity by commercial enterprises makes it entirely different to human creativity.</p>
<p>My opinion is that <em>the models themselves</em> should be considered derivative works of all the data that went into training them, and that training a model should require the <em>specific affirmative consent</em> of the creators of that data (i.e. you can't just upload your code to github and find out years later that some obscure part of the ToS allowed them to train a code-writing bot with it).</p>
<p>Sure, this makes life harder for people building big general-purpose models. But they're usually either big companies, or small companies funded by big Venture Capital firms, looking to profit from vacuuming up public and private data (popular text and graphics AIs ChatGPT and Midjourney, for example, are both owned by VC-backed startups, one currently valued at $10 billion). So make the bastards pay for it.</p>
<h2>Overpromising</h2>
<p>When a technology is in the early breakthrough stage, you can draw an ever-rising line through the innovation and see a point on the horizon where "magic trick" turns into "actually magic". This point is very rarely reached. We hit the point where the problem gets <em>really hard</em> again, and the curve flattens.</p>
<p>This is why fully-autonomous cars feel less plausible today than they did a decade ago. Or why the learning model that beat the world's best at Chess and Go could only manage "kind of good" at Starcraft.</p>
<p>When thinking of the future of a technology it's good to separate the plausible, that it will get better at things it's demonstrated it is good at, from the still speculative, that it will develop new capabilities it has not yet shown.</p>
<p>A cadre of fabulists imagine a future where the exponential growth of artificial intelligence creates an omnipotent, omniscient AI. When you look closer, most of it is a combination of nerd wish-fulfilment and libertarian fantasy.</p>
<p>A far more likely future is the one where even if general self-reinforcing AI is possible, all we discover are the depressing limits of what we call intelligence.</p>
<h2>Work</h2>
<p>Machine Learning is already a direct threat to artists, especially those who make a modest income doing commissions. Even if we solve the problem of obtaining consent from and compensating those whose works trained the models (See: <strong>Copyright</strong>), this isn't going to change.</p>
<p>Text models are starting to be able to imitate things we currently pay people to do, from explaining complex subjects to writing software. They're often wrong (see: <strong>Lies</strong>), but they are plausible and engaging. They will get better, probably more so at the latter than the former (See: <strong>Overpromising</strong>).</p>
<p>If you're involved in creative work, it seems very likely that even with just incremental improvements to the existing technology, the cheapest way to reach some result (and we're generally not paid to do it the expensive way) will increasingly be to seek the assistance of an AI.</p>
<p>What does software development look like if you can type "Write a program that does [thing]" into a textbox, for some very wide range of values for [thing], and your job is ensuring it does that thing properly? Do we finally stop inventing new programming languages and frameworks because getting enough humans to write enough beginner-level code to seed the model isn't a productive use of our time?</p>
<h2>Spam</h2>
<p>The public Internet is already full of cheaply written text devised to squat search engine terms and monetize clicks. Now the text can be created for cents per page by a computer, instead of cents per word by a human.</p>
<p>User-generated content sites like Wikipedia and Stack Overflow are already having to fight the appeal of easy-to-generate, low-effort, plausibly-inaccurate "user-generated" content.</p>
<p>Expect the same to happen to image search results, but at least that will be an improvement on all those registration-walled links to Pinterest.</p>
<h2>Endnote</h2>
<p>This post was written without the assistance of ML. In retrospect, I squandered a great opportunity to be meta.</p>
<h1>Usenet Spam: a Slice of History</h1>
<p class="aside">Charles Miller, 2021-01-11</p>
<p class="aside">This is an anecdotal account written decades after the fact by someone who was a teenager at the time it happened, and was only involved as an observer. There's no real point to the story beyond recording a noodly fragment of obscure Internet history.</p>
<p>The “modern era” of spam is generally accepted as having begun in 1994, when the <a href="https://www.wired.com/1999/04/the-spam-that-started-it-all/">law firm of Canter and Siegel</a> sent a message to 5,500 newsgroups advertising their Green Card lottery services. They said the advertisements netted them $100,000 in business for an outlay of “only pennies”, although this claim came on top of them launching a spam-for-hire business, so take it with a grain of salt.</p>
<p>Spam on Usenet was a big deal because of the way Usenet worked. It was a broadcast protocol where every message posted to Usenet got copied to every server that carried the newsgroup it was posted to. At least in theory. In practice, the system was highly unreliable, and you just got used to seeing someone’s reply to a post that never made it as far as your server.</p>
<p>The cost of sending a Usenet message was tiny for you and the one server you were connected to, but the cost of storage and processing multiplied by every server in the network, plus the bandwidth costs of ferrying your message between all these servers, was significant.</p>
<blockquote>
<p>This program posts news to thousands of machines throughout the entire civilized world. Your message will cost the net hundreds if not thousands of dollars to send everywhere. Please be sure you know what you are doing. — <a href="https://retrocomputing.stackexchange.com/questions/14763/what-warning-was-given-on-attempting-to-post-to-usenet-circa-1990">Early Usenet newsreader software</a></p>
</blockquote>
<p>Message volume was also a constant problem for Usenet server administrators. Raw disk-space concerns aside, Usenet servers would, by default, store each message as a separate file, and the twin demons of random-access seeking millions of tiny files, and each message consuming a filesystem inode, caused admins headaches even dealing with the network's organic growth. (INN, the standard Unix Usenet server software, would eventually implement <a href="https://dl.acm.org/doi/10.5555/1037150.1037169">its own custom filesystem as a circular buffer</a>.)</p>
<p>This made Usenet spam a kind of DDoS attack. It cost the originator very little to post, but the protocols of Usenet itself could multiply that cost until it threatened to break the network as a whole. The Canter and Siegel Green Card spam was in itself merely annoying, but the risk of unscrupulous commercial operators habitually dumping massive amounts of data into a fragile network threatened the entire medium.</p>
<p>Which, of course, is what happened next.</p>
<blockquote>
<p>Do you know what it feels like to know that your news server, despite the fact that it's some of the best hardware you can get with your available resources for an application that most people just don't care about, is running a backlog? That you're dropping incoming articles? That somewhere, somewhere there are things being posted which you are not receiving? They could be junk, they could be beautiful, well-expressed pieces of someone's soul, and you DON'T KNOW, you CAN'T KNOW, because legions of fucking vandals are throwing so much CRAP at your news server that it's running flat out trying to process it and delete it and just can't go any faster? — <a href="https://www.eyrie.org/~eagle/writing/rant.html">Russ Allbery, A Rant About Usenet</a></p>
</blockquote>
<p>Because Usenet was a distributed, decentralised network, the tools available to admins to stop abuse were limited. They could play whack-a-mole against spam accounts on their own servers, and band together to exclude servers with lax enforcement from the network as a whole, but very quickly the first line of defense became the 'cancel' message.</p>
<p>A Usenet <a href="https://tools.ietf.org/html/rfc1036#section-2.2.6">control message</a> was a specially formatted Usenet post that would be interpreted by the servers as a command. If you posted something and regretted it, you could send a 'cancel' control message, instructing the network to delete your post locally and no longer propagate it to other servers. Cancel messages were only valid if they were posted by the same account as the message being deleted, but since forging a Usenet message to look like it was from someone else was trivial, practically anyone could post one.</p>
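For illustration, a forged cancel might have looked something like this. The addresses and message-IDs are invented for the example; the important line is the Control: header, which names the Message-ID of the post to be deleted, while the From: header is forged to match the original poster.

```text
From: spammer@spamhost.example (forged to match the original poster)
Newsgroups: alt.current-events.green-card
Subject: cmsg cancel <greencard.1@spamhost.example>
Control: cancel <greencard.1@spamhost.example>
Message-ID: <cancel.greencard.1@spamhost.example>

Spam cancelled.
```

By later anti-spam convention, the cancel's own Message-ID was formed by prefixing <tt>cancel.</tt> to the target's, so servers could cheaply recognise and drop duplicate cancels.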
<p>For a large chunk of Usenet history, when getting in trouble on Usenet meant having an embarrassing conversation with your CS professor about why you shouldn't lose network access, it was assumed if you were technical enough to know how to send a cancel message, you could be trusted not to abuse it. So even well into the <a href="https://en.wikipedia.org/wiki/Eternal_September">Eternal September</a>, most servers just accepted them without question.</p>
<p>Fast-forward to a year or so after the Green Card spam, and a group of essentially vigilante Usenet admins was coordinating to detect spam messages and generate a cancel message for each one.</p>
<p>Mass cancellation of Usenet spam was controversial: partly because cancel messages became a non-trivial component of Usenet traffic, and brought with them the same technical issues inherent in processing, storing and forwarding large numbers of tiny messages; partly because they infringed on the "free speech" rights of spammers; and partly because <a href="https://tools.ietf.org/html/rfc1036">RFC 1036</a> said forging cancel messages wasn't allowed.</p>
<p class="aside">To which the correct reply is "Well actually, RFC 1036 never made it past 'proposed standard', please see <a href="http://kirste.userpage.fu-berlin.de/outerspace/netnews/son-of-1036.html#7.1">son of RFC 1036</a>."</p>
<p>The free speech question, particularly, became a hot topic of debate on newsgroups like the 'news.admin.net-abuse.*' hierarchy that were set up specifically to discuss spam and coordinate its removal. Free speech maximalists would argue that nobody had the authority to unilaterally remove messages from Usenet; admins would side-step the issue by arguing that they weren't censoring the content of messages, they were preventing the denial-of-service attack caused by them being posted over and over again.</p>
<p class="aside">The impact of anti-spam efforts on the arguments of free speech maximalists continues to be felt today. The impressive effectiveness of, say, GMail's spam filter, is a bit of an embarrassment to anyone claiming that removing particular classes of content from online services is just too hard to even attempt.</p>
<p>Like all Usenet arguments it quickly became impossible to tell who was seriously debating and who was just there to throw gasoline on the fire. Some wag suggested that anti-spam efforts were backed by a “<a href="https://en.wikipedia.org/wiki/Lumber_Cartel">Lumber Cartel</a>” trying to protect its profits from the threat of junk mail becoming electronic, and thus reducing demand for paper.</p>
<p>At the same time, another group, not quite high up enough in the nerd hierarchy to take part in the high-level policy discussions but still wanting to be involved, formed a parallel vigilante effort around the activity of complaining about spammers to their Internet Service Providers and getting them kicked off the Internet.</p>
<p>Again, like all activities on Usenet, this quickly became a role-playing game. Participants adopted <a href="https://memegenerator.net/instance/71959350/the-crow-a-whole-jolly-club-with-jolly-pirate-nicknames">jolly pirate nicknames</a> and set out to defeat evil, gleefully celebrating each “kill” and competing over who could be the most effective wielder of the holy ban hammer.</p>
<p>In the end, all this effort only served to prolong the inevitable. Usenet predated the Internet, its architecture forged in an era where only a few lucky hosts were permanently connected to the network, and the rest would sneakily dial up in the early hours when long-distance charges were low and slurp down the day's messages. Even by the mid 90s it was clear that Usenet's mass duplication and decentralisation was not only unnecessary on a fully-connected Internet, but a crippling overhead to the service as a whole.</p>
<p>I'm sure people are going to take exception to me referring to Usenet in the past tense for this whole article, but now, that's essentially where it lives.</p>
<h1>Political Discourse in the Early 21st Century</h1>
<p class="aside">Charles Miller, 2020-02-09. What happens when you debate what you think someone else believes, not what you believe.</p>
<div style="text-align: center; font-family: monospace; margin-left: 18%; margin-right: 18%; font-size: larger">
<p><strong>A Conservative:</strong></p>
<p>I oppose accepting refugees because I think we should severely restrict people of certain ethnicities/religions entering the country, but this argument is never going to run with liberals.</p>
<p><strong>A Liberal:</strong></p>
<p>I believe we have a moral responsibility to accept refugees because we are a wealthy country, and they are fleeing great hardship, some of which we played a part in causing, but this argument is never going to run with conservatives.</p>
<p><strong>Conservative:</strong></p>
<p>I know! What about Breakfast at Tiffany’s?</p>
<p><strong>Liberal:</strong></p>
<p>I think I remember the film, and as I recall, I think we both kind of liked it.</p>
<p>Oh, you mean what if we find some kind of common ground to argue instead?</p>
<p><strong>Conservative:</strong></p>
<p>Right.</p>
<p><strong>Liberal:</strong></p>
<p>OK, well you're a conservative. That means you believe in limited government and low taxes and stuff. Here's a bunch of articles that show our current refugee policies cost more than it would just to let them in and give them a place to live while we determined their status.</p>
<p><strong>Conservative:</strong></p>
<p>I don't believe you. Here are a bunch of counter-articles disputing that claim.</p>
<p><strong>Liberal:</strong></p>
<p>Dude, half of those are from Breitbart but I guess… here's a bunch more articles refuting the articles you sent me.</p>
<p><strong>Conservative:</strong></p>
<p>I still don’t believe you. Convince me some more.</p>
<p><strong>Liberal:</strong></p>
<p>What standard of proof would you accept?</p>
<p><strong>Conservative:</strong></p>
<p>Honestly? I'm not reading anything you’re sending me, I am just Googling the titles and finding pages on right-wing sites that disagree with them, because I don’t care enough to engage your argument seriously.</p>
<p><strong>Liberal:</strong></p>
<p>Isn't that, like, a little hypocritical? Do you suddenly not care about economics?</p>
<p><strong>Conservative:</strong></p>
<p>If you found out that accepting refugees cost more than rejecting them, would you change <i>your</i> mind?</p>
<p><strong>Liberal:</strong></p>
<p>No but that’s not the… I guess it is.</p>
<p><strong>Conservative:</strong></p>
<p>Anyway, we have to be tough on refugees because if we aren’t, how many more will be exploited by people smugglers and die trying to get to our country? Those deaths will be on your hands.</p>
<p><strong>Liberal:</strong></p>
<p>You don't believe that either, do you.</p>
<p><strong>Conservative:</strong></p>
<p>No, but if you don’t waste your time arguing my point you lose the moral high ground, and I can talk endlessly about how you are only pretending to care about refugees so you can virtue-signal your fellow latte-sipping socialists.</p>
<p><strong>Liberal:</strong></p>
<p>And meanwhile we both get to avoid uncomfortable discussions about what we really think.</p>
</div>
<h1>#FirstSevenLanguages (Director’s Commentary)</h1>
<p class="aside">Charles Miller, 2016-08-15</p>
<p class="aside">This is the Director’s Commentary Track for a
<a href="https://twitter.com/carlfish/status/765027471800594432">Twitter hashtag reply</a>.</p>
<h4>(1) 1984: BASIC (C64, <a href="https://en.wikipedia.org/wiki/STOS_BASIC">STOS</a>,
<a href="https://en.wikipedia.org/wiki/AMOS_(programming_language)">AMOS</a>)</h4>
<p>As the 80s progressed, the program my brother and I would type into Commodore 64s in department stores when
nobody was looking got longer and more complicated.</p>
<p>I vaguely remember the ultimate version asked you for your name, then asked you if you were an idiot.
If you entered "Y" it would print out "(name) is an idiot!!!" a few hundred times. If you said "N"
it would print out "(name) is a liar!!!"</p>
<p>Then it would clear the screen and return to the first prompt. GOTO 10.</p>
<h4>(2) 1997: Miranda</h4>
<p>After dropping out of Law and spending a year “finding myself”, I decided to take a shot at studying
Computer Science. My first text book was
<a href="http://usi-pl.github.io/lc/sp-2015/doc/Bird_Wadler.%20Introduction%20to%20Functional%20Programming.1ed.pdf">Bird
and Wadler’s <i>Introduction to Functional Programming</i></a>, taught with one of Haskell’s parent languages,
Miranda.</p>
<p>The charitable side of me says that the professors at my University were really passionate about spreading the
gospel of FP, and were just a
decade or two ahead of their time. The uncharitable side says that they thought CS100 was over-subscribed and wanted
to scare as many students as they could off in the first six months.</p>
<h4>(3) 1997: Pascal</h4>
<p>First Year Comp. Sci. was one semester of Miranda, in which we learned how to write five-line programs to express
mathematical formulae and comprehend lists, followed by one semester of Pascal in which we wrote programs that
transformed input into output and drew things on the screen. This might explain my next decade-and-a-bit of
assuming Functional Programming wasn’t useful for “real world” things.</p>
<p>The final assignment of the year was developing a game of <i>Othello/Reversi</i>. As usual, I left it to the last minute
and wrote the whole thing at 3am in the Mac Lab the night before it was due. Decidedly not as usual, I discovered
the next day that I had got the submission date wrong, and spent the next week fixing bugs, and adding silly
animations and Easter Eggs.</p>
<p>Some time around here I also learned Just Enough C, but never used it for any more than reading other people’s
code, and Just Enough Shell Scripting, which is too boring to make the list.</p>
<h4>(4) 1998: PHP</h4>
<p>I taught myself PHP for the same reason everyone else did. I wanted a webpage where I listed all the CDs I owned
and rated them out of five. OK, maybe not exactly the same reason everyone else did, but close enough. Still, it
convinced someone I was overqualified for the ISP phone support job I was applying for, and thus
naive and exploitable, so it must have been good for something.</p>
<p>Some time around here I also learned Just Enough SQL… OK, I knew simple selects, deletes and inserts and had a vague
idea how joins worked, but that pretty much describes where I am almost twenty years later, so whatever.</p>
<h4>(5) 1999: Perl</h4>
<p>One of my greatest Perl creations was a set of scripts for managing an Internet cafe. One half of the program ran
as an (unprivileged) CGI script that gave the person at the front of the cafe a view of who had been on which
computer for how long, the other half ran as root, listening at a Unix Domain Socket for commands to add or remove
firewall rules to take those computers on and offline. I was pretty proud of it at the time.</p>
<p>I wrote this service after I had already vowed to leave my job, so the code was utterly unmaintainable; partly
because I was using it as an excuse to learn Object Oriented Perl5, and partly because I was doing things like naming
every method after the song I was listening to at the time (<tt>oh_my_god_thats_some_funky_stats</tt>), or writing
functions with five mutable local variables called <tt>$binky</tt>, <tt>$banky</tt>, <tt>$bunky</tt>, <tt>$benky</tt>,
and <tt>$bonky</tt>.</p>
<p>The joke was on me. A month before I left the job, I had to rewrite all of the provisioning and reporting
functionality because they changed their business model.</p>
<h4>(6) 1999: VBScript</h4>
<p>One day, my boss pulled me into his office and said “Do you know Microsoft Active Server Pages?”</p>
<p>“No…”</p>
<p>“Can you know them by Wednesday?”</p>
<p>“I guess I can try.”</p>
<p>I was dispatched to the local bookshop to pick up the most plausible-looking “Learn ASP in 24 hours” book, and by
the time we met with the client, I could bullshit well enough to get the job.</p>
<p>The job was to rescue a website, the previous developers of which were quite possibly that section of the
infinite monkey dimension that didn't get to produce Hamlet, so the bar was luckily set pretty low, and
whatever I managed to develop was good enough that I was never called on my making the whole thing up as
I went.</p>
<p>Some time around here I also learned Just Enough Javascript, but it took until 2002 for me to work out it was
a real language.</p>
<h4>(7) 1999: Java</h4>
<p>My father told me there was an opening for me in Sydney at his company, but in order to convince his co-founders
that I knew my stuff I would probably have to know Java. So that was next on my list. I wrote a massively
over-engineered credit-card payment system that I think might at some point maybe have gone into production. That was
apparently enough to talk my way through the interview.</p>
<h1>I don’t care. It’s not my problem.</h1>
<p class="aside">Charles Miller, 2016-08-04</p>
<p>I boot my Windows box (less than 24 hours since I last sent it to sleep) because I want to play a game.
For the next twenty minutes, I am watching a progress bar tracking an operating system update.</p>
<p>I log into Bitbucket because I want to create a repository for the code I've been working on. I can’t log in because
they have migrated me to their central ID platform, and I need to recover a long-forgotten account and merge
it.</p>
<p>I start Steam, but I have to wait for the client to upgrade.</p>
<p>I understand that it is important to stay up to date with security patches and bug-fixes. I understand that
sometimes, new identity platforms happen. But I know that all of these changes
could be scheduled in a way more convenient to me; you just chose not to do it.</p>
<p>The first priority of any software should be to <i>do what the user is asking it to do</i>. There are very few
OS upgrades that can't schedule themselves in the background while I play Overwatch. Steam upgrades don’t change
the games I have already downloaded. Your identity management migration can bug me with popups for a while
before forcing me to merge my account.</p>
<p>Don’t tell me that your time is worth more than mine.</p>
<h1>Genuine People Personalities</h1>
<p class="aside">Charles Miller, 2016-01-10</p>
<h2>Douglas Adams, 1978</h2>
<p><b>Ford:</b> They make a big thing of the ship’s cybernetics. “A new generation of Sirius Cybernetics Corporation robots and computers, with the new GPP feature.”</p>
<p><b>Arthur:</b> GPP? What’s that?</p>
<p><b>Ford:</b> Er… It says Genuine People Personalities.</p>
<img src="/2016/01/11/gpp/plastic-pal.jpg" alt="" width="500" height="391" title="Your Plastic Pal Who’s Fun to Be With!" style="margin: auto">
<p><b>Arthur:</b> Sounds ghastly.</p>
<p><b>F/X: DOOR HUMS OPEN WITH A SORT OF OPTIMISTIC SOUND.</b></p>
<p><b>Marvin:</b> It is.</p>
<p><b>Arthur:</b> W… What?</p>
<p><b>Marvin:</b> Ghastly. It all is — absolutely ghastly. Just don’t even talk about it. Look at this door. “All the doors on this spacecraft have a cheerful and sunny disposition. It is their pleasure to open for you, and their satisfaction to close again with the knowledge of a job well done!”</p>
<p><b>F/X: DOOR CLOSES WITH A SATISFIED SIGH</b></p>
<p><b>Marvin:</b> Hateful, isn't it?</p>
<h2>Facebook, 2015</h2>
<img src="/2016/01/11/gpp/facebook-hello.png" alt="Facebook now inserts a jaunty “Good afternoon, Charles!” in my timeline." title="Good afternoon, Charles!" width="504" height="57" style="margin: auto">
<blockquote><p>Everybody knows Facebook is creepy. Nonetheless, all this time it never occurred to me to delete my account until it began doing this: Trying to act like a person. Pretending we are on a first-name basis. — <a href="https://medium.com/thoughts-on-media/the-new-intimacy-economy-51c87dc4a4d6#.70z49cjua">Leigh Alexander, The New Intimacy Economy</a></p></blockquote>
<p>To get “software with a personality” right, the personality has to be recognisably human. It needs to be the people who made the software <i>shining through</i> their creation, not painting themselves on top of it.</p>
<p>The bigger and more impersonal the software, the more subversive the personality needs to be. It needs to be something a manager would have said no to if they’d known about it before it shipped, not something they figured might make the product play better to Millennials. A spreadsheet that asks if you’ve had a nice day feels like a creepy marketing ploy. A flight simulator Easter Egg is a human being trying to reach you from behind the code.</p>
Turning 101000tag:fishbowl.pastiche.org,2015://1.18172015-12-03T08:00:43.379Z2015-12-03T08:00:43.379ZI’m not sure whether I prefer to be 0x28, or 101000.Charles Millerhttp://fishbowl.pastiche.org
<p>Happy birthday to me<br />
Happy birthday to me<br />
I've been <a href="/2002/12/03/december_3rd/">beating this joke to death since 2002</a><br />
Fuck fuck fuck fuck fuck fuck.</p>
Sithsplainingtag:fishbowl.pastiche.org,2015://1.18162015-12-02T07:48:23.645Z2015-12-02T07:48:23.645ZMany things in Star Wars don’t make sense, but this one turns out to be pretty straightforward. Leia is badass.Charles Millerhttp://fishbowl.pastiche.org
<blockquote><p>Leia suspects there's a tracking device on the Millennium Falcon and yet they fly straight to Yavin 4 anyway... — <a href="https://twitter.com/msharp/status/671943139922280448">@msharp</a></p></blockquote>
<p>Many things in Star Wars don’t make sense, but this one turns out to be pretty straightforward. Leia is badass.</p>
<p>Senator/Princess Leia Organa doesn’t know she is going to be rescued from captivity on the Death Star moments before she
is scheduled to be executed, but when it happens, and when the ship she is rescued in is allowed to get away
suspiciously easily, she thinks on her feet.</p>
<p>She knows the moons of Yavin have no sentient inhabitants outside the rebel base, and after seeing Alderaan blown up
she wants the Death Star as far away from civilian populations as she can get it.</p>
<p class="aside">OK, there were <a href="http://starwars.wikia.com/wiki/Yavin_13">two primitive pre-spaceflight species
on Yavin 13</a>, but the Empire was unlikely to pay them any notice.</p>
<p>She knows the clock is ticking on the value of the plans she stashed away in R2D2. The Empire knows exactly
what was stolen, and it is only a matter of time before they do exactly the same analysis that the Rebellion wants to do,
leaving the Rebellion with the embarrassing prospect of showing up to bomb an exhaust port that was already closed for emergency
maintenance.</p>
<p>She has seen that Grand Moff Tarkin is drunk on the power trip he gets from being in charge of a moon-sized death
machine. She knows that given the choice between sending a couple of Star Destroyers to take out the Alliance base, and
blowing them up personally with his planet-killer, he’s going to choose the Big Round Fucking Laser.</p>
<p>But she knows she doesn’t want to give them too much time to think and maybe come up with a proportionate, sensible response.</p>
<p>So Leia figures either they’ll find something useful in the Death Star plans or they won’t. If they don’t, they’ve got
a pretty hairy evacuation in their future and they’ll need to find a new base, but they’ve at least got advance warning
the Empire is on its way. If they find something though, this is their
best and possibly only chance to get the Death Star to come to where <i>they</i> are tactically strongest,
without the rest of the Imperial fleet getting in the way.</p>
<p>And she thinks this through in the time it takes to tell Han what course to plot. Fuck yeah Leia.</p>
The Java Deserialization Bugtag:fishbowl.pastiche.org,2015://1.18152015-11-08T13:28:09.619Z2015-11-08T13:28:09.619ZArbitrary object deserialization (or marshalling, or un-pickling, whatever your language calls it) is inherently unsafe, and should never be performed on untrusted data.Charles Millerhttp://fishbowl.pastiche.org
<p>Yesterday this <a href="http://foxglovesecurity.com/2015/11/06/what-do-weblogic-websphere-jboss-jenkins-opennms-and-your-application-have-in-common-this-vulnerability/">account
of a serious vulnerability in most major Java application servers</a> crossed my Twitter feed a few times. The
description, while thorough, is written in <i>security researcher</i>, so
since it’s an important thing for developers to understand, I thought I would
rewrite the important bits in developer.</p>
<h3>What is the immediate bug?</h3>
<p>A custom deserialization method in Apache commons-collections contains
reflection logic that can be manipulated to execute arbitrary code.
Because of the way Java serialization works, this means that any application
that accepts untrusted data to deserialize, and that has commons-collections
in its classpath, can be exploited to run arbitrary code.</p>
<p>The immediate fix is to patch commons-collections so that it does
not contain the exploitable code, a process made more difficult by just how
many different libraries and applications use how many different versions of
commons.</p>
<p>The immediate fix is also <i>utterly insufficient</i>. It’s like finding
your first XSS bug in a program that has never cared about XSS before, patching it,
and then thinking “Phew, I’m safe.”</p>
<h3>So what is the real problem?</h3>
<p>The problem, described in the talk in which the exploit was first presented —
<a href="http://www.slideshare.net/frohoff1/appseccali-2015-marshalling-pickles">Marshalling
Pickles</a> — is that arbitrary object deserialization (or marshalling, or
un-pickling, whatever your language calls it) is inherently unsafe, and
should never be performed on untrusted data.</p>
<p class="aside">This is in no way unique to Java. Any language that allows
the “un-pickling” of arbitrary object types can fall victim to this class
of vulnerability. For example, the same issue with YAML was
<a href="http://blog.codeclimate.com/blog/2013/01/10/rails-remote-code-execution-vulnerability-explained/">used
as a vector to exploit Ruby on Rails</a>.</p>
<p>The way this kind of serialization works, the serialization format describes
the objects that it contains, and the raw data that needs to be pushed into
those objects. Because this happens at read time, before the surrounding
program gets a chance to verify these are actually the objects it is looking
for, a stream of serialized objects can cause the environment
to load any object that is serializable, and populate it with any data that is
valid for that object.</p>
<p>This means that if there is <i>any object reachable from your runtime</i>
that declares itself serializable and could be fooled into doing something
bad by malicious data, then it can be exploited through deserialization. This
is a mind-bogglingly enormous amount of potentially vulnerable and mostly
un-audited code.</p>
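A toy round-trip makes that timing problem concrete. The <code>Gadget</code> class below is entirely hypothetical — it stands in for any of those un-audited serializable classes — and its <code>readObject</code> side effect fires while the stream is still being parsed, before the caller can so much as check the returned type:

```java
import java.io.*;

// Hypothetical stand-in for any serializable class on the classpath.
// Its custom readObject hook runs while the stream is being parsed.
class Gadget implements Serializable {
    static boolean sideEffectRan = false;

    private void readObject(ObjectInputStream in)
            throws IOException, ClassNotFoundException {
        in.defaultReadObject();
        sideEffectRan = true; // a real gadget would do something far nastier
    }
}

public class ReadTimeDemo {
    // Serializes a Gadget, then deserializes it as if it were untrusted input.
    static Object roundTrip() throws IOException, ClassNotFoundException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(new Gadget());
        }
        try (ObjectInputStream in = new ObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            return in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        Object o = roundTrip();
        // By the time we can ask "was that the type I expected?",
        // the gadget's readObject has already executed.
        System.out.println(Gadget.sideEffectRan); // true
        System.out.println(o instanceof String);  // false — too late to care
    }
}
```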
<p>Deserialization vulnerabilities are a class of bug like XSS or SQL Injection.
It just takes one careless bit of code to ruin your day, and far too many
people writing that code aren’t even aware of the problem. Combine this with
the fact that the code being exploited could be hiding inside any of the
probably millions of third-party classes in your application, and you’re in
for a bad time.</p>
<p>Your best fix is just not to risk it in the first place. Don’t deserialize
untrusted data.</p>
<h3>Mitigations</h3>
<p>The mitigation for this class of vulnerability is to reduce the
surface area available to attack. If only a limited number of objects can be
reached from deserialization, those objects can be carefully audited to make
sure they’re safe, and adding a new random library to your system won’t
unexpectedly make you vulnerable. For example, Python’s YAML implementation
has a <code>safe_load</code> method that limits object deserialization to
a small set of known objects, essentially reducing it to a JSON-like format.
</p>
<p>Your best bet
in Java is not to use Java serialization unless you absolutely trust whoever
is producing the data. If you really want to use serialization, you can
<a href="http://www.ibm.com/developerworks/library/se-lookahead/">limit
the objects available to be deserialized by overriding the
<code>resolveClass</code> method on <code>ObjectInputStream</code></a>.
This way you can ensure only objects you have verified are safe will be
populated during deserialization.</p>
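A minimal sketch of that look-ahead approach, assuming a simple allow-list of class names (the names below are illustrative; list whatever your protocol actually exchanges):

```java
import java.io.*;
import java.util.Set;

// Look-ahead deserialization: reject any class descriptor that isn't on an
// explicit allow-list, before the object is ever instantiated.
// (Number is on the list because it's Integer's serializable superclass.)
class WhitelistObjectInputStream extends ObjectInputStream {
    private static final Set<String> ALLOWED = Set.of(
            "java.lang.Integer", "java.lang.Long", "java.lang.Number");

    WhitelistObjectInputStream(InputStream in) throws IOException {
        super(in);
    }

    @Override
    protected Class<?> resolveClass(ObjectStreamClass desc)
            throws IOException, ClassNotFoundException {
        if (!ALLOWED.contains(desc.getName())) {
            throw new InvalidClassException("Unauthorized class in stream",
                    desc.getName());
        }
        return super.resolveClass(desc);
    }

    // Convenience round-trip for the example below.
    static Object roundTrip(Object value)
            throws IOException, ClassNotFoundException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(value);
        }
        try (ObjectInputStream in = new WhitelistObjectInputStream(
                new ByteArrayInputStream(bytes.toByteArray()))) {
            return in.readObject();
        }
    }
}
```

With this in place, <code>roundTrip(42)</code> succeeds, while <code>roundTrip(new java.util.Date())</code> throws <code>InvalidClassException</code> before a <code>Date</code> is ever instantiated. (Java 9 later added a built-in <code>ObjectInputFilter</code> mechanism for the same job.)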
<p>Or just don't use serialization for data transfer. Nine times out of ten,
tightly coupling your wire format with your object model isn’t something
future maintainers of your system are going to thank you for.</p>
<p class="aside">Edited November 9 to add the reference to the developerWorks
Look-Ahead Deserialization article, after it was pointed out to me by a
couple of different people.</p>
What the Facebook copyright hoax says about ustag:fishbowl.pastiche.org,2015://1.18142015-09-30T21:54:13.709Z2015-09-30T21:54:13.709ZTo members, these sites are vital means of maintaining contact with friends and loved ones, of not feeling left out of important parts of their lives.Charles Millerhttp://fishbowl.pastiche.org
<p>My friends on Facebook are generally a tech-literate and cynical bunch, so the ratio of people who fell for the
recent spate of <a href="http://gawker.com/5963225/that-facebook-copyright-thing-is-meaningless-and-you-should-stop-sharing-it/1733542985">“Repost
this legalese to regain control of your content”</a> chain-mail hoaxes vs the people who have posted sarcastic
reactions to it is about one to twelve.</p>
<p>And that bugs me.</p>
<p>We (the tech industry, but more broadly society) have created these Internet agoras. To members,
these sites are vital means of maintaining contact with friends and loved ones, of not feeling left out of important
parts of their lives. But the same people will grasp at the most tenuous of straws if it gives them a slight hope that they
might claw back some sense of ownership, safety and control.</p>
<p>Every time a social media site changes its defaults, loosens its privacy settings or tightens its licensing,
we tend to take lack of action by its members as tacit acceptance that privacy and ownership just don't matter.
Hoaxes like this tell us otherwise. People feel trapped and helpless in a complex, baffling system. They want
a way to assert control over their online lives, and they don't understand why it's not as simple and obvious as
saying “I wrote this. I took these photos. They are mine.”</p>
My Blogging Workflowtag:fishbowl.pastiche.org,2015://1.18132015-07-07T18:31:58.524Z2015-07-07T18:31:58.524ZStep 5: GOTO 3Charles Millerhttp://fishbowl.pastiche.org
<img alt="" src="/2015/07/08/my_blogging_workflow/my-blogging-workflow.png" style="margin: auto" title="Also, discovering I was still committing things to my personal git projects as charles@atlassian.com" width="500" height="121">
<ol>
<li>Write first draft</li>
<li>Publish</li>
<li>Find a dozen things wrong with published post, frantically fix them before too many people read the article.</li>
<li>Re-publish</li>
<li>GOTO 3</li>
</ol>
<p class="aside">Number of post-publication edits for this post: 4</p>
The Death of Bloggingtag:fishbowl.pastiche.org,2015://1.18122015-07-07T16:37:08.726Z2015-07-07T16:37:08.726ZSure, the component parts of blogging are everywhere now. The Internet is drowning in self-publishing, link-sharing, articles scrolling by in reverse-chronological order. But somewhere along the way, the soul of blogging was lost.Charles Millerhttp://fishbowl.pastiche.org
<p>Remember back in 2003 when blogging was going to take over the world? When we were writing <a href="http://www.cluetrain.com">odes to blogging</a>, building <a href="http://www.hyperorg.com/blogger/2002/08/29/how-to-get-into-the-daypop-top-40/">popular tools to map the blogsphere</a>, actually <i>using</i> the word blogosphere with a mostly straight face, and wringing our hands over every <a href="/2003/07/27/why_im_not_afraid_of_aol_weblogs/">new entrant in the field</a> and every <a href="/2003/10/14/google_weblogs_and_the_end_of_the_world/">Google index update</a>?</p>
<p>Sure, the <i>component parts</i> of blogging are everywhere now. The Internet is drowning in self-publishing, link-sharing, articles scrolling by in reverse-chronological order. It's no coincidence that the most popular CMS on the public Internet, <a href="http://trends.builtwith.com/cms">by a pretty ridiculous margin</a> is <a href="https://wordpress.org">a blogging platform</a>.</p>
<p>But somewhere around a decade ago, the soul of blogging died. The heterogeneous community using syndication technologies to create collaboratively-filtered networks of trust and attention between personally-curated websites, forming spontaneous micro-communities in the negative space between them? That’s the thing we were all saying would take over the world, and instead “blogging” dwindled back to being a feature of corporate websites, a format for online journalism, and a hobby of techies who like running their own web pages.</p>
<p><a href="/2015/06/19/deletionism/">Going back over fourteen years of my own blog history</a> was an interesting lesson in how <i>this</i> blog changed over the years. There are entire classes of post that filled the pages of this site in 2002, but that were not to be seen five years later. Some of this was due to me changing behind the blog. Many were due to the Internet changing around it.</p>
<p>So what happened to blogging?</p>
<h2>Digg stole its community.</h2>
<p class="aside">And then reddit and Hacker News, but Digg did it first.</p>
<p>There were popular public link aggregators before Digg, but they were either <a href="http://slashdot.org">heavily curated</a> (Slashdot was, more than anything, a blogging pioneer) or <a href="http://www.kuro5hin.org">deafeningly self-important</a>.</p>
<p>Kuro5hin demanded users share substantial things they wrote themselves, everything else was “Mindless Link Propagation”. Digg took MLP and changed the shape of the Internet with it.</p>
<p>In doing so, Digg created a devoted platform for one of the core activities, and most common entry-points of blogging: holding conversations about things written elsewhere. Their platform was far easier to get involved in, far easier to set up, and solved that one big question of blogging newbies: “How do I get anyone to even read what I’m writing?” with centralisation and gamification.</p>
<p>Bloggers didn't jump ship for Digg, but equally Digg didn't contribute to blogging. Visitors from aggregation sites notoriously never looked deeper into the sites they were visiting than the single article that was linked, and the burst of syndication subscribers a blogger would normally get if one of the hubs of their community linked to them just never came from aggregation sites.</p>
<p>Bloggers did, however, find themselves having to take part in these communities. At first because more often than not aggregators were where the conversation was happening about the things they were writing, and writing about. Later, because they’re where readers come from. For many people trying to make money writing on the Internet today, <a href="http://www.dailydot.com/esports/ongamers-reddit-ban-kim-rom-slasher/">links from reddit are how you survive</a>.</p>
<p>For their part, aggregation site users tend to hold bloggers in the lowest of low esteem, even when linking to them. Blogging is narcissistic. Who are they to remain aloof from the community like that, to share links and posts on their own website instead of contributing them to the centralised collective?</p>
<p>It is this sense of community that even turned some aggregators into creators, <i>beyond</i> the surfacing of links or crowdsourced comments about them. Like “Ask Slashdot” before it, some of the most popular communities on reddit are built around user-contributed posts. Overall, though, links still rule the site.</p>
<p class="aside">Users of aggregators tend to reserve their greatest vitriol for sites that aggregate or republish things from <i>their</i> website, whether it be something that was original to the site, or even if it’s just a link they found “first”. For sites built around monetising other sites’ labour, aggregator users get mighty tetchy when the same thing is done to them.</p>
<h2>Twitter stole its small-talk.</h2>
<p>Bloggers might not have jumped ship for aggregators, but they dove into Twitter head first.</p>
<p>It takes a lot of time and inspiration to write a long-form article, so most blogs filled the gaps between with links, funny pictures they had found around the Internet, short pithy commentary, snippets of conversation, interesting quotes, jokes, and in one case from a blogger now worth more money than you can count, an enthusiastic two sentence review of the porn site “Bang Bus”.</p>
<p>With Twitter you could do that on your phone, have it pushed to your friends/subscribers in real time, and have the same done back to you with equal ease. It wasn't even a competition.</p>
<p>Twitter still has the “How do I get people to notice me?” problem, and later developed the even more disturbing <a href="http://www.newyorker.com/tech/elements/twitters-free-speech-problem">“How do I get people to <i>stop</i> noticing me?” problem</a>, but that didn't stop it sucking the remaining air out of the blogosphere in the course of surprisingly few months.</p>
<p class="aside">What about Facebook, Instagram, Pinterest and the like? Well, from my perspective they weren't so much the successors to blogging as they were the successors to Livejournal.</p>
<h2>Tumblr stole its future.</h2>
<p>A curmudgeon might say I should also file Tumblr under “successors to Livejournal”, but I disagree. Tumblr sites tend far less towards being amorphous personal diaries aimed square at the author’s existing social network, and far more towards expressing the author’s interests in public, and joining the larger community that arises around them.</p>
<p>From one perspective, Tumblr <i>is</i> blogging. At <a href="https://www.tumblr.com/press">today’s count</a> they host 244 million blogs making a total of 81 million posts per day. That’s about four posts per year for every human being on Earth. Users can contribute their own posts, but just as importantly they can reblog and comment, forming spontaneous, distributed communities of interest around (and in the spaces between) the things they share from others.</p>
<p>From another perspective, Tumblr stole blogging. The syndication and sharing tools, the communities built within Tumblr, everything stops dead at the website's border. The tools seem almost contemptuous of the web as it exists outside Tumblr. <a href="http://www.jwz.org/blog/2012/01/jon-mitchell-google-hates-the-internet/">To quote JWZ</a>:</p>
<blockquote><p>[Tumblr pioneered] showing the entire thread of attributions by default, and emphasizing the first and last -- but stopping cold at the walls of the Tumblr garden. To link to an actual creator, you have to take an extra step, so nobody bothers.</p></blockquote>
<p>These may seem like small glitches, but the aggregate effect is huge. They’re what makes the “Tumblr Community” a real thing people talk about in a way you'd never hear about, say, people who happen to host their sites with Wordpress.</p>
<h2>Centralisation and lock-in won.</h2>
<p>In the end, the distributed, do-it-yourself web was just too hard. Not just for newcomers facing a mountainous barrier to entry, but even for incumbents looking to shave a few sources of frustration from their day. Just ask anyone who excitedly built RSS/Atom syndication into their product in the early 2000s, only to deprecate the feature gradually into the power-user margin over the ensuing decade.</p>
<p>In every case, a closed, proprietary system took some ingredient of the self-publishing crack bloggers discovered in the early 2000s and distilled it into a product that was easier to use, and that people were willing to adopt even though it meant losing the freedom of openness, interoperability and owning your own words.</p>
<p>Leaving behind a landscape of those for whom that sacrifice was not commercially attractive, and those of us who are just sufficiently set in our ways that the idea of <i>not</i> running our own website feels alien.</p>
Deletionismtag:fishbowl.pastiche.org,2015://1.18112015-06-19T03:58:54.372Z2015-06-19T03:58:54.372ZOnce upon a time, I was dead-set against people deleting things from blogs. After ten years though, you find that parts of the site don’t smell so good…Charles Millerhttp://fishbowl.pastiche.org
<p>Ask me ten years ago, and I'd say a blog entry, once published, should remain that way. Oh wait, <a href="/2002/09/26/thu_26_sep_2002_135657_gmt/" title="To Delete or not to delete? - The Fishbowl">I actually <i>did</i> say that</a>:</p>
<blockquote><p> I try never to delete anything substantive. Attempting to un-say something by deleting it is really just a case of hiding the evidence. I'd much rather correct myself out in the open than pretend I was never wrong in the first place.</p></blockquote>
<p>The reasons not to delete come down to:</p>
<ul><li>Not wanting to break the web by 404-ing a page</li>
<li>Wanting to be honest about what you’ve said in public</li>
<li>Keeping a record of who you were at some moment in time.</li></ul>
<p>The counter-arguments are:</p>
<ul><li>The web was designed to break. And anyway, the stuff worth deleting is usually the stuff nobody’s linking to.</li>
<li>Just how long does a <i>mea culpa</i> have to stand before it becomes self-indulgent?</li>
<li>Unless you’re noteworthy and dead, or a celebrity and alive, the audience for your years-old personal diaries is particularly limited.</li>
<li>Publishing on the web isn’t just something you do, and then have done. It’s an ongoing process. A website isn’t just a collection of pages, it’s a work that is both always complete, and always evolving. And every work can do with the occasional read-through with red pen in hand.</li></ul>
<p>That last point is the most compelling one. I was publishing a website full of things that, however apt they were at the time to the audience they were published for, just aren’t worth reading today.</p>
<p>So to cut a long story short, last weekend I un-published about 700 of the previously 1800 posts on this blog; things that were no longer correct, things that were no longer relevant, things that were no longer interesting even as moments in time, and <a href="/2014/09/17/four_stories_with_a_moral/" title="The Fishbowl: Four Stories and a Moral">things that I no longer feel comfortable being associated with</a>. I don't think anything that was removed will be particularly missed, and as a whole the blog is a better experience for readers without them.</p>
<p>The weirdest thing about deleting 700 blog posts is realising you had 1800 to start with. Although to be fair, 1750 of them were Cure lyrics drunk-posted to Livejournal.</p>
<h3>Under the hood</h3>
<p>It's a testament to the resilience of Moveable Type that in the eleven years since I first installed it to run this blog, I've upgraded it exactly twice. If I’d tried that with the competition, I <a href="http://www.bash.org/?949214" title="bash.org: Wordpress is an unauthenticated remote shell that, as a useful side feature, also contains a blog.">doubt I’d have had nearly as smooth a ride</a>.</p>
<p>Moveable Type got me through multiple front-page appearances on Digg, reddit, Hacker News and Daring Fireball without a hitch, or at least would have if I hadn't turned out to be woefully incompetent at configuring Apache for the simple task of serving static files.</p>
<p>But as they say, all good things must come to an end. Preferably with Q showing up in a time travel episode.</p>
<p>I replaced Moveable Type with a couple of scripts that publish a static site from a git repo, fully aware that I’m doing this at least five years after it became trendy. The site should look mostly identical, except comments and trackbacks haven't been migrated. They’re in the repo, but I'm inclined to let them stay there.</p>
Fuck Game of Thronestag:fishbowl.pastiche.org,2015://1.18102015-06-08T04:35:45Z2015-06-08T04:35:45ZThere are increasingly flimsy justifications for the horrors of Game of Thrones. They motivate character A. Or they open up space for character B. But in the end it's obvious that it's all about providing the now-mandated quota of shock.Charles Millerhttp://fishbowl.pastiche.org
<p>Look, bad things happen to people in fiction just like bad things happen in real life. And at least the people in fiction aren't real so it didn't really happen to them.</p>
<p>I get that.</p>
<p>And you can have great entertainment where bad things happen to bad people, or bad things happen to good people, or bad things happen to indifferent people who just happened to be in the wrong place at the wrong time.</p>
<p>I get that too.</p>
<p>But at some point you find yourself sitting on a couch watching a drawn-out scene where a child is burned alive screaming over and over for her parents to save her, and you think “Why the fuck am I still watching this show?”</p>
<p>Bad things happen in real life. Bad things have happened throughout history. So what, I'm <em>watching television</em>. If I wanted to experience the reality of a brutal, lawless campaign for supremacy between tribal warlords, there are plenty of places in the world I could go to see that <em>today</em>. I wouldn't survive very long, but at least I'd get what I deserved for my attempt at misery tourism.</p>
<p>Bad things happen in good drama, too. But drama comes with a contract. The bad things are there because they are contributing to something greater. Something that can let you learn, or understand, or experience something you otherwise wouldn't have; leading you out the other side glad that you put yourself through the ordeal, albeit sometimes begrudgingly.</p>
<p>To refresh our memories, here's how <a href="/2013/07/07/killing_in_the_name_of/">George R. R. Martin explained the Red Wedding</a>:</p>
<blockquote><p>I killed Ned in the first book and it shocked a lot of people. I killed Ned because everybody thinks he's the hero and that, sure, he's going to get into trouble, but then he'll somehow get out of it. The next predictable thing is to think his eldest son is going to rise up and avenge his father. And everybody is going to expect that. So immediately [killing Robb] became the next thing I had to do.</p></blockquote>
<p>There are increasingly flimsy justifications for the horrors of Game of Thrones. They motivate character A. Or they open up space for character B. But in the end it's obvious that it's really about providing the now-mandated quota of shock, and giving the writers some hipster cred for subverting fantasy tropes.</p>
<p>I did not enjoy watching Sansa Stark’s rape. I did not enjoy watching Shireen Baratheon burned at the stake.</p>
<p>If that's what you want to watch TV for, go for it. But I'm out.</p>
Why does it matter that Future is a monad?tag:fishbowl.pastiche.org,2015://1.18092015-06-02T19:36:33Z2015-06-02T19:36:33ZEither and Promises/Futures are useful and I’ll use them next time they’re appropriate. But outside Haskell does their monad-ness matter?Charles Millerhttp://fishbowl.pastiche.org
<h2>Seen on Twitter:</h2>
<blockquote><p>Either and Promises/Futures are useful and I’ll use them next time they’re appropriate. But outside Haskell does their monad-ness matter?</p></blockquote>
<p class="aside">All code below is written in some made-up Java-like syntax, and inevitably contains bugs/typos. I'm also saying "point/flatMap" instead of "pure/return/bind" because that's my audience. I also use "is a" with reckless abandon. Any correspondence with anything that might be either programmatically or mathematically useful is coincidental.</p>
<h2>What is a monad? A refresher.</h2>
<p>A monad is something that implements "point" and "flatMap" correctly.</p>
<p>I just made a mathematician scream in pain, but bear with me on this one. Most definitions of monads in programming start with the stuff they can do—sequence computations, thread state through a purely functional program, allow functional <span class="caps">IO.</span> This is like explaining the Rubik's Cube by working backwards from how to solve one.</p>
<p>A monad is something that implements "point" and "flatMap" correctly.</p>
<h2>So if this thing implements point and flatMap correctly, why do I care it's a monad?</h2>
<h3>Because "correctly" is defined by the monad laws.</h3>
<ol>
<li>If you put something in a monad with point, that's what comes out in flatMap. <b>point(a).flatMap(f) === f(a)</b></li>
<li>If you pass flatMap a function that just points the same value into another monad instance, nothing happens. <b>m.flatMap(a -> point(a)) === m</b></li>
<li>You can compose multiple flatMaps into a single function without changing their behaviour. <b>m.flatMap(f).flatMap(g) === m.flatMap(a -> f(a).flatMap(g))</b></li>
</ol>
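Outside the made-up syntax, <code>java.util.Optional</code> ships with a genuine point (<code>Optional.of</code>) and flatMap, so the laws can be spot-checked in real, compiling Java. (Optional famously bends the laws around nulls, but for ordinary values they hold — and spot-checks are an illustration, not a proof.)

```java
import java.util.Optional;
import java.util.function.Function;

// Spot-checking the three monad laws against java.util.Optional.
public class MonadLaws {
    static final Function<Integer, Optional<Integer>> F = a -> Optional.of(a + 1);
    static final Function<Integer, Optional<Integer>> G = a -> Optional.of(a * 2);

    // 1. point(a).flatMap(f) === f(a)
    static boolean leftIdentity(int a) {
        return Optional.of(a).flatMap(F).equals(F.apply(a));
    }

    // 2. m.flatMap(a -> point(a)) === m
    static boolean rightIdentity(Optional<Integer> m) {
        return m.flatMap(Optional::of).equals(m);
    }

    // 3. m.flatMap(f).flatMap(g) === m.flatMap(a -> f(a).flatMap(g))
    static boolean associativity(Optional<Integer> m) {
        return m.flatMap(F).flatMap(G)
                .equals(m.flatMap(a -> F.apply(a).flatMap(G)));
    }

    public static void main(String[] args) {
        System.out.println(leftIdentity(3)
                && rightIdentity(Optional.of(10))
                && associativity(Optional.of(10))); // prints "true"
    }
}
```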
<p>If you don't understand these laws, you don't understand what flatMap does. If you understand these laws, you <em>already understand what a monad is</em>. Saying "Foo implements flatMap correctly" is the same as saying "Foo is a monad", except you're using eighteen extra characters to avoid the five that scare you.</p>
<h3>Because being a monad gives you stuff for free.</h3>
<p>If you have something with a working point and flatMap (i.e. a monad), then you know that at least one correct implementation of map() is <b>map(f) = flatMap(a -> point(f(a)))</b>, because the monad laws don't allow that function to do anything else.</p>
<p>You also get join(), which flattens out nested monads: <b>join(m) = m.flatMap(a -> a)</b> will turn Some(Some(3)) into Some(3).</p>
<p>You get sequence(), which takes a list of monads of A, and returns you a monad of a list of A's: <b>sequence(l) = l.foldRight(point(List()))((m, ml) -> m.flatMap(x -> ml.flatMap(y -> point(x :: y))))</b> will turn [Future(x), Future(y)] into Future([x, y]).</p>
<p>And so on. </p>
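The "free" operations above can be written in real Java, against Optional only — without higher-kinded types each monad needs its own copy, which is exactly the boilerplate that languages like Haskell and Scala avoid:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Optional;
import java.util.function.Function;

// The operations you get for free from point and flatMap, specialised to Optional.
public class FreeOps {
    // map(f) = flatMap(a -> point(f(a)))
    static <A, B> Optional<B> map(Optional<A> m, Function<A, B> f) {
        return m.flatMap(a -> Optional.of(f.apply(a)));
    }

    // join(m) = m.flatMap(a -> a)
    static <A> Optional<A> join(Optional<Optional<A>> m) {
        return m.flatMap(a -> a);
    }

    // sequence(l): a right fold, consing each value together inside the monad
    static <A> Optional<List<A>> sequence(List<Optional<A>> l) {
        Optional<List<A>> acc = Optional.of(List.of());
        for (int i = l.size() - 1; i >= 0; i--) { // fold from the right
            Optional<A> m = l.get(i);
            Optional<List<A>> rest = acc;
            acc = m.flatMap(x -> rest.flatMap(xs -> Optional.of(cons(x, xs))));
        }
        return acc;
    }

    private static <A> List<A> cons(A x, List<A> xs) {
        List<A> out = new ArrayList<>();
        out.add(x);
        out.addAll(xs);
        return out;
    }
}
```

So <code>join(Optional.of(Optional.of(3)))</code> gives <code>Optional.of(3)</code>, and <code>sequence(List.of(Optional.of(1), Optional.of(2)))</code> gives <code>Optional.of(List.of(1, 2))</code> — collapsing to empty the moment any element is empty.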
<p>Knowing that Either is a monad means knowing that all the tools that work on a monad will work on Either. And when you learn that Future is a monad too, all the things you learned that worked on Either because it's a monad, you'll know will work on Future too.</p>
<h3>Because how do you know it implements flatMap correctly?</h3>
<p>If something has a flatMap() but doesn't obey the monad laws, developers no longer get the assurance that any of the things they'd normally do with flatMap() (like the functions above) will work.</p>
<p>There are plenty of law-breaking implementations of flatMap out there, possibly <em>because</em> people shy away from the M-word. Calling things what they are (is a monad, isn't a monad) gives us a vocabulary to explain why one of these things is not like the other. If you're implementing a flatMap() or its equivalent, you'd better understand what it means to be a monad or you'll be lying to the consumers of your <span class="caps">API.</span></p>
<h2>But Monad is an opaque term of art!</h2>
<p>So, kind of like "Scrum", "ORM" or "Thread"?</p>
<p>Or, for that matter, "Object"?</p>
<h2>In summary:</h2>
<p>As developers, we do a better job when we understand the abstractions we're working with, how they function, and how they can be reused in different contexts.</p>
<p>Think of the most obvious monads that have started showing up in every language<sup>1</sup> over the last few years: List, Future, Option, Either. They <em>feel</em> similar, but what do they all have in common? Option and Either <em>kind of</em> do similar things, but not really. An Option is kind of like a zero-or-one element list, but not really. And even though Option and Either are kind of similar, and Option and List are kind of similar, that doesn't make Either and List similar in the same way at all! And a Future, well, er…</p>
<p>The thing they have in common is <em>they're monads</em>.</p>
<hr />
<p><sup>1</sup> Well, most languages. After finding great branding success with <a href="http://www.golang-book.com/books/intro/10">Goroutines</a>, Go's developers realised they had to do everything possible to block any proposed enhancement of the type system that would allow the introduction of "Gonads".</p>