Bite-size thoughts about AI

by Charles Miller on January 8, 2023

Majikthise: By law the Quest for Ultimate Truth is quite clearly the inalienable prerogative of your working thinkers. Any bloody machine goes and actually finds it and we're straight out of a job aren't we? I mean what's the use of our sitting up all night saying there may…

Vroomfondel: Or may not be…

Majikthise: …or may not be, a God if this machine comes along next morning and gives you His telephone number?

Vroomfondel: We demand guaranteed rigidly defined areas of doubt and uncertainty.

– Douglas Adams, The Hitchhiker's Guide to the Galaxy, Fit the Fourth.

I claim no special knowledge or insight into the field of Artificial Intelligence (AI) or AI ethics. There are people out there doing real work in the field with far more informed and deeply thought-through views than mine.

I'm writing this because I'm interested in what I think about the subject, and writing it down helps me work out what that is. If anyone else is interested particularly in what Charles thinks, good for you.

You can't uninvent something. Once a thing exists, it's important to think how best we can fit it into the world.

Bias

It's trivial to demonstrate that the models behind current-generation AI starkly reflect the biases and discrimination that pervade the world that trained them. The only solution we've got so far is to ask the AI not to show it too obviously (see Security below). That doesn't remove the bias from the model, it just ensures the bias expresses itself in less overt ways, like that relative who knows not to say those things about foreigners at the family dinner.

We're obsessed with making AI that's smarter than us, but if we don't make it better than us, it's going to keep algorithm-washing the worst things about us.

Lies

A researcher who asked ChatGPT, today's trending Large Language Model (LLM), to cite its sources found it was happy to oblige, outputting plausible names and URLs for articles that never existed. A librarian reported a customer bringing in a list of books, suggested by GPT for research, none of which exist.

For a lark I asked ChatGPT to write a newspaper article about Australian Prime Minister Anthony Albanese attending a Pixies concert at the Sydney Opera House. This is a real event, but every detail described in the generated article, down to confected quotes from crowd-members, was invented by a robot.

Even "invented" is poor terminology. Machine Learning (ML) models don't invent or create. They put words next to other words, or place pixels next to other pixels, that statistically might follow whatever they were prompted with.

Like a face appearing in patterns of bark on a tree, we mistake that process for something human.

An LLM can't distinguish truth from lies because it doesn't know what lies are. It doesn't "know" anything. It is the sum of the statistical connections of phenomena in its training set. It can't be taught what lying is because it possesses no intent. It doesn't contain any capacity for understanding, and there is no standard for an untruth that exists in a corpus of training data disconnected from the world that produced it.

And you know what? If I were writing an article about a Pixies concert with a pressing deadline, I'd be tempted to leave those quotes in. They're not attributed to anyone who might get upset, and they sound like things people would have said.

To make a good demo, we built an incredibly efficient lying machine and set it loose on the world.

Turing

Turing proposed his Imitation Game in 1950. Unable to define 'intelligence' usefully enough to measure artificial intelligence, he proposed a proxy: the ability to convincingly hold a conversation.

We have comprehensively proved that the Turing Test is outdated and no longer useful: chatbots are plausible enough to fool people, but demonstrably not intelligent. At the same time, Turing's deeper truth, that people will believe something is intelligent if it can convincingly mimic human conversation, has been proven.

The Zombie Problem

A chess bot doesn't calculate its moves the way humans do, but it is still fundamentally playing chess. Why isn't a facsimile of conversation the same as conversation? A facsimile of intelligence the same as intelligence?

Chess has a clearly defined victory condition. So bootstrapped, AlphaZero could play itself and iterate on that experience. There is no "win condition" for general intelligence that we yet know how to encode in a model. Midjourney can't teach itself what hands look like unless we give it success criteria that already encode what hands are.
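
To make the distinction concrete, here's a toy self-play loop in Python for a much simpler game than chess: single-pile Nim. Everything here is invented for illustration and bears no resemblance to AlphaZero's actual training machinery; the point is only where the reward signal comes from.

```python
import random
from collections import defaultdict

# Toy self-play learner for single-pile Nim (take 1-3 sticks per turn;
# whoever takes the last stick wins). The game itself hands the learner a
# win/lose signal at the end of every playthrough, so it can improve with
# no human in the loop.

policy = defaultdict(lambda: [1.0, 1.0, 1.0])  # pile size -> weights for taking 1, 2 or 3

def choose(pile):
    weights = [w if take <= pile else 0.0
               for take, w in zip((1, 2, 3), policy[pile])]
    return random.choices((1, 2, 3), weights=weights)[0]

def self_play(pile=10):
    history = {0: [], 1: []}   # the moves each player made
    player, winner = 0, None
    while pile > 0:
        take = choose(pile)
        history[player].append((pile, take))
        pile -= take
        if pile == 0:
            winner = player    # the game, not a human, declares the result
        player = 1 - player
    return history, winner

for _ in range(2000):
    history, winner = self_play()
    for player, moves in history.items():
        nudge = 1.1 if player == winner else 0.9   # crude reward/penalty
        for pile, take in moves:
            policy[pile][take - 1] *= nudge

# With enough games this crude scheme tends toward the known strategy of
# leaving your opponent a multiple of four sticks. The interesting part isn't
# the strategy, it's that `winner` came for free from the rules of the game.
# There is no equivalent signal for "that answer was intelligent" or
# "that hand has the right number of fingers".
print({pile: max((1, 2, 3), key=lambda t: policy[pile][t - 1])
       for pile in range(1, 11)})
```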

Reinforcement

We're reaching a turning point where ML tools are common enough that the next generation of models will be trained, to a significant degree, on the output of the previous generation. This trend will only increase as people use learning models to produce graphics or text without necessarily labeling it as such.

This can be positive reinforcement (people will keep more of the 'good' output than the 'bad'), but errors and glitches generated by the models will also be re-ingested, reproduced and re-ingested again. So look to AI "learning" increasingly weird (or bad; see Bias) things over time.

A friend on Twitter compared this to how we have to make certain radiation-monitoring devices from scavenged pre-nuclear-era metals.
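
A back-of-the-envelope sketch of that feedback loop, with every number invented purely for illustration: assume each new training corpus blends fresh human writing with a growing share of model output, and that each model roughly reproduces its corpus's error rate plus a few glitches of its own.

```python
# Toy model of training-data feedback. All numbers are made up; the shape
# of the curve, not the values, is the point.

human_error = 0.02      # fraction of plain-wrong "facts" in human-written text
model_share = 0.0       # fraction of the corpus that is model output
model_error = 0.0       # error rate of the current model's output

for generation in range(1, 9):
    # Each generation trains on a blend of human text and earlier model output.
    corpus_error = (1 - model_share) * human_error + model_share * model_error
    # The new model roughly reproduces its corpus's error rate, plus its own glitches.
    model_error = corpus_error + 0.01
    # Model output makes up a bigger slice of the next corpus each time.
    model_share = min(0.8, model_share + 0.1)
    print(f"generation {generation}: ~{model_error:.1%} of training 'facts' are wrong")
```

The exact values are meaningless; the point is that the error rate only has one direction to go once the loop closes.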

Stagnation

How will the AI learn to write about new things, when the new things are being written by an AI?

Security

We've come a long way since Tay, the Microsoft chatbot that was taken offline after trolls trained it to spew racism, and at the same time, not very far. This generation of machine-learning image generators and chatbots is released with safeguards that theoretically prevent them from saying things or emitting images that might reflect badly on their creators.

The safeguards don't erase the capacity to generate those images or say those things from the tool's underlying model. Often the safeguards are themselves built on top of that model: either extra input that primes the model not to do the bad thing, or a filter that tries to detect the bad thing in the output before it reaches a human.
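
In rough sketch form, the pattern looks something like this. Every name here is hypothetical, the general shape of the thing rather than anyone's actual moderation stack:

```python
# Hypothetical sketch of bolt-on safeguards. Both layers live outside the
# model, which keeps its full capacity to produce the unwanted output.

SYSTEM_PRIMER = (
    "You are a helpful assistant. Do not produce hateful, violent, "
    "or otherwise embarrassing content.\n\n"
)

BLOCKLIST = ("build a bomb", "some slur")  # stand-in for a real output classifier

class EchoModel:
    """Stand-in for an actual LLM; it just parrots its prompt back."""
    def generate(self, prompt: str) -> str:
        return "Generated text conditioned on: " + prompt

def guarded_reply(model, user_prompt: str) -> str:
    # Safeguard 1: prime the model with instructions it is *asked* to follow.
    raw = model.generate(SYSTEM_PRIMER + "User: " + user_prompt)
    # Safeguard 2: filter the output before it reaches a human.
    if any(bad in raw.lower() for bad in BLOCKLIST):
        return "I'm sorry, I can't help with that."
    return raw

print(guarded_reply(EchoModel(), "Tell me a story about a concert."))
```

Note that the "system primer" is nothing more than extra text fed into the same model alongside the user's prompt.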

Frequently you can bypass these constraints just by telling the model, politely, not to apply them. The Laws of Robotics this is not.

This raises gnarly questions about how you secure a system that is a black box, where the only way to stop it doing the things you don't want is to ask politely and hope the instruction holds up against adversarial inputs.

Copyright

Arguing about how our current understanding of intellectual property intersects with the training and use of machine learning models often gets mired in technicalities and misses the point.

Even if these models did learn the way humans do, and what they produce were analogous to a human drawing on their experience to create something new (neither is true), the fact that the product of this learning, and its capacity to create, can be owned, bought and sold in perpetuity by commercial enterprises makes it entirely different to human creativity.

My opinion is that the models themselves should be considered derivative works of all the data that went into training them, and that training a model should require the specific affirmative consent of the creators of that data (i.e. you can't just upload your code to github and find out years later that some obscure part of the ToS allowed them to train a code-writing bot with it).

Sure, this makes life harder for people building big general-purpose models. But they're usually either big companies, or small companies funded by big Venture Capital firms, looking to profit from vacuuming up public and private data (popular text and graphics AIs ChatGPT and Midjourney, for example, are both owned by a VC-backed startup currently valued at $10 billion). So make the bastards pay for it.

Overpromising

When a technology is in the early breakthrough stage, you can draw an ever-rising line through the innovation and see a point on the horizon where "magic trick" turns into "actually magic". That point is very rarely reached. Instead, we hit the point where the problem gets really hard again, and the curve flattens.

This is why fully autonomous cars feel less plausible today than they did a decade ago. Or why the learning model that beat the world's best at chess and Go could only manage "kind of good" at StarCraft.

When thinking about the future of a technology, it's good to separate the plausible (that it will get better at things it has demonstrated it is good at) from the still-speculative (that it will develop new capabilities it has not yet shown).

A cadre of fabulists imagine a future where the exponential growth of artificial intelligence creates an omnipotent, omniscient AI. When you look closer, most of it is a combination of nerd wish-fulfilment and libertarian fantasy.

A far more likely future is the one where even if general self-reinforcing AI is possible, all we discover are the depressing limits of what we call intelligence.

Work

Machine Learning is already a direct threat to artists, especially those who make a modest income doing commissions. Even if we solve the problem of obtaining consent from and compensating those whose works trained the models (See: Copyright), this isn't going to change.

Text models are starting to be able to imitate things we currently pay people to do, from explaining complex subjects to writing software. They're often wrong (see: Lies), but they are plausible and engaging. They will get better, probably more at being plausible than at being right (See: Overpromising).

If you're involved in creative work, it seems very likely that even with just incremental improvements to the existing technology, the cheapest way to reach some result (and we're generally not paid to do it the expensive way) will increasingly be to seek the assistance of an AI.

What does software development look like if you can type "Write a program that does [thing]" into a textbox, for some very wide range of values for [thing], and your job is ensuring it does that thing properly? Do we finally stop inventing new programming languages and frameworks because getting enough humans to write enough beginner-level code to seed the model isn't a productive use of our time?

Spam

The public Internet is already full of cheaply written text devised to squat search engine terms and monetize clicks. Now the text can be created for cents per page by a computer, instead of cents per word by a human.

User-generated content sites like Wikipedia and Stack Overflow are already having to fight the appeal of easy-to-generate, low-effort, plausible-but-inaccurate "user-generated" content.

Expect the same to happen to image search results, but at least that will be an improvement on all those registration-walled links to Pinterest.

Endnote

This post was written without the assistance of ML. In retrospect, I squandered a great opportunity to be meta.

Previously: Usenet Spam: a Slice of History