Any time I write anything that mentions AI it’s inevitable that someone will object to the very usage of the term.
Strawman: “Don’t call it AI! It’s not actually intelligent—it’s just spicy autocomplete.”
— Simon Willison: It’s OK to call it Artificial Intelligence
"What is AI, really?" is something I've been asking myself a lot over the last year.
Technologies like ChatGPT and Midjourney get the AI label, but we understand they're not really intelligent. Boosters say they are a step on the way to the holy grail of general intelligence. Detractors dismiss them as statistical trickery: spicy autocomplete. Both agree they're not (and here the boosters will add "yet") real AI.
Meanwhile a clear definition of that grail—Artificial General Intelligence (AGI)—remains elusive, hand-wavey, and "we'll know it when we see it".
I think it might have been Ed Zitron who said this. I wish I could remember, but the Internet has decayed to the point where finding out where I saw something less than a week ago is almost impossible. Which is itself distressing for many reasons. But to paraphrase.
Artificial Intelligence is an umbrella term for those things a human brain can do, but a computer can't. When we find a way to make a computer do one of those things, it stops being AI.
This definition took about a week to blow my mind.
Writing a textual description of an image (or its inverse: creating an image that matches a textual description) used to be the sole domain of the human brain. Research into building a computer with the same capabilities was, therefore, the province of AI.
Decades later, the combination of some truly amazing theoretical research and the availability of unprecedented amounts of computing power gives us statistical models that can describe an image, or generate an image from a description. But they're not really intelligent.
But why are they not really intelligent?
Because while they picked one stone, admittedly one of the larger ones, out of the big bucket labelled "things a brain can do but a computer can't", all the other stones are still there. These models can recognise a cat in a photo, but they don't understand what a cat is, because "what a cat is" is a concept born of how we think; a reflection of our unique cognition.
This is why AGI is so hard to define. It's everything that remains in the bucket.
It's also why technologies that originate as AI are doomed to graduate and become not-AI. If you've taken this thing out of the bucket, but the bucket isn't empty, that thing you removed can't possibly be "intelligence". Intelligence must still be in the bucket somewhere.
When you ask a person to multiply six by seven, chances are their brain is doing something analogous to, but still very different from, an LLM. Some combination of learning a times table and reading The Hitchhiker's Guide to the Galaxy embedded the concept "forty-two" in their mind, and they recall it as the response connected most strongly to that question.
When you ask the same person to add 194 and 83, that associative way of thinking doesn't work, and their brain falls back on an algorithm for addition learned in junior school. Or it inspires them to pick up a calculator. Computers, after all, have been better at arithmetic than human beings for about as long as they've existed.
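To make the two modes concrete, here's a deliberately crude sketch in Python: a lookup table of memorised facts standing in for associative recall, with the junior-school column-addition algorithm as the fallback. It's a caricature of the distinction, not a model of how brains or LLMs actually work, and every name in it is invented for illustration.

```python
# A caricature of the two modes of thought described above: associative
# recall of memorised facts, with a learned algorithm as the fallback.
# Not a model of brains or LLMs; purely illustrative.

# Rote-memorised "times table" facts.
MEMORISED = {
    ("x", 6, 7): 42,
    ("x", 7, 8): 56,
}

def column_addition(a: int, b: int) -> int:
    """The junior-school algorithm: add digit by digit, carrying tens."""
    digits_a = [int(d) for d in str(a)][::-1]  # least significant digit first
    digits_b = [int(d) for d in str(b)][::-1]
    result, carry = [], 0
    for i in range(max(len(digits_a), len(digits_b))):
        da = digits_a[i] if i < len(digits_a) else 0
        db = digits_b[i] if i < len(digits_b) else 0
        carry, digit = divmod(da + db + carry, 10)
        result.append(digit)
    if carry:
        result.append(carry)
    return int("".join(str(d) for d in reversed(result)))

def answer(op: str, a: int, b: int) -> int:
    if (op, a, b) in MEMORISED:   # first mode: instant associative recall
        return MEMORISED[(op, a, b)]
    if op == "+":                 # second mode: fall back on an algorithm
        return column_addition(a, b)
    raise NotImplementedError("no memorised fact, no algorithm: time for a calculator")

print(answer("x", 6, 7))     # 42, recalled rather than computed
print(answer("+", 194, 83))  # 277, computed digit by digit
```

The interesting part is the final branch: the toy knows when it has neither a memorised fact nor an algorithm, which is precisely the kind of self-awareness LLMs lack.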
This is why it's so jarring that we somehow managed to build a computer that is bad at maths and logic puzzles, inverting the science fiction trope of the robot that can calculate your probability of dying in an asteroid field to ten decimal places but struggles to write poetry. It was predictable, though. LLMs are notoriously bad at knowing when they don't know something, and even if they could understand the limits of their own understanding, they have nothing else, no other mode of thought, to fall back on when they reach them.
If I could make one usability change to modern chatbots it would be to train them to express uncertainty about their own output: to never say something "is" when they could instead say it might be.
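For what it's worth, the raw material for that change already exists: many model-serving APIs can return per-token log-probabilities alongside the generated text. Here's a minimal sketch of how a chat frontend might use them, with the caveat that the function, the threshold, and the sample numbers are all invented for illustration, and that token probability is at best a rough proxy for factual confidence.

```python
import math

def hedged(tokens: list[tuple[str, float]], threshold: float = 0.5) -> str:
    """tokens: (text, logprob) pairs for a generated answer.

    If the model's least-confident token falls below the probability
    threshold, frame the answer as a possibility rather than a fact.
    """
    text = "".join(t for t, _ in tokens)
    weakest = min(math.exp(lp) for _, lp in tokens)
    if weakest < threshold:
        return f"This might be wrong: {text}"
    return text

# Made-up tokens and log-probabilities, for illustration only.
confident = [("Paris ", -0.02), ("is the capital ", -0.05), ("of France.", -0.01)]
shaky     = [("The answer ", -0.10), ("is ", -0.20), ("forty-one.", -1.40)]

print(hedged(confident))  # stated plainly
print(hedged(shaky))      # exp(-1.4) is about 0.25, so the reply gets hedged
```

A single low-probability token is a clumsy trigger, of course; the point is only that the signal is there, and today's chatbots mostly throw it away.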
This is why I'm sceptical that continuing to improve our technology for the statistical association of abstract concepts will somehow leapfrog us to "real" artificial intelligence. There are just so many things left in the bucket. Scepticism and the concept of truth. Creativity. Concrete reasoning. Abstraction. Connection to a physical existence.
And empathy. Good luck finding a tech entrepreneur or venture capitalist who will fund that one.