My DMs this morning were filled with messages from journalists and friends asking me about a story that the increasingly dubious New York Times ran yesterday. This one:

I am not going to read Kevin’s column, and I don’t think you need to, either. What he wrote about coding was wildly naive, and he couldn’t be bothered to ask whether the system would even extend to Pac-Man, let alone debugging. His near-religious endorsement of the imminence of AGI kind of speaks for itself, as does his apparent aversion to consulting seriously with experts who might disagree with his Panglossian takes. His shtick is to write with awe, and to think uncritically; I have honestly had enough.
§
That said, we have seen this movie before. The last time I wrote about it, in June 2022, I called it nonsense on stilts.
For those who have forgotten the story, an AI safety engineer at Google, Blake Lemoine, felt that an LLM-based system that nobody remembers anymore, called LaMDA, had achieved “sentience”. The eternally sharp Abeba Birhane nailed it then with a tweet that she could equally repost today, in Roose’s honor:

The essence of my own argument then, back in June 2022, applies as much to today’s LLMs as to those of three years ago:
To be sentient is to be aware of yourself in the world; LaMDA simply isn’t. It’s just an illusion, in the grand history of ELIZA, a 1965 piece of software that pretended to be a therapist (managing to fool some humans into thinking it was human), and Eugene Goostman, a chatbot impersonating a wise-cracking 13-year-old boy that won a scaled-down version of the Turing Test. None of the software in either of those systems has survived in modern efforts at “artificial general intelligence”, and I am not sure that LaMDA and its cousins will play any important role in the future of AI, either. What these systems do, no more and no less, is to put together sequences of words, but without any coherent understanding of the world behind them, like foreign-language Scrabble players who use English words as point-scoring tools, without any clue about what they mean.
I am not saying that no software ever could connect its digital bits to the world, à la one reading of John Searle’s infamous Chinese Room thought experiment. Turn-by-turn navigation systems, for example, connect their bits to the world just fine.
Software like LaMDA simply doesn’t; it doesn’t even try to connect to the world at large, it just tries to be the best version of autocomplete it can be, by predicting what words best fit a given context. Roger Moore made this point beautifully a couple of weeks ago, critiquing systems like LaMDA that are known as “language models”, and making the point that they don’t understand language in the sense of relating sentences to the world, but merely relate sequences of words to one another.
Search and replace LaMDA with Claude, and it all still applies. I still don’t remotely see an argument that current models are sentient, nor any argument that scaling a model makes it more conscious, even if it can mimic more language from humans discussing consciousness. Claude does what LaMDA did, only better, because it has more data, but I don’t see any real argument that Claude is any more sentient than a web browser.
Erik Brynjolfsson is often more bullish on AI than I am, but his 2022 commentary on the whole LaMDA affair, too, could be reposted today without changing a word:

Sad that The New York Times fell for it.
§
You can look at what Anthropic is doing (evaluating the “welfare” of its models) from the standpoint of the philosophy of consciousness (asking very reasonable questions like what would count as consciousness? and how would we measure it in an animal or a machine?, and so on), but I think it is better to look at what is happening from the perspective of commerce. Anthropic is a business (which incidentally neglects to respect the rights of artists and writers whose work they nick). I suspect the real move here is simply, as it so often is, to hype the product: basically by saying, hey, look at how smart our product is, it’s so smart we need to give it rights.
Just wait ‘til you see our spreadsheets!
Gary Marcus is shaking his head.