
I have been writing here about AI-induced enshittification (extending a term originally coined by Cory Doctorow) regularly for the last couple of years, using that specific term for the first time almost exactly two years ago to the day:
The Imminent Enshittification of the Internet

My spelling checker doesn’t know the word enshittification, but I didn’t make it up (as far as I know Cory Doctorow did, thank you, Cory), and I am pretty sure you know what I mean by it, whether or not it happens to be in your training corpus.
Needless to say, David Simon’s quote above was prescient.
§
My efforts to enshrine Doctorow’s term (enshittification) in the AI context haven’t thus far been entirely successful, but a slightly more polite term, AI slop, does seem to be gaining traction. Here, for example, is part of what Axios had to say about AI slop in their newsletter this morning:

I have of course also worried frequently about what enshittification (and AI slop) might do to science:
That has, of course, only gotten worse. Much worse, as Science reported today.

§
The problem, of course, is that some significant fraction of that science will be wrong, because of the inevitable tendency of LLMs to hallucinate.
But science is far from alone. AI slop is everywhere. It’s in journalism, law, music (which Nick Cave once called “a grotesque mockery of what it is to be human”), politics, education, you name it. AI slop can now even follow you to the grave, as The Washington Post reported this morning.

[The Washington Post is a fine one to talk, given their new Ripple project.]
§
In an even earlier essay, in February 2023, before I had started to extend Doctorow’s term enshittification into AI, I argued that the spoiling of the internet with AI-generated BS might actually be Google’s biggest concern, bigger than OpenAI potentially taking a slice of search, concluding that
Cesspools of automatically-generated fake websites, rather than ChatGPT search, may ultimately come to be the single biggest threat that Google ever faces. After all, if users are left sifting through sewers full of useless misinformation, the value of search would go to zero—potentially killing the company.
For the company that invented Transformers—the major technical advance underlying the large language model revolution—that would be a strange irony indeed.
Clearly, we are not out of the woods yet. To the contrary: the AI industry’s magic sauce of “scaling” hasn’t made the enshittification situation better; it has made it worse.
Gary Marcus interrupted the first day of his holiday to bring you these words. He still prefers “enshittification” to “slop”, but the problem is both serious and ubiquitous no matter what you call it.