
Anthropic’s CEO Dario Amodei has learned from the masters that the way to get press is to say outrageous things. Podcasters are particularly vulnerable. (The New York Times apparently loves this kind of thing too, but I’ll get to that later.)
A lot of prominent podcasters eat these magical pronouncements up; they want to have been there, in the front row, when AGI happens, and they rarely utter a skeptical word.
Instead, they simply repeat these prognostications (which are really optimistic dates largely pulled out of hats) as fact. Here for example is the popular Dwarkesh Patel, platforming a wildly optimistic claim from Amodei, in this case that AGI would come in 2-3 years.
Dwarkesh: When you add all this together, like your estimate of when we get something kind of human level, what does that look like?
Dario Amodei: I mean, again, it depends on the thresholds. You know, in terms of someone looks at these, the model, and even if you talk to it for an hour or so, it’s basically like a generally well-educated human. That could be not very far away at all, I think.
That could happen in two or three years. If I look at, again, I think the main thing that would stop it would be if we hit certain, and we have internal tests for safety thresholds and stuff like that. If a company or the industry decides to slow down or we’re able to get the government institute restrictions that moderate the rate of progress for safety reasons, that would be the main reason it wouldn’t happen.
Sounds plausible to someone outside the field, and Dario lays it on thick. Of course we could get AGI soon, he says. In fact, he implies, the only thing that would stop us from getting there would be pesky concerns about AI safety. The tech itself, he implies, is already covered.
Dwarkesh loved this bit so much that he opened the episode with it, the 2-3 year prediction. Didn’t push back at all. Instead he framed Dario as a seer (“You have been one of the very few people who has seen scaling coming for years, more than five years”).
Just one problem: that prediction was made in August of 2023.

§
Nowadays, Dario has quietly slipped his predictions back. AGI isn’t here, so now he mostly talks 2026-2027 rather than 2025-2026. Anything he might have said about 2025 has been memory-holed, with the aid of the press. AGI is obviously not going to be here this year, so Dario’s predictions go quietly backwards, in a delaying tactic he may well have learned from Elon Musk, who has been doing that year in and year out for a decade.

§
A skeptic, especially one who had seen this movie before, might reasonably say, “hey, is there any chance that Dario is just trying to drive up the valuation of his company by always hinting that AGI is 2-3 years away?”
But that’s not how hypey podcasters, masquerading as real journalists, roll. Certainly not the Hard Fork lads who work for The New York Times (Kevin Roose and Casey Newton, the latter of whom is dating one of Anthropic’s employees). On their recent episode that again platforms Amodei, they never mentioned the earlier prediction. (Newton promised not to be compromised by his apparent conflict of interest, and the Times, which is starting to read like a PR rag when it comes to AI, took him at his word, letting him interview his boyfriend’s CEO just weeks after the conflict was revealed.)
Indeed, Roose pretty much repeated Amodei’s latest 2-3 year guess as fact (“powerful AGI is a couple years away”). Neither interviewer ever asked about the earlier prediction that had already been slipped back by a year, or the long history of overoptimistic predictions in the field from the 1950s through Elon Musk. Who wants to listen to a podcast where a CEO is held to account for what they have said? The days of Roger & Me are over.
(The New York Times’ Ezra Klein also recently largely took industry AGI predictions at face value, as discussed here.)
Any one column or podcast in any particular direction would be fine, but it’s started to be a very consistent trend at the Times.
§
It gets more embarrassing. Yesterday I read something that made my jaw drop. I had already written here a couple days ago about Amodei’s recent outlandish claim that nearly all code would soon be written by AI. I stressed, for example, that there would likely be an ongoing need for humans to debug and maintain systems. And I pointed out that even something as easy as Pac-Man was pretty hard to get right with current tools. As I explained, Amodei’s coding claims seemed outright fanciful to me.
Predictably, the industry-leaning Hard Fork crew accepted them hook, line, and sinker, never questioning them. (And their editors seemed not to care.)
But there’s another easy piece of diligence Roose and Newton could have done that they skipped. They could have taken a trip over to Anthropic’s job listings (as Newton’s boyfriend did not so long ago, when he applied for and got a job there).
Here (courtesy of software developer Catalin Pit) is what Newton and Roose would have found — had they bothered: loads of listings for people who code, such as software engineers, research engineers, and senior security engineers (not to mention a pile of managers to oversee them).
For a man who allegedly believes human coders are basically done, Amodei sure has a lot of openings for coders.
Anthropic needs to answer some hard questions here.
But, also: podcasters, if they want to grow up to be real journalists, should start asking some hard questions, too.
§
Update: Just after I finished drafting this essay, The New York Times published another “AGI is imminent” essay, the second in about a week, this time by Kevin Roose. At this point I have lost respect for the Times. Here is why:
The new essay strayed into outright zealotry, filled with religious-like statements of faith prefaced with the phrase “I believe” (“I believe that over the next decade, powerful A.I. will generate trillions of dollars in economic value”, etc.), maligning all skeptics as immoral (for “giving people a false sense of security”) and apparently not consulting with any. (Also not considered were any of Roose’s own previous serious errors.)
Shame on The New York Times for systematically presenting only one side of this story, rarely if ever mentioning or going into detail on the views of skeptics such as Emily Bender, Abeba Birhane, Meredith Broussard, Chomba Bupe, Joanna Bryson, Kate Crawford, Missy Cummings, Ernest Davis, Subbarao Kambhampati, Sasha Luccioni, Brian Merchant, Melanie Mitchell, Margaret Mitchell, Safiya Noble, Joshua Tenenbaum, Carissa Veliz, Ed Zitron, or myself.
This is especially embarrassing given a just-published survey of the leading AI society, AAAI, which found that more than 84% of respondents (trained, accomplished academics rather than podcasters and influencers) said that neural networks alone are not enough to get us to AGI (see also New Scientist’s “AI scientists are sceptical that modern models will lead to AGI”).
You absolutely would not know this reading the opinions expressed by the Times.
Cheerier Postscript: Hilarious I-couldn’t-make-this-up-if-I-tried coda from Ars Technica, re: some good advice from the AI coding tool Cursor AI (which is partly powered by Anthropic’s AI).
