AI’s Uneven Impact: A Sober Assessment from Anthropic’s Jack Clark
In an era where tech leaders routinely predict that artificial general intelligence (AGI) will deliver economic growth rates of 20-30%, Jack Clark stands apart with his more measured outlook. The co-founder of AI company Anthropic brings a uniquely humanistic perspective to Silicon Valley, informed by his background in journalism and the humanities.
“I think my base case on all of this is 3% [economic growth] and my bull case is something like 5%,” Clark explained in a recent interview with economist Tyler Cowen on the “Conversations with Tyler” podcast. His conservative estimates stem from firsthand experience repeatedly underestimating AI’s capabilities while simultaneously recognizing the physical world’s resistance to digital transformation.
“We will enter into a world where there will be an incredibly fast-moving high growth part of the economy, but it is a relatively small part of the economy,” Clark said. “Then there are large parts of the economy like healthcare or other things which are naturally slow-moving and maybe slow in adoption of this.”
Clark’s skepticism about AI delivering massive economic growth contrasts sharply with the techno-optimism prevalent in Silicon Valley. He points to self-driving cars as a cautionary tale: a technology that showed promising initial progress before slowing to a “grinding slow pace” as it scaled up.
“Every time the AI community has tried to cross the chasm from the digital world to the real world, they’ve run into 10,000 problems that they thought were paper cuts but in sum add up to you losing all the blood in your body,” Clark observed.
When considering which sectors AGI will affect last, Clark believes the most artisanal elements of trades will remain human-dominated longest. “I think within [trades] you get certain high status, high skill parts where people want to use a certain tradesman not just because of their skill but because of their notoriety and sometimes an aesthetic quality.”
The healthcare sector will likely present the strongest legal obstacles to AI adoption. “Big chunks of healthcare [will resist] because it’s bound up in certain things around how we handle personal data,” Clark noted. “All of those standards are probably going to need to be changed in some form to make them amenable to being used by AI.”
On the political front, Clark anticipates resistance to AI-driven job displacement. “I think there is a high chance for a political movement to arrive which tries to freeze a load of human jobs in bureaucratic amber as a way to deal with the political issues posed by this incredibly powerful technology.”
Looking toward the future, Clark envisions a world where AI agents become increasingly independent. However, this creates complex policy and legal questions about accountability. “If you create agents that are wholly independent from people but are making decisions that affect people, you’ve introduced a really difficult problem for the policy and legal system to deal with,” he said.
Despite his measured economic outlook, Clark remains fascinated by AI’s potential to transform our understanding of consciousness and even enable communication with other species. When asked when humans might be able to speak directly with dolphins through AI translation, he responded: “I think 2030 or sooner. I think that one’s coming soon.”
As AI continues to reshape society, Clark’s humanities-informed perspective offers a valuable counterbalance to the unbridled techno-optimism that often dominates discussions about artificial intelligence’s future impact.