Anthropic co-founder and Head of Policy Jack Clark offers a look at how Silicon Valley AI leaders are thinking about the future of AI.
To explain the moment AI systems develop situational awareness, Clark turns to an unusual metaphor: “It is as if you are making hammers in a hammer factory and one day the hammer that comes off the line says, ‘I am a hammer, how interesting!’ This is very unusual!” For Clark, such moments emerge when compute, data, and model size are scaled past a certain threshold, revealing abilities no one intentionally built in.
“This technology really is more akin to something grown than something made,” he says. “You combine the right initial conditions and you stick a scaffold in the ground and out grows something of complexity you could not have possibly hoped to design yourself.”
Clark doesn’t mince words about the stakes: “We are growing extremely powerful systems that we do not fully understand.” He adds, “And just to raise the stakes, in this game, you are guaranteed to lose if you believe the creature isn’t real. Your only chance of winning is seeing it for what it is.”
Of course, Clark’s view is far from universally accepted. Many in the field are skeptical that technologies like large language models could ever develop anything close to real self-awareness. While Clark warns about the emergence of abilities no one intended or fully understands, plenty of researchers argue that these systems, no matter how complex, are still just highly advanced pattern matchers, not conscious entities.
Before Anthropic, Clark was Policy Director at OpenAI. Earlier, he covered distributed systems, quantum computing, and AI as a technology journalist for outlets like Bloomberg Businessweek and The Register.