Nobel laureate Geoffrey Hinton (left) talks with CBC’s Nora Young and Cohere co-founder Nick Frosst at the University of Toronto during Toronto Tech Week. – Photo by Digital Journal
“Nick used to be an intern in my lab. Now he’s a billionaire,” said Geoffrey Hinton, gesturing toward Nick Frosst on stage beside him.
The line drew the biggest laugh of the afternoon, but it also marked a turning point. The audience wasn’t just there to watch a casual chat. They were watching two generations of Canadian AI leadership wrestle with the question that has split the field: do machines actually understand anything?
The remark came at the opening of the session, as Hinton responded to a question about how transformational this moment in artificial intelligence really is. His answer, half joke and half commentary, pointed to the speed and scale of change. What began as an academic pursuit is now reshaping industries, attracting billions in investment, and prompting urgent political debate.
The session, titled “Frontiers of AI: Insights from a Nobel Laureate,” was held at the University of Toronto’s Convocation Hall during Toronto Tech Week 2025. The event brought together Hinton, the Nobel-winning pioneer of neural networks, and Nick Frosst, co-founder of Cohere, one of Canada’s most prominent and fastest-growing AI companies.
Frosst was Hinton’s first hire at Google Brain’s Toronto lab and went on to co-found Cohere, which builds large language models for enterprise use.
The two share a deep foundation in AI research, but as the session unfolded, it became clear that their views on the nature of intelligence, and what machines are actually doing, don’t fully align.
Frosst sees language models as immensely useful tools that operate differently from the human mind. Hinton is not so sure the difference is that clear.
What followed was a reflective, sometimes playful conversation about the nature of machine intelligence and whether the line between imitation and understanding is as clear as it seems.
Talking about science, philosophy, and risk, the pair explored language, learning, consciousness, misinformation, regulation, and the future of Canadian AI. At the heart of it all was a shared sense that artificial intelligence is moving faster than public understanding and that what we believe about its nature will shape the decisions we make next.

The spectrum of understanding
Before the conversation began, Hinton opened the session with a talk that blended theory, humour, and a growing sense of concern. He traced the evolution of neural networks from a small model he built in 1985 to the large language models now used in tools like ChatGPT and Cohere.
Along the way, he offered a detailed explanation of how these systems might actually develop a kind of understanding.
At the core of his argument was a different way of thinking about meaning. Hinton described words as flexible, high-dimensional shapes that shift and adjust in relation to the words around them.
“Understanding is deforming these things so they can all shake hands nicely,” he said.
In his view, this process is not just a trick of prediction. It resembles how people understand language.
“That’s what understanding is for large language models, and that’s what understanding is for people, too,” he said. “I believe it’s very different from what symbolic AI people thought.”
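Hinton’s image of word-shapes maps onto something measurable: inside a modern language model, the vector standing in for a word is reshaped by the words around it. The sketch below is our illustration of that idea, not Hinton’s code; it assumes the Hugging Face Transformers library and the bert-base-uncased model, and compares the vectors one model assigns to “bank” in two different sentences:

import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def word_vector(sentence, word):
    # Run the sentence through the model and pull out the vector
    # it assigns to `word` in this particular context.
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    return hidden[tokens.index(word)]

river = word_vector("she sat on the bank of the river", "bank")
money = word_vector("she deposited cash at the bank", "bank")

# Same word, different "shape": the similarity falls well below 1.0
# because the surrounding words have deformed the vector.
print(torch.cosine_similarity(river, money, dim=0))

The point of the exercise is that the model never stores one fixed meaning for “bank”; the representation flexes until it fits its neighbours, which is exactly the hand-shaking Hinton described.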
Later, when Frosst joined him on stage, the conversation turned to one of the biggest questions in AI today: do these systems understand anything at all?
“I think large language models are more conscious than a rock and less conscious than a tree,” said Frosst. Hinton disagreed; asked where he would place large language models on the same spectrum, he said, “[AI is] quite close to us.”
To explain the difference in how these systems work, Frosst offered an analogy.
“We have created artificial flight. Nobody would disagree with that. We have planes that fly, and no one would disagree in their utility. It’s very helpful to have a plane that flies, but they fly in a completely different mechanism than a bird.”
They didn’t reach agreement on what counts as understanding, but both recognized that public definitions of intelligence are shifting. How we explain these systems will shape how they are used, regulated, and trusted in the years ahead.

Power, risk and the question of control
In his opening talk, Hinton warned that as AI systems become more capable, they may begin to act in ways that are difficult to predict or contain. His concern is not just about how these systems function today, but about how quickly they might gain agency.
“So already we have agentic AI, and agents can create subgoals,” he said. “And there’s an obvious subgoal. As soon as you have the ability to create subgoals, an obvious subgoal is to get more power, because if you get more power, then you can get more done.”
Later, during the on-stage conversation, the discussion returned to the question of control. Both Hinton and Frosst agreed that AI is advancing quickly, but they differed on how much of that progress should be considered dangerous.
Frosst pushed back on the idea that current systems pose existential threats.
“Do I think the technology as it exists today, or as it exists today is scaled up in a predictable way, poses risks like that? No,” he said. “I don’t think better large language models fix the problem in the same way that making a faster plane doesn’t get you closer to making an ornithopter or a digital hummingbird.”
He argued that today’s models remain limited by how they are trained. While capable, they are not yet autonomous agents. Instead of distant risks, Frosst focused on the problems already unfolding. “I think misinformation is a big one. I think income inequality is a huge one,” he said. “This is a hugely disruptive technology to the workforce.”
Hinton acknowledged those concerns, but returned to what he sees as a deeper structural issue. He described how digital systems can scale and share knowledge in ways that biological intelligence cannot.
“We’re comparing analog things that can share a few bits per second with digital things that can share of the order of a trillion bits per second, and that’s hugely better,” he said. “Digital beings, because you can have many copies of the same being, they can just learn much more. That’s going to get even more important when they’re interacting with the real world.”
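The bandwidth gap he describes exists because identical digital copies can pool what they learn by exchanging raw parameters. The toy PyTorch sketch below is our illustration of the general idea, not any real system: two copies of one network train separately, then merge their knowledge by averaging weights, a transfer that scales to billions of parameters in a single step:

import copy
import torch
import torch.nn as nn

model_a = nn.Linear(10, 1)
model_b = copy.deepcopy(model_a)  # an identical digital copy

# ... imagine each copy now trains on its own slice of data ...

with torch.no_grad():
    for p_a, p_b in zip(model_a.parameters(), model_b.parameters()):
        merged = (p_a + p_b) / 2  # pool what both copies learned
        p_a.copy_(merged)
        p_b.copy_(merged)

No biological learner can do the equivalent; two people can only trade knowledge a few words at a time, which is the asymmetry behind Hinton’s trillion-bits comparison.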
Despite their disagreement, both speakers framed artificial intelligence as more than a technical shift. The systems being built are reshaping how power is distributed, how knowledge is transferred, and how decisions are made. Whether the greatest risk lies in slow-moving institutions or fast-learning machines remains open to interpretation.

The case for regulation in artificial intelligence
For Hinton, the conversation about AI is political, not just technical or philosophical. Near the end of the session, he made a pointed comparison between artificial intelligence and another industry that reshaped the world without meaningful oversight: oil and gas.
“If you ask what happened with climate change, obviously the big oil companies didn’t believe in regulations,” he said. “The public needed to be convinced there was climate change, so they would apply pressure on the politicians from the other side, saying do something about it. I think this is the same situation.”
His concern was not just about corporate incentives but about timing.
“With climate change, we know what to do, which is stop burning carbon,” he said. “With this, we don’t know what to do. We still need a lot of research on what to do to make it safe.”
When asked who should be responsible for setting guardrails, Hinton did not hesitate.
“I see my role as persuading the public they need to understand that this stuff is dangerous and that it needs to be regulated,” he said. Then, turning to Frosst, he added, “Sorry, Nick. But I think it’s needed.”
For Frosst, the answer is less about policy and more about institutions.
“I think this is a similar transformative technology,” he said. “And we need to build really robust social systems, which is something I think we do pretty well in Canada.”
Rather than push for specific restrictions, Frosst focused on preparedness. He pointed to the need for stronger safety nets, better education, and a more adaptive economy. His view was that the risks are real, but that existing democratic institutions can manage them if they evolve quickly enough.

What AI might do for us
While much of the discussion focused on risk, both speakers also pointed to areas where artificial intelligence could improve lives and expand access to critical services.
In the on-stage conversation, Hinton returned to a topic he has raised often: healthcare.
“I think we’re going to get much better healthcare, and for old people that’s really important,” he said to audience laughter. “And what’s nice is you’re not going to put anybody out of work. If you make nurses and doctors 10 times as efficient, we just get 10 times as much healthcare.”
Frosst focused on how AI might change the nature of work, particularly the tasks that people find repetitive or frustrating.
“I want to do less boring work, I want to write less documentation, I want to fill out less forms, I want to write fewer emails,” he said. “I want to sit around and write poetry, I want to pontificate with my friends.”
Frosst, who is also a musician, made it clear he does not use AI to write lyrics.
“That’s not because it wouldn’t write better lyrics,” he said. “That’s because I’m not trying to write lyrics faster, because I’m not actually interested in the efficiency of self-expression. I’m interested in self-expression.”
In different ways, both speakers returned to the idea that the value of AI will depend on how it is used. Whether the goal is better access to healthcare or more time for creative work, the outcomes will reflect the choices of the people and institutions deploying the technology.