If you use Google to search the internet these days, there’s a chance you won’t be served a list of links or news articles. Instead, you may be met with a chatbot’s attempt to answer your query directly on the page. This shift, which the tech giant started rolling out last year, has been subtle but significant. By some measures, Google handles more than 90 per cent of the world’s search traffic. These changes mean far fewer Google users are clicking through to the original sources the search engine trawls, because the information they’re seeking is being delivered directly by AI.
The consequences for the internet as we know it will be profound. For news publishers, they have already been profound. Industry executives are quietly sounding the alarm about “Google Zero,” a term popularized by The Verge’s Nilay Patel to describe the moment when Google stops sending any referral traffic to third-party sites. In conversations with other publishers, I’m consistently hearing about the steep traffic declines they’re already experiencing, particularly over the past quarter.
The numbers bear this out. According to a new report from traffic-monitoring site Similarweb, since the launch of Google’s AI Overviews in May 2024, the proportion of news-related searches that don’t result in any click-throughs to publisher websites has jumped to nearly 69 per cent from 56 per cent. Unsurprisingly, organic traffic—the term for visitors who arrive at a website via search rather than via paid advertising or other such referral methods—has also fallen sharply, dropping from over 2.3 billion visits at its peak in mid-2024 to under 1.7 billion in May 2025.
I’m not an alarmist. I’ve built my career on embracing innovation and adapting to shifts in how people consume news. In fact, The Logic could be one outlet that ultimately benefits from this change, because our original journalism stands apart from the commoditized information that chatbots tend to regurgitate.
But the combination of a sudden, dramatic shift in user behaviour and a lack of meaningful intellectual property protections for original work is pushing journalism into dangerous territory. Without discovery and copyright, there is no viable business model for our craft. If we are reduced to being an API that feeds large language models (LLMs), no amount of licensing revenue will cover the costs of producing high-quality journalism that is written, edited, fact-checked and published by people whose jobs depend on earning and maintaining your trust.
You may wonder why publishers don’t simply block Google from crawling their sites to feed its AI tools. The tech giant has made it clear that if a publisher does so, it will stop indexing that publisher’s content for traditional search, rendering their site effectively invisible to the overwhelming majority of internet users. How can you sustain a news business, or build a new one, if prospective readers can’t find your work?
For publishers, trying to block AI crawlers is a game of whack-a-mole. Even if you successfully block them, your journalism can still be fed into LLMs via third-party platforms. For example, people regularly paste full, paywalled articles into Reddit threads. Reddit, which has a licensing deal with OpenAI, thus becomes a back door for scraped content, regardless of a publisher’s protections. The same goes for LinkedIn and other social media posts, and for newsletters and aggregators that quote excerpts of other publishers’ work. Once that content is in the wild, it becomes fuel for the models.
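To make that bind concrete, below is a minimal robots.txt sketch of the kind of opt-out a publisher might attempt. The user-agent tokens shown are real, published crawler identifiers, but honouring them is voluntary, the list of crawlers keeps growing, and none of it stops a reader from pasting an article into a Reddit thread.

```
# robots.txt: a sketch of an AI-crawler opt-out, not a guarantee.
# Compliance with these rules is voluntary; a crawler can ignore them.

# OpenAI's training crawler
User-agent: GPTBot
Disallow: /

# Common Crawl, a frequent source of LLM training data
User-agent: CCBot
Disallow: /

# Google's token for opting out of Gemini training
# (it does not remove a site from AI Overviews)
User-agent: Google-Extended
Disallow: /

# Perplexity's crawler
User-agent: PerplexityBot
Disallow: /

# Googlebot itself is deliberately left alone: blocking it would
# also pull the site out of traditional search results.
```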
To its credit, Google has made efforts to soften the blow by enhancing its Discover product, which has surpassed search as the largest driver of Google traffic for some publishers. And it is not acting in isolation. Its hand has been forced by competitors like OpenAI, Perplexity and Canada’s own Cohere, which are rapidly gaining market share in the AI arms race. Investors are demanding that Google fight to preserve its dominant position in the market.
Last week, a group of European and U.K. publishers asked regulators to step in, arguing that Google’s AI Overviews are taking their content without permission and harming their business. They want the ability to opt out of letting Google use their work in its AI results without disappearing from regular search.
Here in Canada, a group of news publishers, including CBC/Radio-Canada, Postmedia, Toronto Star publisher Torstar, The Globe and Mail and The Canadian Press, has sued OpenAI, claiming copyright infringement. OpenAI has signed licensing deals with publishers in other countries, but has yet to do so with any Canadian publisher. The Toronto Star, as part of a group of major publishers that includes Vox Media and Condé Nast, is also suing Cohere in the U.S. (The Logic is not a party to either lawsuit.)
The Online News Act, which at least created a mechanism for big online platforms to compensate Canadian publishers, wasn’t designed to regulate AI scraping or to reckon with how quickly the technology is reshaping the internet.
Litigation and licensing deals may buy time for a few organizations, but they are not scalable or sustainable strategies. What’s urgently needed is a fundamental rethinking of how journalism is valued, accessed and supported. This will require co-ordination between publishers, platforms, policymakers and, yes, the public. Historically, our industry has struggled with collective action. But we no longer have the luxury of delay. Jon Slade, the new CEO of the Financial Times (one of The Logic’s investors), has called for a “NATO for news.” I support that idea.
At a minimum, tech platforms should clearly attribute any information their AI tools draw from journalistic work. These tools should cite the original reporting as written, to avoid the hallucinations and misrepresentations that undermine journalistic accuracy. Transparent sourcing will not only encourage the public to trust the accuracy of these tools but also help sustain the ecosystem that produces reliable information in the first place.
I say all this as someone who uses AI tools every day. I’ve seen firsthand how AI can enhance productivity and accelerate research. The potential of this technology is real, and I believe it can play a role in producing great journalism if developed and deployed responsibly. The Logic’s own AI policy strikes a careful balance, harnessing the technology’s power without compromising our readers’ trust. We’ve also been reporting on this space for years, including in our book Superintelligence, which explores how Canada is shaping the global AI race.
Despite the scale of this transformation, I remain optimistic. Whether via symbols etched on cave walls, scrolls carried through Roman streets or telegraphs dispatched from the front lines of war, people have always needed reliable information so they can make decisions about how to feed their families, avoid danger, stay connected and prosper.
In my work with the late Clayton Christensen, we described this as journalism’s “job-to-be-done.” Technologies may change, but that human need persists. Reliable news is a core pillar of a functioning society.
We will find a way to get our journalism to you, even if it is by carrier pigeon. But ultimately, AI companies need to realize that the reliability and utility of their platforms will suffer if they lose access to accurate, timely information from trusted sources. If they don’t address this soon, they may end up killing their own golden goose.