Microsoft CTO Kevin Scott on the birth of the agentic web

By Advanced AI Bot | May 19, 2025

Today, I’m talking with Kevin Scott, the chief technology officer of Microsoft and one of the company’s AI leaders. This is Kevin’s third time on Decoder, and he’s one of my favorite guests. He thinks a lot about the relationship between technology, art, and culture, and he’s unusually willing to dive into the weeds of it all, which obviously I can never resist.

Kevin joined the show today during Microsoft’s Build conference to talk about the future of search — the company just announced an open-source tool for websites to integrate AI-powered natural language search with just a little bit of effort, in a way that lets those site owners actually run whatever models they want and keep control of their data. It’s neat stuff — I saw some demos before Kevin and I chatted, and the improvement over the bad local search on most sites was obvious.

But the goal here isn’t just to improve the local search feature on a bunch of disconnected websites. It’s to rethink how search even works in a world where AI is more broadly distributed. Think about it this way: right now, building a search engine requires you to go and index all the pages on the internet — and then keep that index constantly updated, which is an enormous recurring cost.

That cost is why there are really only two main search indexes: Google’s, obviously, and Microsoft’s Bing index, which powers most of the alternative search engines like DuckDuckGo that you might be familiar with. Those centralized search indexes are also the underpinnings of our current AI search tools, like the search built into ChatGPT and Bing, or Google’s AI Overviews.

But if all those websites suddenly have their own powerful natural language search tools, well, you might not need that big central index. All you really need is a standard that lets you ask a bunch of websites if they can answer your questions, which would dramatically bring down the cost of search overall and maybe let more competitors into the market. So Microsoft’s local search project is built on such a standard — it’s called Model Context Protocol, or MCP, and it’s what allows AI agents to interact with databases and services in a controlled way — not literally clicking around websites, which is what a lot of agentic products do right now.

MCP was initially developed by Anthropic. Now, the rest of the industry, including Google, is starting to support it. There’s a long way to go, but the first step is just enabling MCP on more sites, which is why Microsoft is making MCP-powered local search cheap and easy to implement.
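
To make that concrete, here is a rough sketch, from the agent’s side, of what “asking a bunch of websites directly” could look like. Everything in it is illustrative: the endpoints, the request shape, and the bare JSON POST are stand-ins, and a real MCP client would speak the protocol’s actual JSON-RPC framing rather than this simplified call.

```python
import json
import urllib.request

# Hypothetical sites that expose a natural-language "ask" endpoint. A real MCP
# server would describe its tools and speak JSON-RPC; this only sketches the
# fan-out idea of querying sites directly instead of one central index.
SITES = [
    "https://example-travel-site.test/ask",
    "https://example-recipe-site.test/ask",
]


def ask_site(endpoint: str, question: str, timeout: float = 5.0) -> dict:
    """POST a plain-language question to one site's (hypothetical) answer endpoint."""
    payload = json.dumps({"question": question}).encode("utf-8")
    request = urllib.request.Request(
        endpoint,
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(request, timeout=timeout) as response:
        return json.loads(response.read().decode("utf-8"))


def fan_out(question: str) -> list[dict]:
    """Ask every participating site; no crawler and no centrally maintained index."""
    answers = []
    for endpoint in SITES:
        try:
            answers.append({"site": endpoint, "answer": ask_site(endpoint, question)})
        except OSError:
            # An unreachable site just means one fewer answer, not a broken search.
            continue
    return answers


if __name__ == "__main__":
    print(fan_out("pet-friendly hotels near Lisbon under 150 euros a night"))
```

The structural point is the one above: the recurring cost of a central, constantly refreshed index gets replaced by many small, site-owned answer endpoints.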

It’s very cool to think about what the future of MCP-powered agentic search might look like. Maybe there will be more competition, or maybe websites will get more traffic and some of these businesses will be more sustainable. But there are some obvious complexities — starting with why anyone would want agents to use their services in this way and how anyone intends to make money doing it. I asked Kevin about this, and we spent some time thinking about how the future of the web stays sustainable for all the people actually making content out there.

I could talk about that forever, but since I had the time with Kevin I also made sure to ask about Microsoft’s unique and often confusing relationship with OpenAI, how he’s feeling about AI’s capabilities scaling up over time, and whether his thoughts on AI as a creative tool have been evolving as lawsuits and controversies play out in the creative community. Kevin is an author himself, after all — he wrote a book about AI back in 2020 with a foreword by a relative unknown at the time named JD Vance.

So I asked Kevin: how would he feel if someone stood outside a bookstore and just summarized his book for anyone who seemed interested in buying it. I think you’ll find that back and forth pretty interesting.

Okay: Microsoft CTO Kevin Scott. Here we go.

This interview has been lightly edited for length and clarity.

Kevin Scott, you are the chief technology officer at Microsoft. Welcome back to Decoder.

Thank you so much for having me.

If you count our early pilots of Decoder on The Vergecast feed, you are, I believe, our first four-time guest.

And you’re always one of my favorite conversations, so I’m excited for this one. Microsoft has some news about search and the web, which is just in the strike zone for me on the show because I see that changing so enormously. I want to talk to you about what we’ve learned now that AI has entered its second era here, and I want to talk to you about where it’s going, just the usual stuff, small potatoes.

Let’s start with the news. Microsoft, just today as people are listening to this show, announced onstage at the Build conference a new approach to searching websites locally. Tell us what’s going on there.

I would actually frame it a little bit less as search. And what I’ve been spending a bunch of time thinking about is we have this hypothesis, and I think it keeps getting borne out, that you’re going to have agentic software enabled by all of these powerful new AI models that have been built over the past handful of years, and agents need to be able to do stuff on behalf of users. We’re less constrained, and you sort of called it AI’s second act, which is like the way we talk about it internally as we describe where we are right now as the middle innings. And so we’re not reasoning [in a] constrained [way] anymore. We’re sort of utility constrained, I think, in these agentic pieces of software that we’re trying to build…

Wait, utility constrained, there’s a lot of ways to unpack what that means. Maybe the harshest is “it doesn’t quite work yet.” Maybe the less harsh one is “there’s no product-market fit yet.” What do you think that means?

I think it kind of depends. If you look at software development, there’s clearly product-market fit. This stuff has become just an indispensable way that people are building software like these software development agents that we built. And there, I think it’s just sort of the early prospector on the frontier of what agentic software can do. Unsurprising, as developers usually build things to make their lives easier before they build things to make everybody else’s life nominally easier. So we are a little bit further ahead there, and some of the things that we’re learning about how we’re going to need to make general-purpose agents more useful for things other than software development we’ve discovered in building these software engineer agents. One of those things [we’ve learned] is agents need to be able to access sources of information; they need to be able to take action on behalf of users by making changes in state and systems.

Things like booking a hotel room or putting something on my calendar. And I think the way that you really want all of these things to happen is just sort of open protocols where you have real interoperability across the whole landscape of agents, where everybody who has a service or a piece of content that they want to make agent accessible has a way to say, “Hey, here I am.” And then agents, no matter who’s building them, have a way to connect to that content and those services. The way that we talked about it onstage at Build is the agentic web. So what must exist in this world where we have lots and lots of agents doing things on behalf of users that are the moral equivalent of the things that had to emerge when the web was under development 20, 30 years ago?

So there’s the agentic web. We’ve been talking a lot about that on the show. I talked to your colleague Mustafa Suleyman about the agentic web and building agents. I’ve been calling it the DoorDash problem. I don’t have a better name for it. I feel bad for the people at DoorDash because I haven’t asked for their permission, but I keep calling it the DoorDash problem where: Okay, I want to get a sandwich. So I talk to Bing or ChatGPT or whoever and I say, “Go get me a sandwich. Alexa, go get a sandwich.” And then it goes out onto the web.

Right now, most of the agentic products will literally open a website and try to scan the website and then click around on the website and then order me a sandwich. And most of those companies are like, “Don’t do that.” Their posture is, “We don’t want you to do that. We’re going to block you and maybe if you’re small enough, we’ll let you do it, but we need to have business terms that make it so that you can just take our capability and put it in your product in this way.”

That problem has to be solved. I’m curious about how you would solve that problem. It sounds like you’re operating at just one step of abstraction beyond that, which is: assuming we solve the business problem, how can we make my agent talking to DoorDash a much easier problem to solve? Because clicking around its website has never seemed like a good solution.

Yeah, it is brittle and look, I think actually solving the business-model problem goes hand in hand with solving the technology problem. So it’s not just about figuring out a technical way to do something, it’s about getting all of the incentives in the ecosystem aligned the right way where good things are happening for everyone. So if you have a business and you want your business to be able to transact with users via their agent, that has to make good business sense in order for you to be willing for that to happen at all. You can’t just hack your way around that and expect it to be a durable thing. Even if you can temporarily figure out some kind of technical magic to get around the brittleness of the actual technology, you also have to get rid of the brittleness in the business model.

That is the piece that on the web right now seems most under threat, the underlying business dynamics of I start a website, I put in a bunch of schema that allows search engines to read my website and surface my content across different distributions. I might add an RSS feed, which is a standardized distribution that everyone uses and agrees on. There’s lots of ways to do this.

But I make a website, I open myself up to distribution on different surfaces. What I will get in return for that is not necessarily money — almost in every case, not money. What I’ll get is visitors to my website, and then I’ll monetize them however I choose to: selling a subscription, display ads, whatever it is. That’s broken, right? As more and more of the answers appear directly, particularly in AI-based search products, traffic to websites has generally dropped. We see this over and over again. What’s going to replace that in the agentic era, where we’ve created new schema for agents to come and talk to my website and receive some answers? What’s going to make that worth it?

I think one of the things that we are trying to do right now with some of the things that we’re announcing and that we’re trying to do in an open way is you will have technical mechanisms for agents to be able to access people’s websites, but the protocols themselves will allow you to decide what it is you want to make available and how. And so if you just look at MCP, it’s a super awesome protocol that Anthropic developed that we’re doing a whole bunch of work with Anthropic to support, and I know a ton of people in the ecosystem. So OpenAI is working with them and a bunch of folks have latched on to MCP as the moral equivalent of HTTP for the agentic web. MCP doesn’t have an opinion one way or the other about what a content or a service provider ought to make available via MCP or what the business model ought to be for that access.

And so I think one of the nice things about that is it gives people who have content or services a way to decide what the new business models will look like. So is it that an MCP endpoint is usable inside of your agent if the user has a subscription to your website? Is there going to be some kind of new advertising model where you give away some stuff for free and you use that to drive a bunch of agent activity onto your website and maybe there’s some advertising that goes on that helps with the distribution and then there are transactions that are being made where you can sort of price by conversion. I don’t know exactly what the business model is going to be, but I do know that the thing that you really are going to want to have is agency on the part of the content and service provider so that they get to decide what they make available and what the business model is for the things that they’re doing.
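
To make that agency concrete, here is a toy, hedged sketch of a site-side policy check sitting in front of agent-facing tools. The token store, the tier names, and the tool names are all invented for the example; MCP itself deliberately says nothing about any of this, which is exactly the point being made above.

```python
from dataclasses import dataclass

# Invented subscriber tokens and tiers, purely for illustration. MCP does not
# prescribe business models; each site owner decides what agents may access.
SUBSCRIBER_TOKENS = {"tok_abc123": "pro", "tok_def456": "free"}


@dataclass
class AgentRequest:
    token: str | None  # credential presented by the calling agent, if any
    tool: str          # which agent-facing capability is being invoked


def authorize(request: AgentRequest) -> str:
    """Apply the site owner's own rules about what an agent may do."""
    tier = SUBSCRIBER_TOKENS.get(request.token or "", "anonymous")
    if request.tool == "search_teasers":
        return "allow"                   # free teaser content for everyone
    if request.tool == "full_articles" and tier == "pro":
        return "allow"                   # full content only for paying subscribers
    if request.tool == "place_order":
        return "allow_with_attribution"  # transactions welcome; track the referral
    return "deny"


if __name__ == "__main__":
    print(authorize(AgentRequest(token=None, tool="search_teasers")))         # allow
    print(authorize(AgentRequest(token="tok_abc123", tool="full_articles")))  # allow
    print(authorize(AgentRequest(token=None, tool="full_articles")))          # deny
```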

And MCP stands for “model context protocol”?

That is a nascent standard, I would say. You show up to my website or something, my service, and I tell you what you can do. It’s very much the evolution of robots.txt, from what I gather. It’s more complicated, it’s more sophisticated, but the idea is very much the same. You show up and I tell you what you can do and what you can’t do. Can Microsoft and OpenAI and Anthropic just horsepower this into existence, or is there some upside for everyone else to participate?

Well, I think if you’re a developer, there’s a ton of upside. This reminds me a lot of how things felt when I was a younger developer, when the internet was exploding into existence a few decades back, where what I wanted was a set of permissionless mechanisms where I could just go put something up on the web and then I could have other people access it in interesting ways.

And so, I do think that MCP and the thing that we are doing that maybe we’ll talk about in a minute on top of MCP — this interesting thing called [Natural Language Web], which is a set of open protocols and a bunch of code that let you, without having to seek permission from anyone, decide what you want to make available on this agentic web so that things that you’re doing are accessible by agents. And I think you’re right when you called it a nascent protocol. The thing that’s interesting is just how fast the ramp is right now, just how quickly everybody is snapping to this as a way to make your things agent accessible. And so what’s attractive I think if you are a content or service provider is that more and more of the user activity gets anchored in agents.

That’s going to be more and more of where user transactions and user attention gets funneled. And so you’re just going to want to be agent visible in what you’re doing so that you have access to an audience. And I think you really start to get into this mode where agents are doing things asynchronously for you. A lot of what happens right now in the current web model is everything happens synchronously. So you’re sort of sitting there like, “I’m staring at a browser right now. I may have a tool that I want to go buy on somebody’s Shopify storefront. My attention is focused on this particular task. I complete the transaction and then I move on to the next thing.” The interesting thing with agents is things are going to start happening asynchronously where you’re going to give an agent a task and it’s going to go do all of this stuff while your attention is elsewhere.

So that’s a super interesting thing where I think there’s going to be a bunch of opportunities that don’t exist right now because in the limit, I only have so much of my attention that I can spend on websites. If I had a bunch of agents off doing a bunch of research for me and helping me think about my summer vacation — or I’ve got this crazy project I’m doing in my shop, building a kiln for all this random pottery stuff I’m doing — if it can help me get a little bit further along so that when I’ve got my attention left to give, I can go take action immediately or maybe buy stuff or the attention [I do give] is just much higher quality. I think that’s interesting as hell for folks who are trying to do business on the web.

Let’s talk about the NL search project that you’re doing and how it connects to this larger vision. So I saw a brief demo of it. It was very cool, right? It’s a low-cost, very easy way to integrate natural language search into a website. One of the demos I saw was Tripadvisor. I was told that the Tripadvisor team looked at it on a Wednesday and was demoing it the next Tuesday to their company leadership. That’s cool, low-cost, it runs with all the models. You can run it with DeepSeek, you can run it with OpenAI’s 4o mini. That’s what gives you the MCP capability, right?

So you’ve run this tool with your website, you’ve exposed a benefit to your users. Here’s some natural-language search as expressed in a chatbot or custom interface if you want to build one. But now you’ve added this MCP schema to your website that lets a Microsoft Copilot agent show up and interact with your website in some structured way that you can control. All of that is very cool. And I understand how the incentives line up. Isn’t that just a bunch of APIs by a different name? There’s just a part of me that finds that very reductive and very small.

Yeah, I think that’s actually not a bad thing. It’s actually a super good thing that it’s really a simple set of protocols that are enabling a bunch of super rich behavior that, again, going back to the fundamental premise, you want agents to be very, very useful and live up to their name. So an agent ought to be a piece of software to which you can delegate increasingly complicated tasks over time. In order for those tasks to become increasingly complicated, the agents have to be able to do work. And the best way for them to be able to do work is to have ubiquity of content and services available; you need everyone’s incentives, the hurdles to adoption, the business model, and just the economics of everything to make sense, so that you get fairly broad adoption. So simplicity is a definite feature.
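
That simplicity is easier to see in code. Below is a minimal sketch of the pattern being described, not the actual NLWeb implementation: a single answer function over the site’s own data, surfaced once as the on-site chat box and once as an agent-callable tool. The model call and the retrieval step are stubbed out so the sketch stays self-contained.

```python
from typing import Callable


def llm_answer(question: str, passages: list[str]) -> str:
    """Stand-in for whichever model the site owner chooses to run (stubbed here)."""
    return f"(model summary of {len(passages)} passages for: {question!r})"


def retrieve(question: str) -> list[str]:
    """Stand-in for the site's existing search index or catalog lookup."""
    return ["passage about " + word for word in question.split()[:3]]


def answer_question(question: str) -> str:
    """One function behind both surfaces: a grounded answer over the site's own data."""
    return llm_answer(question, retrieve(question))


# Surface 1: the on-site chat box calls the function directly.
def chat_widget_handler(user_text: str) -> str:
    return answer_question(user_text)


# Surface 2: the same function is listed as an agent-callable tool. Registration
# details are hand-waved; a real MCP server would publish the tool's name,
# input schema, and transport.
AGENT_TOOLS: dict[str, Callable[[str], str]] = {"ask_site": answer_question}


if __name__ == "__main__":
    query = "quiet hotels near the old town with parking"
    print(chat_widget_handler(query))
    print(AGENT_TOOLS["ask_site"](query))
```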

Let me ask you about the elephant in the room when it comes to the web and building new capabilities on the web. It’s obviously Google. Right now the web is organized around Google’s priorities, its needs, its traffic whims. There’s an entire class of SEO consultants who wave sparklers at the sky to collect traffic for you. It’s great, we love it.

That’s obviously changing, right? As Google keeps more of the traffic for itself or it thinks differently about training data, all this stuff is changing. The trade here is make your website more agentic, and then MCP as a protocol will allow you to build some new business models on it. The problem, as I see it, is that the traffic to the web is in precipitous decline as Google referrals go into decline. How do you fix that problem so that everyone is incentivized to keep building on the web?

I don’t know, honestly. One of the ways that I could answer is, just so that I can see for myself, I’ve set up a Shopify storefront. So I have a tiny little business that I operate on the side just so I can feel what it’s like to be a web-based business owner.

It’s very interesting. The dance that you have to do to get traffic driven to your business and the amount of energy you have to spend tending that traffic stream. Funny enough, most of my referrals that are coming into my storefront are not from Google. Most of [them come] in through social media and social media advertising, which is really different from what I was expecting. So I don’t know that I’ve gotten any interesting referrals and certainly no conversions coming in through organic search.

It’s good to know that you don’t have a thumb on the scale. That’s really what I was asking.

Yeah, yeah, yeah. [Laughs] I certainly don’t. In fact, I think most people at Microsoft are hearing for the first time that Kevin Scott has a Shopify storefront. So yeah, definitely do not have any kind of thumb on the scale.

I think the thing that I would love as a website operator is, I would love to have a way not to spend so much of my time worried about traffic referrals, period. I would love to spend more of my time building up an authentic relationship with people who might be interested in my products and services and have a way to curate that relationship. I would love for there to be a thing on the web where people could buy my products and learn more about the services that I have to offer, and then have a way where they become loyal customers that I actually have a relationship with, the same way that I would have if I had a physical storefront and I had foot traffic coming into my store. And so I see all of this stuff, for me at least, as a way to have a little bit more of that dynamic than what you have when you’re playing some very abstract game trying to do SEO.

Social media at least feels to me a little bit more like what I just described, where I’ve done a very deliberate job trying to curate a social media audience to just have followers who might be interested in the things that I am doing as a maker, for instance. So, I don’t … that doesn’t answer your question at all, I know. And I know that a bunch of people have very different business objectives being on the web than just wanting to be a little small storefront.

You know what’s interesting about that? I’ve asked a lot of people over the past several years, “Why would anybody start a website?” And the frame for me is when we started The Verge, the only thing we were ever going to start was a website. We were a bunch of people who wanted to talk about technology, so in 2011 we were going to start a website. We weren’t even going to start a YouTube channel. That came later after people started doing YouTube channels at scale. At the time that we started, it was “you’re going to start a big website.”

Now in 2025, I think, Okay, if I had 11 friends who wanted to start a technology product with me, we would start a TikTok. There’s no chance we would be like, we have to set up a giant website and have all these dependencies. We would start a YouTube channel, and I’ve asked people, “Why would anyone start a website now?” And the answer almost universally is to do e-commerce. It’s to do transactions outside of platform rules or platform taxes. It’s to send people somewhere else to validate that you are a commercial entity of some kind, and then do a transaction, and that is the point of the web.

The other point of the web, as far as I can tell, is that it has become the dominant application platform on desktop. And whether that’s expressed through Electron or whether it’s expressed through the actual web itself in a browser, it’s the application layer. And so I get why you’d want to say, “Okay, we’re going to do agents. They’re going to traverse the open application layer that exists and use those tools.” I still am lost at like, well, if I just want to communicate to people, I’m going to go to some closed platform, and then we just kind of enter a place where even the AI tools just have less information to work with, because everybody talking about what to buy might be on TikTok, and then all the stuff to buy is on the web. And that’s the loop that I can’t quite close.

I think this is one of the things that could potentially happen with things like MCP and NLWeb. If the way that people want to do research or they want to transact business is via their agents, and that’s where the intent lives and where the user desire originates, then you’re going to want to have some mechanism where you can connect to that.

So let’s say you and your 11 friends in 2025 are going to start a TikTok channel to talk about tech. If one of the things that you’re doing there is doing a bunch of reviews about tech products or tech sites and you want to reach an audience and the audience is sitting inside of Copilot or ChatGPT or something like that, you’re going to want those agentic pieces of software to have some way where they can reach into your media channel so that you’re exposing that audience to what you’re putting out there.

And NLWeb might be a good way to do that, where maybe you’re not offering up everything, but you’re offering … kind of like what’s happened with search, like teasers, like snippets, things where it is like, Okay, you ask your agent, “Hey, I’m trying to buy a new phone. Here’s kind of what I think I want. Can you go find me some sources of information about this?” And if you have from your TikTok channel a way to let your agent know what your content is, maybe that’s the referral traffic back into TikTok from the agents like, “Hey, go watch this video. This is super interesting.”

I’m really curious to see if the big platforms enable themselves to be searched or acted on by agents in the way that they somewhat had to allow themselves to be searched by the big search engines, right?

There wasn’t a choice. I think maybe the biggest platform that recused itself from search for a minute was Facebook, but Instagram is still searchable. Right? There was a trade where you wanted to be exposed and to be found on these tools, so everyone sort of opened up. And the dynamics of how you will open up to agents, I think, for a variety of reasons, many of which make sense, are just not clear. Why would we do this when we could build our own agent? We’re still in those early stages.

I don’t know the answer to that question.

To put it in search terms, are you going to do vertical search or big horizontal search? Horizontal search sort of totally won.

It’s hard to say what exactly might happen here. I think it will largely be decided by users. One of the things that’s going to happen is users will just decide what it is they’re going to tolerate. So if using an agent to help you sort out your life and what you’re doing becomes such a huge preference that people have, things that aren’t connected to the agent will kind of turn invisible for folks. You’ll just sort of think, “Oh, well, X isn’t reachable by my agent, maybe X is kind of broken and I’ll find another way to do that.” And I think the thing that you want as the market kind of figures out what it wants is you want as many open protocols as humanly possible so that people can make those late-binding decisions about whether this is what the users have chosen. They’re expressing their preferences. At least have this thing be open so that I can opt in to it when the preference is clear.

I’m dying to know how this plays out. I can see a bunch of sites like Tripadvisor and others that would really want this kind of distribution. Obviously, we need to build the front-end tools, the aggregators that say, “Okay, here’s your agent that’s going to go out.” Having looked at the agents that have been demoed so far, or announced and then not shipped, or announced and shipped to five people, do you think that this is a necessity? Do you think that initiating MCP across the web like this is a necessary condition for agentic systems to operate? Because none of them work so far.

I think something like this is really kind of necessary. I mean, I remember back when I was working in the early days of mobile on advertising, and the reason that I worked on mobile advertising is I wanted to figure out a way to help people who were building mobile apps and services to figure out distribution and to figure out how they monetize themselves. Before things like AdMob came into existence, the only way that you could get distribution was you went and cut a BD deal with a mobile phone company, and it would decide whether or not it was going to give you placement — at the time it was WML [wireless markup language], and it was placement on its deck. And it was kind of a barbaric arrangement. It made perfect sense back then, but if you look at what happened with the technology and fast-forward through time, it’s like, yeah, why would anyone choose that?

And so I think there’s a little bit of that dynamic right now, where people are absolutely finding utility in these agents, even as constrained as they are right now. And so in places like software development, where you need to have a much narrower range of things that the agents can actuate, where you’ve built some completeness there, like, Oh my god, the adoption is great and people love what these things are doing, and there’s a ton of competition, and it’s just really transforming how software development is working.

So what I think we’re going to see — and this is me, Kevin Scott, the optimist — is if you had a real complete agentic web where MCP was sort of the lingua franca of this agentic web, kind of like HTTP was, like everybody can just sort of stand up an HTTP server and start serving HTML and they get to decide what the HTML payload is. You’re going to see this really interesting organic unfolding of what’s possible, and it’ll probably just sort of, I don’t know what the moral equivalent of the Amazon.com or the early winners in the web are, where, when you get enough of that plumbing hooked up, things are going to be super useful. But I think that some of this protocol stuff has to happen before you get to full utility. It’s why MCP is interesting. We think about NLWeb as kind of like the HTML layer, so it is a thing that lets you not have to go do a really tremendous amount of low-level work in order to be able to plumb your stuff up to the agentic web.

The parallel that comes to mind here is Apple’s attempt to build an agentic Siri, which is built on a framework in Apple’s operating system called App Intents, which allows iOS apps to expose themselves to Siri in some way and lets Siri take actions inside those apps. There are rough parallels here — obviously MCP is a more open standard; it’s more nascent. App Intents has run into the same business-model problems: if you’re an app developer on iOS, why would you let Siri use the app instead of the user, whom you can upsell or sell an in-app subscription? That’s one parallel.

The other parallel is Alexa Plus, which I joked earlier has launched to some people, but no one knows who they are. Google has some agentic ideas. There’s computer use from Anthropic, OpenAI. None of it’s worked yet. Have you seen anything that says this is definitely going to work?

I get the… No. So yeah, to answer your question very specifically, outside of software engineering and outside of demos, I haven’t concretely seen the thing that works, and I’ll just sort of ground it even more. So if I look at my everyday life, and I’m sort of looking at how I use these things, outside of software development, there’s just not an awful lot that I’m choosing, that Kevin Scott is choosing, to go delegate a bunch of stuff to this agent to do on my behalf. But I can kind of smell it with MCP, and I do think it actually has to be open. I think it’s kind of hard to do this in a vertically integrated way.

The other question I want to ask, I’m really putting the media training to use here. I’m going to ask you about Google. Google had an opportunity to achieve some of the success it did because Microsoft was in the throes of antitrust pressure, right? Microsoft bundling Internet Explorer, putting pressure on Netscape — that led to whatever amount of legal trouble. Google was able to swoop in, it was able to put Chrome on Windows, and it created the application layer. Everybody has this story. The antitrust pressure on Microsoft really created the opportunities for Google to succeed.

Here we are. It’s many years later, decades later. There’s a lot of antitrust pressure on Google, particularly in regard to how it controls the web, both in the advertising layer and in the search layer. There’s the suggestion that the government will make Google divest Chrome. This is a lot of antitrust distraction, and here I am talking to a Microsoft executive about new ideas for the web and new standards for the web. Do you see the opportunity as the same, that there’s space because Google is being distracted?

I think the opportunity is the moment that we’re in — the technology itself is just ready for some of this stuff to happen. This demo that Guha gave you couldn’t have been done two or three years ago because the technology wasn’t ready. It would’ve been just impossibly difficult to do, and there was no way that Tripadvisor could see a thing on a Tuesday and be demoing it on its own data on a Wednesday. That is entirely a function of technology maturity. So I don’t know what’s going to happen with anything that the government is doing with any of the other tech companies.

But I think part of what’s happening right now is you just have a new set of technologies that are capable of a new set of things, and you’ve got a bunch of big-tech companies and small entrepreneurs who see the possibility in the thing. And the thing that I want is to see as much energy in this ecosystem becoming more mature, as much as is humanly possible. And again, my pattern matching goes back to the last time that I was a happy young developer, which was when the internet was emerging — just that sensation that you can have when a bunch of hard things became easy and a bunch of the protocols opened, and you don’t have to ask anyone’s permission to go try something wild. That’s when interesting things happen.

Let me just put a little bit of pressure on this then. I want to broaden out and just talk about AI generally. If you had showed up and said, “Okay, here’s a new standard for accessing the web and structuring websites from Microsoft” two years ago, everyone would’ve said, “Great, we’re going to wait for Google’s riff on this,” or “for Google to adopt the standard.” Google’s under a lot of pressure. A lot of trust has been erased from Google. You now have the opportunity for OpenAI and Anthropic and Microsoft to show up with a new standard and to believe that there might be real adoption here, and that Google can’t show up with its own standard tomorrow and take the wind out of your sails. That has to be true for you, right? You feel that.

Because the comparison I would make is, I don’t know, in the late ’90s or early 2000s someone would announce a new standard, and Microsoft would show up with a proprietary Windows riff on that standard and the other thing would go away, and that was part of the problem. Do you see that reflection right now?

I don’t know. Sometimes I am trying to be evasive. I’m not trying to be evasive here. Sometimes it feels to me, as an engineer, that certain things are technically inevitable. I’ve had a bunch of conversations with folks about MCP inside Microsoft where it’s like, Ah, this isn’t exactly what we would’ve chosen. And I’m like, yeah, but it kind of doesn’t matter. Sometimes there is a real problem in an ecosystem where the simplest solution that everybody can choose to adopt is the winner because we all win because ubiquity is the thing that really matters, and it feels like we’ve got a bunch of those opportunities right now.

And so the thing that I think that’s really beneficial is some of them have become such simple things where you actually don’t need a multitrillion dollar company to go do an enormous amount of work to create the conditions for adoption to happen quickly. In some sense, with MCP and NLWeb, you actually don’t need a big-tech company pushing for it. We’re just a voice out here saying, “Hey, here’s this interesting thing. It’s open. Go do with it what you will,” and that’s all I can do. I don’t have the ability in terms of open protocols to tell anybody to do anything. We will shine a light on it and hope good things will happen.

Let’s talk about the AI industry broadly. You described it as middle innings. I’m describing it as act two. This technology exists. Everyone has used it. We’ve all played with a chatbot. Some reporters have had the chatbot ask them to leave their wives. I will never stop making this joke. Just broadly, what did you get right in your initial bets and what did you get wrong? What surprised you?

I think we accurately spotted the scaling-law trends with the reasoning power of foundation models. I think we’ve been more right than wrong in our conviction that those trends will continue to play out. I think we still have a tremendous amount of progress to make in increasing the reasoning power of models, and I don’t want to make light of how difficult it will actually be to continue the scaling. But they seem like a set of reasonably solvable problems if you’ve got the right resources and focus.

I think the thing that’s hard right now is I feel as if we’ve got this capability overhang with models where the models are actually capable of a lot more than what they’re being used for. And so, even inside of Microsoft, I maybe overestimated how quickly people were going to just lean all the way into the platform capabilities of the basic AI models. So I think we’re a little bit behind right now on product. And by we, I don’t mean Microsoft. I mean everybody, aside from this rapid progress we’re seeing in software development tools. So I think there are things in healthcare that could be a lot better than they are right now. A bunch of things are inhibited by some basic plumbing stuff, which is the topic of this particular conversation. But a lot of it is we just need a lot more companies to get created and a lot more products to get made just to use what’s already possible with these models.

And a little bit of it too is, I have this conversation over and over and over again. I was at a gathering with developers late last week and there’s this interesting conservatism that is especially unhelpful with exponentially improving platforms. And it wouldn’t even look like conservatism if you didn’t have the exponentially improving platform. It’s like somebody will look at a thing and say, “Ah, this is a little bit too expensive for me to use for this particular thing” or “for a particular problem I’m trying to solve,” or it’s marginally useful right now so it gets things right about 30 percent of the time, but that’s only marginally useful. And then it’s, “Okay, I’m just going to pause and wait.”

And that may be the right thing to do, except the wait part of the pause is too long in many cases right now. The next time people go in and sample to see if it’s gotten cheaper or more capable, it’s already raced past where it needed to be and then you’re just too late in trying to get your thing to market. And so I think that’s a thing that I’m seeing over and over again where we collectively are making mistakes, where our pattern matching isn’t as good as it could be.

Right. You’re saying that you should envision the products, even if they’re not a hundred percent great yet.

It’s interesting you mentioned that because you were one of the architects of the OpenAI relationship at Microsoft. A couple of years ago you were on the show. I asked you about that relationship and where it came from, and you described that relationship in terms of platforms. That Microsoft was a platform company, obviously Azure is a massive platform. And you said, “OpenAI is aligned with you on the platform vision, and we wanted to structure a partnership together so we could go build the platform together.”

Things have changed in two years, I would say. The companies have pulled apart a little bit, maybe a lot. I watched the Senate hearing on AI the other day and I noticed Brad Smith from Microsoft and Sam Altman from OpenAI were at opposite ends of the table. OpenAI has become much more of a consumer company, right? It’s obviously trying to go make big consumer products, not platform products. Anthropic is much more of a platform company, I would say, than OpenAI is today. How do you see that relationship now? Has it fully decoupled? Are you still working together? Are you still trying to build a platform?

I still spend a huge amount of my time on OpenAI things, and there’s a huge amount of technical stuff. Just me as an engineer, we’re building big computing systems together. OpenAI is a gigantic Azure customer. Its workload is really a nontrivial part of our platform, particularly in terms of AI compute. So we are working with OpenAI all the time trying to make sure that we are building things that it needs, and there’s still just a ton of work that we’re doing together across the board, everything from how do we optimize the infrastructure that we’re building to how do we take these models that we’re training and get them optimized so that they can actually become platform components. We still operate a joint deployment safety board where we work to assure that the things that we’re releasing to the public have gone through a rigorous, responsible AI review before they launch. So yeah, just a ton of work that we’re doing together.

If you listen carefully, there’s a qualitative difference from what you said before, right? “OpenAI is our huge Azure customer with a big workload.” Of course, everybody works closely with their biggest customers. Previously it was “we are interdependent and it is their models that are powering every Copilot across the company.” It very much sounds as if Microsoft has moved OpenAI from the category of independent technology partner to a big customer we work with closely.

It is very different from any other big customer that we have. The models that it is training on Azure supercomputers are still very, very important to the things that Microsoft is building. The components that it’s building are important parts of the Azure platform. So it is both a customer and platform-building partner. And look, it has a bunch of things that it’s off trying to do on its own that are independent from us, like ChatGPT for instance. And that’s awesome, because its success with ChatGPT is helping put a bunch of super good pressure on the Azure platform.

And that is another consistent thing. I don’t know whether I talked about this the last time we talked about the OpenAI partnership. But one of my core theses when we were doing the first deal, when was it, five, six years ago now, was that we needed the best AI workloads in the world to be running on Azure so we could ensure that Azure was building itself in a world-class way for those AI workloads of the future. And so the more successful ChatGPT is, the better Azure gets.

Speaking of those AI workloads, my colleague Tom Warren has reported that Elon Musk and xAI are preparing to host Grok on Azure. He tells me to ask you if there’s some angst within Microsoft about working with Elon, and whether you can trust that company, especially with these other dependencies. Do you feel that angst?

I’m actually not super plugged into that conversation. I know we’re doing it. The thing that we’re trying to do with the model marketplace on Azure is to make sure that all of the good open-source models that developers want to use are available and easy for them to use. So everything that we can offer there, we do offer.

Do you still control the GPU budget at Microsoft?

This is a thing you said to me several years ago, and I’ve never stopped thinking about it. You don’t anymore?

What happened there? Was it just like this is too much? Because you described it as a horrible job, too.

Oh, it was a horrible job. Yeah, really, really.

Has the pressure on needing GPUs lessened or increased?

No. We still need lots and lots of GPUs.

Because there is also reporting from Reuters and others that Microsoft has slowed down some of its data-center investment or reallocated it as the models have gotten cheaper to run, as things like DeepSeek have shown up.

No. We still are urgently deploying capacity. And the thing that I will say is if you are sitting inside of Microsoft and you are talking to all of your teams that are building AI products or doing AI research, I have seen no lessening because of any technology trend in the desire for more GPUs.

Do you think we can do AGI in the current hardware? This is a thing that is floating around that I keep hearing from people in this industry.

I don’t even know what AGI is, which is a thing I’ve been a little bit confused by since I wrote my book many years ago. I think first you’d have to define what exactly it is you think that means. I think if you look at the current generation of hardware that’s rolling out, we’re getting a big performance win from this next generation that’s deploying right now. And so if you are thinking about what’s going to happen over the next 12 months, it’s going to be a pretty substantial leap forward in performance just across the board of everyone’s systems because this current hardware generation and optimizations you can do on top of it are extraordinary.

Do you think that we’re going to get more capacity because of the optimizations or because the hardware is more powerful?

Well, it is absolutely true that most of the performance wins are coming from optimization, so you get on the order of 2X improvement in price performance for every hardware generation, which is extraordinary by the way. You never got that every 18 months with Moore’s Law. It was a little bit slower than that, and so the hardware advancements here are just breathtaking in terms of how good they are, but the software performance optimizations on top of the hardware are much bigger than that even, so we are just very reliably getting an order of magnitude every year or so when you combine those two things.
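
A quick back-of-the-envelope reading of that claim, treating every number as a rough assumption taken from the conversation rather than a measurement: if hardware alone delivers roughly 2x better price performance per generation, and the combined total is roughly 10x a year, then software optimizations have to be contributing a multiple several times larger than the annualized hardware gain.

```python
# Rough numbers taken from the conversation's own framing, not from benchmarks.
hardware_gain_per_gen = 2.0    # ~2x price performance per hardware generation
months_per_generation = 18     # assumed cadence; treat this as a free parameter
combined_gain_per_year = 10.0  # "order of magnitude every year or so"

# Annualize the hardware contribution: 2x per 18 months -> 2^(12/18) per year.
hardware_gain_per_year = hardware_gain_per_gen ** (12 / months_per_generation)

# Whatever remains of the ~10x has to come from software (kernels, data types,
# caching, routing, and so on).
software_gain_per_year = combined_gain_per_year / hardware_gain_per_year

print(f"hardware per year: ~{hardware_gain_per_year:.1f}x")  # ~1.6x
print(f"software per year: ~{software_gain_per_year:.1f}x")  # ~6.3x
```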

How would you characterize those optimizations? Because a lot of the early wins in model capability came from just ingesting more data, right? We just made the models bigger and that is how they got smarter.

It’s a bunch of things. There are things that you do depending on how you’re training the models. There’s a whole bunch of things. A lot of the wins have come from being able to effectively use smaller data types to store model activations both on the inference and the training side, which means that you can do a lot more arithmetic in parallel because you have smaller numbers that you are using in your arithmetic operations. I mean, it really is kind of crazy to see the breadth of the optimizations, from what you’re doing on the training side to just people fundamentally rewriting the numeric kernels for the inference stack.
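
To ground the smaller-data-types point: shrinking weights and activations from 32- or 16-bit floats to 8-bit integers cuts memory traffic and lets the same silicon do more arithmetic per cycle. The sketch below is a deliberately naive per-tensor int8 quantizer, far simpler than what real inference stacks do (per-channel scales, outlier handling, fused kernels), and is only meant to show the mechanics.

```python
# Naive symmetric int8 quantization of a tensor, here just a flat list of floats.
# Real stacks quantize per channel, handle outliers, and fuse kernels; this only
# shows why smaller data types mean less memory and cheaper arithmetic.

def quantize_int8(values: list[float]) -> tuple[list[int], float]:
    scale = max(abs(v) for v in values) / 127.0 or 1.0
    quantized = [max(-128, min(127, round(v / scale))) for v in values]
    return quantized, scale


def dequantize_int8(quantized: list[int], scale: float) -> list[float]:
    return [q * scale for q in quantized]


if __name__ == "__main__":
    activations = [0.03, -1.2, 0.77, 2.5, -0.004]
    quantized, scale = quantize_int8(activations)
    restored = dequantize_int8(quantized, scale)

    print("int8 values:", quantized)
    print("round trip:", [round(v, 3) for v in restored])
    # fp32 stores 4 bytes per value, int8 stores 1: a 4x cut in memory traffic.
    print("bytes fp32 vs int8:", 4 * len(activations), "->", len(activations))
```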

And then there’s a bunch of things that you can do using your standard bag of computer science techniques around prompt optimization and caching and using more than one model to service prompts. You don’t always need to send every prompt to the most expensive model. We now have a big enough portfolio of models where you can choose to handle certain things with models that are hyper-performance optimized but less general and send the more complicated things to the bigger, more expensive models. So that’s just almost the same thing as cache optimization.
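
The routing-plus-caching idea is also straightforward to sketch. The two models below are stubs and the difficulty heuristic is invented; the point is only that repeated prompts should never reach a model at all, and easy prompts should not reach the expensive one.

```python
from functools import lru_cache


# Stubbed model calls so the sketch stays self-contained; in practice these
# would be API calls to a small, fast model and a large, expensive one.
def cheap_model(prompt: str) -> str:
    return f"[small model] quick answer to: {prompt[:40]}"


def expensive_model(prompt: str) -> str:
    return f"[large model] considered answer to: {prompt[:40]}"


def looks_hard(prompt: str) -> bool:
    # Invented heuristic for illustration; real routers use classifiers or
    # confidence signals from the small model itself.
    return len(prompt.split()) > 30 or "step by step" in prompt.lower()


@lru_cache(maxsize=1024)  # repeated prompts are served from cache, not a model
def answer(prompt: str) -> str:
    model = expensive_model if looks_hard(prompt) else cheap_model
    return model(prompt)


if __name__ == "__main__":
    print(answer("What is the capital of France?"))                 # small model
    print(answer("Walk me through, step by step, how DNS works."))  # large model
    print(answer("What is the capital of France?"))                 # cache hit
```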

It’s funny. When I talk to the other agentic AI CEOs, they describe that kind of orchestration as the key, and you’re talking about MCP as the key, and I’m curious which one has to come first. We got pretty far in orchestration.

I think again, going back to this capability overhang, I think we have more reasoning power in these models right now than we’re effectively using, and so my hypothesis is that one of the things that is preventing us from having more useful things is just action taking; the entirety of the action space is too constrained right now. And I’m not saying it’s an either-or. I just think it’s going to be super hard work to get that action space opened up, and so we just need to get on it like it’s an ecosystem right now.

I want to end by talking to you about my favorite thing to talk to you about, which is the relationship between technology and art. You did write a book. It’s called Reprogramming the American Dream. The first time we ever talked, it was about that book. Somewhat notably, the foreword was written by JD Vance, who’s now the vice president of the United States. I don’t think you saw that one coming at that time.

You did see a lot of stuff coming about how AI would reshape the economy or at least threaten to reshape the economy. As we talk about the models getting more capable and the ways they’re getting more capable, the very notion that we could make the models more capable by just ingesting more data has hit a limit, right?

We’ve ingested all the data, and now there’s a lot of lawsuits about whether that ingestion of data was legal, whether it should be compensated. You’re an author. I’m going to put these questions to you as simply as I can. If I stood outside a bookstore and stopped everybody who came looking for your book and said, “I can just make you a podcast about that book. You just send me a note, just text me, and I’ll send you a full podcast summarizing that book,” do you think that would increase or decrease sales of your book?

Well, I will say something that’s about Kevin that I don’t know whether you can or should generalize to authors in general. I’d be fine with people doing whatever they want to with the content of my book.

Even if you had to make your living based on sales of the book?

Yeah, which is why it’s super, super different. I think authors, if you spend all of your time and dump your heart and soul into making a thing, you deserve to be compensated for it. Now, I think there’s a lot of different ways to be compensated for a thing, and I think I’m not even following what’s going on with a bunch of this litigation closely enough that even if I could comment on it, it would be useful commentary.

But I think going back to the conversation that we started with, I think one of the really nice things potentially about having open protocols for this agentic web is that people who are making things get more agency, particularly here at the beginning, where the whole landscape hasn’t really sorted out what the business model is. I think people can sort of jump in and have more agency around the business model. And I think it’s just super important for people to think really carefully about that right now.

Did you make a Studio Ghibli meme when ChatGPT released that tool?

You didn’t? Okay. I’m curious. A lot of people did. I’m not saying I didn’t. No comment from me on that. My point is, as somebody who makes creative work here, a maker yourself, there are lots of people whose livelihood depends on economic exchange for their creative work, and their broad criticism of the AI industry is you’ve created all of this capability, maybe more capability than we’re even using as you’re saying, but we have received nothing in return. And we’re in it now, right? Act two, middle innings, and it doesn’t feel as if that has changed but for some litigation. I’m just wondering if your thinking has evolved or matured.

I think the way that I was thinking about this in the beginning is the way that I’m thinking about it right now. So I certainly wouldn’t want to see anything cause Hayao Miyazaki to make less beautiful things. I’m maybe one of the biggest fans in the world of what he and Studio Ghibli have done over the years. I think it’s just some of the most beautiful art that was created in the 20th and early 21st century, and so yeah, people like that. I want them to have every incentive in the world to do more of what they’re doing.

The thing though too with the platform is I will just come out and say it. I’m not really super interested in these image-generator things. What I’m interested in is a model that can do medical diagnostics for my mom who lives in rural central Virginia and doesn’t have access to really super high-quality healthcare, and there are just tens of millions of people like her in the United States who are in the same predicament.

It doesn’t get better over time, because of the demographics of the United States. Studio Ghibli content has nothing to do with whether or not AI is good at medical diagnosis, and so that’s the thing I want to make sure of in this debate: that we can have the part of the debate about this creative economy, which I don’t want disrupted at all, because I’m a fan of and appreciate those folks. I was having this conversation with Reid Hoffman and JJ Abrams a few months ago where, if anything, I would love for it to be the case that AI made it easier for people like JJ Abrams to do more of what they do. What I think is, as you have more and more stuff floating out there, that’s what people will want.

They’ll want more of the, “Hey, I’m a fan of JJ Abrams and his voice and his work. Give me more of that, not this random crap that some teenager has gotten out of an image-generation model.” But that’s a very important debate to have. I don’t want it to overshadow this other thing, which is that these tools can be enormously useful for solving some very important problems. We don’t want the conversation we’re having over here, which is important to have, to impede our ability to push forward with this other stuff, which is also super important.

It’s interesting, because you are describing more or less the framework that the Copyright Office released before Trump fired the Librarian of Congress and the Register of Copyrights, which appears to have backfired. And now there’s an even more copyright-maximalist person in that position, because that report said some of these uses of training data are obviously fair use, like scholarship and research.

I’ll just read you the quote from the Copyright Office’s preliminary report that came out last week: “Making commercial use of vast troves of copyrighted works to produce expressive content that competes with them in existing markets, especially where this is accomplished through illegal access” (that is, taking the stuff without permission) “goes beyond established fair use boundaries.”

So there’s a distinction. There are some domains, like medical imaging, where the utility of this is so high and the work is so transformative that it may be fine. And then there’s some of this stuff where you’ve just copied every YouTube video in the world and you’re just letting people make more YouTube videos, and that probably is not. Can you envision a framework where that would apply to the tools Microsoft is building, where you would say, this is stuff we’re going to do and this is stuff we’re not going to do?

I think we’re open to having any kind of sensible conversation. You just have to show up, and there are some technical limitations and constraints on what’s possible and what’s not possible with the fundamental technology, but I think there’s a super rich dialogue to be had here. There’s also this interesting thing where, as you accurately identified, we’ve kind of exhausted all of the data that is available to train models, and so we’re now in a regime where a bunch of these systems are being trained with a set of techniques that don’t depend as much on data as they once did. So there may be all sorts of technical ways to enhance the reasoning capabilities of models that aren’t quite as dependent as they might’ve been at some point on ingesting a bunch of organic tokens of data.

It’s also true, and I think we talked about this last time, that there’s an increasingly good understanding of the quality of data, of how much a token of data contributes to the reasoning power of the model. And then the biggest thing, my bugaboo in general with all of this stuff, is thinking of models as databases, as information-retrieval systems. If you want to talk about a suboptimized system, they’re kind of terrible as databases just from an efficiency perspective. So again, you go back to things like NLWeb: here’s a model, and it has learned how to reason the same way that you might’ve taught a biological brain how to reason.

And then once you have a certain level of reasoning capability, the interesting thing is, prompt by prompt, task by task, what information do you have access to reason over? How you monetize those two things, and what the share of the business is between those two things, can be very, very different. For instance, if you need a model that’s reasoning over breaking news, and you have something like NLWeb and a subscription to a bunch of news outlets, you can provide the agent access to those subscriptions, if the publisher wants to allow that, using the user’s authorization tokens, and then let the model reason over that information. You’re sort of paying a subscription fee to have this ephemeral content to reason over. So I think there are all sorts of ways we may be able to sort out the business model stuff over time.
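To make that flow concrete, here is a minimal sketch of the pattern being described: an agent uses the reader's own subscription token to fetch an article from a publisher endpoint, then hands the text to a model to reason over. The endpoint URL, token, response field, and the reason_over helper are all hypothetical stand-ins for illustration; they are not part of NLWeb or MCP.

```python
# Minimal sketch (hypothetical): an agent fetches content the user is already
# entitled to via their subscription, then passes it to a model to reason over.
import requests

PUBLISHER_ENDPOINT = "https://news.example.com/api/articles/latest"  # hypothetical endpoint
USER_AUTH_TOKEN = "user-supplied-subscription-token"                 # supplied by the user, not the agent


def fetch_subscribed_article() -> str:
    """Pull an article using the user's own authorization token."""
    response = requests.get(
        PUBLISHER_ENDPOINT,
        headers={"Authorization": f"Bearer {USER_AUTH_TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["body"]  # assumes the publisher returns a 'body' field


def reason_over(article_text: str, question: str) -> str:
    """Placeholder for a model call; swap in whatever inference API you actually use."""
    prompt = f"Using only the article below, answer: {question}\n\n{article_text}"
    return prompt  # returned as-is so the sketch stays self-contained


if __name__ == "__main__":
    article = fetch_subscribed_article()
    print(reason_over(article, "What is the latest development in this story?"))
```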

I want to bring this all together. It sounds like with the new search project, NLWeb, and with the investment in MCP and wanting it to be more widespread, it just feels as if you’re trying to create a wholesale architecture shift for the web. This is the new kind of web, and you’re trying to incept and incentivize its creation because the deal of the old web appears to be up. Is that a fair characterization?

I don’t know whether the deal of the old web is up, but it’s kind of time for us to be thinking about some new deal, I think. And I think as we’re all collectively thinking about the new, we should do what every good architect does when they’re thinking about the new. It’s like, what has worked and what hasn’t worked for all of the constituents and stakeholders over recent years, and let’s try to go make something better that works for everyone. And we’ll get to the best outcome when everybody’s incentives are aligned, where the creators and the consumers have their interests balanced and there aren’t a bunch of weird intermediaries constraining how utility and value gets exchanged.

Well, I wish you luck because so far the creators have a very clear point of view on where their incentives are. Kevin, I could talk to you forever, obviously. You’ve got to come back soon. I want to keep an eye on this web project and see how it’s going over time.

Thank you so much for having me.

Questions or comments about this episode? Hit us up at decoder@theverge.com. We really do read every email!

Decoder with Nilay Patel

A podcast from The Verge about big ideas and other problems.



