OpenAI co-founder Andrej Karpathy has said that Veo 3 represents a shift in how video is generated, consumed, and optimised.
Veo 3, Google’s latest video generation model, can also generate audio, including background sounds such as traffic and nature as well as character dialogue, a capability not currently available in OpenAI’s Sora, Meta’s Movie Gen, Runway ML’s Gen-4, Pika Labs, or Stability AI’s Stable Video 4D 2.0.
Sharing his thoughts on X, Karpathy noted that the quality of content improves significantly when audio is added and that the broader implications of models like Veo 3 may not be fully appreciated yet.
He explained that video is a high-bandwidth medium for the brain, which is used not just for entertainment, but also for work and learning. According to him, the average person finds video more accessible than reading or writing, and the barrier to creating video content is approaching zero.
The most important shift, in his view, is that video is now directly optimisable. He wrote that until now, platforms like TikTok relied on ranking and serving a finite set of videos created by humans.
This involved “human creators learning what people like and then ranking algorithms learning how to best show a video to a person”, which he described as “a very poor optimiser”.
In contrast, models like Veo 3 produce video through a neural network, making the process differentiable. “You can now take arbitrary objectives, and crush them with gradient descent,” he said.
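The idea can be illustrated with a toy sketch: because the generator is a neural network, gradients of any scalar objective can flow back through it, so an input latent can be nudged directly towards whatever the objective rewards. Everything below (the tiny `tanh` "generator", the `engagement_score` stand-in, the weights `W`) is illustrative and has no connection to Veo 3's actual architecture.

```python
import numpy as np

# Toy illustration of "directly optimisable" generation: a frozen,
# differentiable generator maps a latent vector to an output, and
# gradient ascent pushes that output towards an arbitrary scalar
# objective. All names here are hypothetical stand-ins.

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8)) * 0.1     # frozen "generator" weights
target = rng.normal(size=4)           # stand-in for an engagement signal

def generate(z):
    return np.tanh(W @ z)             # differentiable generation step

def engagement_score(frame):
    return float(target @ frame)      # arbitrary scalar objective

z = rng.normal(size=8)                # the latent we optimise
initial = engagement_score(generate(z))

lr = 0.5
for _ in range(200):
    frame = generate(z)
    # Analytic gradient of the score w.r.t. z through tanh:
    # d(score)/dz = W^T @ (target * (1 - tanh^2))
    grad = W.T @ (target * (1.0 - frame ** 2))
    z += lr * grad                    # ascend the objective directly

final = engagement_score(generate(z))
print(initial, final)
```

The point of the sketch is only that the objective is optimised end to end by gradient steps, with no human in the loop ranking a fixed set of outputs.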
This means engagement metrics like ad clicks or even pupil dilation could be used to guide video generation directly. Even without changing the model parameters, simply refining prompts, either by humans or AIs, could act as a powerful optimisation loop.
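The prompt-refinement variant needs no gradients at all: an outer loop proposes prompt variants, scores each one on some engagement proxy, and keeps the best. The sketch below is a minimal hill-climbing loop under stated assumptions; `score_prompt` and the word list are invented stand-ins for "generate a video and measure clicks or watch time", not any real API.

```python
import random

# Hedged sketch of prompt refinement as a black-box optimisation loop:
# the model's weights never change; only the prompt is iterated on.
# score_prompt and WORDS are hypothetical stand-ins for a real
# generate-then-measure-engagement pipeline.

random.seed(0)
WORDS = ["cinematic", "vivid", "slow-motion", "dramatic", "cosy", "neon"]

def score_prompt(prompt):
    # Stand-in for: generate a video from the prompt, measure engagement.
    return sum(len(w) for w in prompt.split() if w in WORDS)

def refine(prompt, steps=50):
    best, best_score = prompt, score_prompt(prompt)
    for _ in range(steps):
        candidate = best + " " + random.choice(WORDS)  # propose a variant
        s = score_prompt(candidate)
        if s > best_score:                             # keep improvements
            best, best_score = candidate, s
    return best, best_score

start = "a city street at dusk"
initial_score = score_prompt(start)
best, score = refine(start)
print(initial_score, score)
```

A human tweaking prompts by feel, or an AI proposing variants, plays the role of the `refine` loop; the "optimiser" is just propose, evaluate, keep.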
Karpathy questioned why platforms should rely on a fixed library of videos when they can generate unlimited ones and tune them in real time. He said video could become a core interface for AI-to-human communication and future graphical interfaces, pointing out that diagrams and animations often make concepts easier to grasp than text.
He concluded with a warning. While this direction opens up new creative and functional possibilities, he said, “I’m not so sure that we will like what ‘optimal’ looks like.”