AI video startup Runway recently took to X to announce its new Gen-4 series of AI models, which can generate media from just a single image as a reference.
“Gen-4 is a significant step forward for fidelity, dynamic motion and controllability in generative media,” the company stated.

According to Runway, the new models set a new standard in video generation and improve on Gen-3 Alpha. “It excels in its ability to generate highly dynamic videos with realistic motion as well as subject, object and style consistency with superior prompt adherence and best-in-class world understanding,” the company wrote on X.
Gen-4 is the first Runway model that the company claims achieves world consistency. Cristóbal Valenzuela, co-founder and CEO of Runway, stated that users can build consistent worlds, with persistent environments, objects, locations, and characters.
Meanwhile, Jamie Umpherson, head of Runway Studios, said, “You can start to tell longer form narrative content. With actual continuity, you can generate the same characters, the same objects, the same locations across different scenarios, so you can block your scenes and tell your stories with intention over and over again.”
In a behind-the-scenes look at how the model was used to create short films, the team explained that users can direct a subject across a scene.
The official research page for Runway Gen-4 highlighted that users can set their preferred look and feel, and the model will maintain it across every frame. Users can also regenerate the same elements from multiple perspectives and positions within a scene.
It also stated that the model is well suited to generating product photography and narrative content.
The video generation model is rolling out to all paid and enterprise customers. Users can find a collection of short films and music videos made with Gen-4 on its behind-the-scenes page.
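For paid users who prefer to script generations rather than use the web editor, Runway also offers a developer API with a Python SDK. The sketch below shows what an image-to-video request could look like through that SDK; the `gen4_turbo` model identifier and the parameter values are assumptions, since the announcement does not confirm how Gen-4 is exposed in the API.

```python
# Minimal sketch using Runway's Python SDK (pip install runwayml).
# Assumes Gen-4 is reachable through the existing image_to_video endpoint;
# the "gen4_turbo" model id and prompt values below are assumptions.
import time

from runwayml import RunwayML

client = RunwayML()  # reads the RUNWAYML_API_SECRET environment variable

# Start a generation from a single reference image plus a text prompt.
task = client.image_to_video.create(
    model="gen4_turbo",  # assumed Gen-4 model identifier
    prompt_image="https://example.com/reference.png",
    prompt_text="The character walks across the scene toward the window.",
)

# The API is asynchronous: poll the task until it finishes.
while True:
    task = client.tasks.retrieve(task.id)
    if task.status in ("SUCCEEDED", "FAILED"):
        break
    time.sleep(10)

if task.status == "SUCCEEDED":
    print(task.output)  # URL(s) of the generated video
```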