AI research firm Runway has launched Aleph, a new video-to-video model that allows creators to edit existing footage using simple text prompts. Runway Aleph can add or remove objects, change a scene’s environment, and generate entirely new camera angles.
The release positions the New York-based company as a direct competitor to AI video tools from Adobe, Google, and Meta. The launch comes as Runway partners with Imax to screen AI-generated films, and follows reports that the company rejected a takeover bid from Meta, underscoring its growing influence.
What Aleph Brings to the Editing Suite
Runway describes Aleph as “a state-of-the-art in-context video model, setting a new frontier for multi-task visual generation,” designed to perform a wide range of in-context video manipulations. Unlike text-to-video models that create content from scratch, Aleph transforms existing footage, giving filmmakers granular control over their shots. This approach allows creators to work with what they have already filmed, augmenting it with AI rather than replacing it entirely.
The model offers powerful environmental controls, allowing users to change the entire mood of a scene with simple prompts. According to Runway’s announcement, creators can transform the lighting from harsh noon to golden hour, turn a sunny day into a snowy one, or convert a daytime shot into a night scene. The AI naturally adjusts shadows, reflections, and color temperature to maintain realism across the entire frame.
Object manipulation is another core feature. Users can seamlessly add new elements, such as placing products on a table or inserting crowds into an empty street, using either text descriptions or reference images. The tool is equally adept at removing unwanted objects or reflections from a scene. Furthermore, Aleph can retexture or completely replace existing objects, offering a new level of post-production flexibility.
For narrative and cinematic purposes, Aleph provides tools to generate endless coverage from a single shot. Filmmakers can request new camera angles like wide shots, close-ups, or reverse shots, and even ask the model to generate the next logical shot in a sequence. It can also apply the motion from one video to a static image, enabling fine-grained control over camera movement.
Beyond objects and cameras, the model can alter a character’s appearance, such as their age, without requiring complex makeup or costly visual effects. It also supports style transfers, allowing any aesthetic—from anime to watercolor—to be applied to a video. For post-production workflows, Aleph includes a sophisticated green screen tool that can isolate any subject with precise edge detection, preserving fine details like hair strands for seamless compositing.
Runway’s Rising Profile in the AI Arms Race
The launch of Aleph is the latest in a series of strategic moves that have elevated Runway’s industry profile. The company recently announced a landmark partnership with Imax to bring AI-generated films to a commercial audience, a significant step in legitimizing AI as a cinematic medium.
Ten finalist films from Runway’s AI Film Festival will be screened in ten major U.S. cities from August 17 to 20. In a statement, Imax Chief Content Officer Jonathan Fischer explained the company’s perspective: “How these tools will shape filmmaking is an area for us to continue to explore while honoring the intent of our creative partners.”
Runway co-founder and CEO Cristóbal Valenzuela added, “The quality, variety and storytelling of these films deserves a premium viewing experience. This partnership will bring AIFF to thousands of moviegoers across America, at the highest possible quality.” The move takes AI filmmaking from niche online communities to the largest screens available.
This rising influence likely explains why Meta reportedly attempted to acquire the startup in a bid that was ultimately rejected. Runway’s decision to remain independent underscores its confidence in its technology and its central role in the future of creative tools.
A Crowded Field: How Aleph Stacks Up Against the Competition
Aleph enters a fiercely competitive market for AI video tools. Big Tech has invested heavily in this space, with each player carving out a different niche. Adobe has focused on integrating its “commercially safe” Firefly AI directly into Premiere Pro, a strategy designed to appeal to enterprises wary of copyright disputes.
Meanwhile, models like OpenAI’s Sora and Google’s Veo 2 have set a high bar for text-to-video realism, creating entire scenes from prompts. Runway’s focus on editing existing footage with Aleph offers a different, more hands-on workflow for filmmakers who want to augment, not just generate, their content.
Netflix recently made headlines by using generative AI for a VFX sequence in its series The Eternaut. Co-CEO Ted Sarandos framed it as a tool for creative enablement, stating it was “an incredible opportunity to help creators make films and series better, not just cheaper.” He claimed “that VFX sequence was completed 10 times faster than it could have been completed with traditional VFX tools and workflows,” a metric that highlights the efficiency studios are chasing.
This push for AI-driven content creation is not without controversy. The industry is grappling with significant legal and ethical questions, particularly around copyright. Disney’s general counsel, Horacio Gutierrez, recently commented on a lawsuit against Midjourney, stating, “piracy is piracy. And the fact that it’s done by an AI company does not make it any less infringing.”
The debate pits the potential for creative innovation against fears of job displacement and intellectual property theft. While some, like filmmaker Darren Aronofsky, believe that “now is the moment to explore these new tools and shape them for the future of storytelling,” many creative unions remain wary. Runway’s Aleph, building on its predecessor Gen-4, now stands at the center of this evolving landscape.