Lightricks, a company that develops AI tools for content creation, has announced the release of its latest video model, LTXV-13B.
As the name suggests, the new model boasts 13 billion parameters to improve the quality of creators’ videos, even when running on consumer-grade hardware.
This is the first version of LTXV to offer "upsampling" controls that enhance video quality, plus new "multiscale rendering" techniques that enable a layered approach to faster and more realistic scene generation. It's also the first Lightricks model to be trained on high-quality visual content provided by Shutterstock and Getty, licensed via partnerships with the companies.
LTXV-13B is integrated into Lightricks's AI storyboarding and video creation web app LTX Studio, and will be open-sourced, available to use for free via GitHub and Hugging Face.
Accelerating open-source video AI
The release of LTXV-13B builds on the company's two-billion-parameter video-generation model. When LTX Video debuted in November, it was one of the first open-source models of its kind, challenging proprietary AI video generation model makers like OpenAI, Google and Adobe.
LTXV's release under an open-source model, combined with its compact architecture, proved to be a game-changer. It was efficient enough to run on consumer-grade graphics cards like the Nvidia RTX 4090, and claimed to deliver the visual fidelity and motion consistency of professional-grade tools. eWeek described LTXV as the first AI model to generate video faster than real time.
LTXV put AI video generation into the hands of a much broader audience, emerging as a favourite for marketing teams and content creators under pressure to create high-quality video assets.
LTXV-13B adds new capabilities designed to increase video quality while remaining accessible to end users. It is just as fast as its predecessor and can still run on consumer hardware.
Enhanced scene rendering
One of the most intriguing new features in LTXV-13B is multiscale rendering, which lets users replicate the staged scene-construction process typically used in Hollywood productions. The AI starts with a basic layout, then adds more detail and clarity to each frame, gradually increasing the resolution as it refines details.
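Conceptually, this coarse-to-fine approach can be sketched with a toy example. The snippet below is illustrative only, not Lightricks's actual implementation: it generates a "frame" at low resolution, then repeatedly doubles the resolution and applies a stand-in detail pass at each scale.

```python
import numpy as np

def upscale(frame, factor):
    """Nearest-neighbour upscaling: repeat each pixel along both axes."""
    return frame.repeat(factor, axis=0).repeat(factor, axis=1)

def refine(frame, rng):
    """Stand-in for a model's detail pass: add small high-frequency detail."""
    return frame + rng.normal(scale=0.01, size=frame.shape)

def multiscale_render(base_res=16, scales=3, seed=0):
    """Toy coarse-to-fine render: start small, then upscale and refine."""
    rng = np.random.default_rng(seed)
    frame = rng.random((base_res, base_res))  # coarse scene layout
    for _ in range(scales):
        frame = upscale(frame, 2)   # double the resolution
        frame = refine(frame, rng)  # add detail at the new scale
    return frame

frame = multiscale_render()
print(frame.shape)  # (128, 128): 16 -> 32 -> 64 -> 128
```

The efficiency intuition is the same as in the production system: most of the expensive work happens at low resolution, and each higher-resolution pass only has to add local detail rather than regenerate the whole scene.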
According to Lightricks, the technique brings two major benefits – increased realism and reduced latency. Rendering speeds are up to 30 times faster than competing video-generation models, the company says.
LTXV-13B has also been enhanced with a number of new open-source contributions, including VACE model inference tools that support reference-to-video editing, plus upsampling controls for reducing the effects of noise and improving granularity on every frame. It also uses an efficient Q8 kernel to scale its performance on low-powered devices like laptops.
While the open-source version of LTXV-13B is available on GitHub and Hugging Face, the easiest way for less technically minded users to access the model is via LTX Studio.
LTX Studio is used by marketing teams, advertising studios, and other content creators to transform their ideas into storyboards, pitch decks and, eventually, polished videos, without any of the hassles of traditional studio-based production. Teams can start projects by uploading a script, a basic text prompt, or reference images, or via a sketch-to-shot workflow, where users start with a rough sketch and iterate rapidly by adding AI-generated details.
By eliminating the need for full-scale production teams, location shoots, physical studios, and traditional storyboarding, LTX Studio can provide benefits for marketing teams, giving them time to explore more ideas and accelerate video creation. LTX Studio recently won a Digiday award for the best ad tech innovation of 2024.
Multi-model app
LTXV-13B is not the only model available in the platform. Last month, Lightricks announced that it would integrate Google's Veo 2 video model into the app. Support for the popular Flux model was added in November, letting users generate reference stills.
Users can switch between multiple models within each project to compare outputs. LTX Studio offers a growing number of models alongside several specialised editing tools, including keyframe editing, camera motion control, character and scene-level motion adjustment, and multi-shot sequencing. The platform lets users refine their video productions beyond what's possible with the standalone LTXV-13B model.
Lightricks Co-founder and CEO Zeev Farbman says the launch of LTXV-13B is a pivotal moment for the nascent AI video generation industry. “Our users can now create content with more consistency, better quality and tighter control,” he said in a statement.