Advanced AI News
Video Generation

Lightricks just made AI video generation 30x faster — and you won’t need a $10,000 GPU

By Advanced AI Editor · May 6, 2025 · 7 min read



Lightricks, the company behind popular creative apps like Facetune and VideoLeap, announced today the release of its most powerful AI video generation model to date. The LTX Video 13-billion-parameter model (LTXV-13B) generates high-quality AI video up to 30 times faster than comparable models while running on consumer-grade hardware rather than expensive enterprise GPUs.

The model introduces “multiscale rendering,” a novel technical approach that dramatically increases efficiency by generating video in progressive layers of detail. This enables creators to produce professional-quality AI videos on standard desktop computers and high-end laptops instead of requiring specialized enterprise equipment.

“The introduction of our 13B parameter LTX Video model marks a pivotal moment in AI video generation with the ability to generate fast, high-quality videos on consumer GPUs,” said Zeev Farbman, co-founder and CEO of Lightricks, in an exclusive interview with VentureBeat. “Our users can now create content with more consistency, better quality, and tighter control.”

How Lightricks democratizes AI video by solving the GPU memory problem

A major challenge for AI video generation has been the enormous computational requirements. Leading models from companies like Runway, Pika, and Luma typically run in the cloud on multiple enterprise-grade GPUs with 80GB or more of VRAM (video memory), making local deployment impractical for most users.

Farbman explained how LTXV-13B addresses this limitation: “The major dividing line between consumer and enterprise GPUs is the amount of VRAM. Nvidia positions their gaming hardware with strict memory limits — the previous generation 3090 and 4090 GPUs maxed out at 24 gigabytes of VRAM, while the newest 5090 reaches 32 gigabytes. Enterprise hardware, by comparison, offers significantly more.”

The new model is designed to operate effectively within these consumer hardware constraints. “The full model, without any quantization, without any approximation, you will be able to run on top consumer GPUs — 3090, 4090, 5090, including their laptop versions,” Farbman noted.

Two AI-generated rabbits, rendered on a single consumer GPU, stride off after a brief glance at the camera — an unedited four-second sample from Lightricks’ new LTXV-13B model. (Credit: Lightricks)

Inside ‘multiscale rendering’: The artist-inspired technique that makes AI video generation 30X faster

The core innovation behind LTXV-13B‘s efficiency is its multiscale rendering approach, which Farbman described as “the biggest technical breakthrough of this release.”

“It allows the model to generate details gradually,” he explained. “You’re starting on the coarse grid, getting a rough approximation of the scene, of the motion of the objects moving, etc. And then the scene is kind of divided into tiles. And every tile is filled with progressively more details.”

This process mirrors how artists approach complex scenes — starting with rough sketches before adding progressively finer details. The advantage for AI is that “your peak amount of VRAM is limited by a tile size, not the final resolution,” Farbman said.
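The coarse-grid-then-tiles process Farbman describes can be sketched in a few lines. This is an illustrative toy, not Lightricks' actual implementation: `refine_tile` stands in for the model's detail-adding pass, and the point is simply that each refinement step only ever holds one small tile in memory, so the peak working set scales with the tile size rather than the final resolution.

```python
import numpy as np

def refine_tile(coarse_tile: np.ndarray, scale: int) -> np.ndarray:
    """Stand-in for a detail-adding pass: upsample one tile."""
    return coarse_tile.repeat(scale, axis=0).repeat(scale, axis=1)

def multiscale_render(coarse: np.ndarray, scale: int, tile: int) -> np.ndarray:
    """Build the full-resolution frame by refining one tile at a time."""
    h, w = coarse.shape
    out = np.empty((h * scale, w * scale), dtype=coarse.dtype)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            block = coarse[y:y + tile, x:x + tile]  # small working set
            out[y * scale:(y + block.shape[0]) * scale,
                x * scale:(x + block.shape[1]) * scale] = refine_tile(block, scale)
    return out

coarse = np.arange(16, dtype=np.float32).reshape(4, 4)  # rough approximation
final = multiscale_render(coarse, scale=4, tile=2)      # refined in 2x2 tiles
print(final.shape)  # (16, 16): full resolution, built tile by tile
```

In a real diffusion model the refinement pass is far heavier than an upsample, which is exactly why bounding its input to one tile at a time matters for VRAM.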

The model also features a more compressed latent space, which requires less memory while maintaining quality. “With videos, you have a higher compression ratio that allows you, while you’re in the latent space, to just take less VRAM,” Farbman added.
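As a back-of-the-envelope illustration of why a compressed latent space matters, the sketch below compares the memory of a short clip held as raw fp16 pixels versus a latent tensor. The compression ratios and channel count are hypothetical round numbers chosen for the example, not LTXV-13B's published specifications.

```python
def tensor_bytes(frames: int, height: int, width: int,
                 channels: int, bytes_per_value: int = 2) -> int:
    """Memory for one fp16 video tensor."""
    return frames * height * width * channels * bytes_per_value

# A 5-second, 24 fps, 768x512 RGB clip in pixel space:
pixel = tensor_bytes(frames=120, height=512, width=768, channels=3)

# The same clip in a hypothetical latent space with 8x spatial and
# 4x temporal downsampling, and (say) 16 latent channels:
latent = tensor_bytes(frames=120 // 4, height=512 // 8,
                      width=768 // 8, channels=16)

print(f"pixel:  {pixel / 1e9:.2f} GB")   # ~0.28 GB
print(f"latent: {latent / 1e6:.1f} MB")  # ~5.9 MB
print(f"ratio:  {pixel / latent:.0f}x smaller")
```

Even with made-up ratios, the working tensor the denoiser must keep in VRAM shrinks by an order of magnitude or more, which compounds with the tiling trick above.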

Performance metrics showing Lightricks’ LTXV-13B model generating video in just 37.59 seconds, compared to over 1,491 seconds for a competing model on equivalent hardware — a nearly 40× speed improvement. (Credit: Lightricks)
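The quoted benchmark figures work out as follows (using only the two numbers from the caption above):

```python
ltxv_seconds = 37.59        # LTXV-13B generation time
competitor_seconds = 1491.0  # competing model, same hardware
speedup = competitor_seconds / ltxv_seconds
print(f"{speedup:.1f}x")    # ~39.7x, i.e. "nearly 40x"
```

The headline "30x faster" is thus the conservative claim; on this particular benchmark the measured gap is closer to 40x.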

Why Lightricks is betting on open source when AI markets are increasingly closed

While many leading AI models remain behind closed APIs, Lightricks has made LTXV-13B fully open source, available on both Hugging Face and GitHub. This decision comes during a period when open-source AI development has faced challenges from commercial competition.

“A year ago, things were closed, but things are kind of opening up. We’re seeing really a lot of cool LLMs and diffusion models opening up,” Farbman reflected. “I’m more optimistic now than I was half a year ago.”

The open-source strategy also helps accelerate research and improvement. “The main rationality for open-sourcing it is to reduce the cost of your R&D,” Farbman explained. “There are a ton of people in academia that use the model, write papers, and you’re starting to become this curator that understands where the real gold is.”

How Getty and Shutterstock partnerships help solve AI’s copyright challenges

As legal challenges mount against AI companies using scraped training data, Lightricks has secured partnerships with Getty Images and Shutterstock to access licensed content for model training.

“Collecting data for training AI models is still a legal gray area,” Farbman acknowledged. “We have big customers in our enterprise segment that care about this kind of stuff, so we need to make sure we can provide clean models for them.”

These partnerships allow Lightricks to offer a model with reduced legal risk for commercial applications, potentially giving it an advantage in enterprise markets concerned about copyright issues.

The strategic gamble: Why Lightricks offers its advanced AI model free to startups

In an unusual move for the AI industry, Lightricks is offering LTXV-13B free to license for enterprises with under $10 million in annual revenue. This approach aims to build a community of developers and companies who can demonstrate the model’s value before monetization.

“The thinking was that academia is off the hook. These guys can do whatever they want with the model,” Farbman said. “With startups and industry, you want to create win-win situations. I don’t think you can make a ton of money from a community of artists playing with AI stuff.”

For larger companies that find success with the model, Lightricks plans to negotiate licensing agreements similar to how game engines charge successful developers. “Once they hit ten million in revenue, we’re going to come to talk with them about licensing,” Farbman explained.

Despite the advances represented by LTXV-13B, Farbman acknowledges that AI video generation still has limitations. “If we’re honest with ourselves and look at the top models, we’re still far away from Hollywood movies. They’re not there yet,” he said.

However, he sees immediate practical applications in areas like animation, where creative professionals can use AI to handle time-consuming aspects of production. “When you think about production costs of high-end animation, the real creative work, people thinking about key frames and the story, is a small percent of the budget. But key framing is a big resource thing,” Farbman noted.

Looking ahead, Farbman predicts the next frontier will be multimodal video models that integrate different media types in a shared latent space. “It’s going to be music, audio, video, etc. And then things like doing good lip sync will be easier. All these things will disappear. You’re going to have this multimodal model that knows how to operate across all these different modalities.”

LTXV-13B is available now as an open-source release and is being integrated into Lightricks’ creative apps, including its flagship storytelling platform, LTX Studio.

