Advanced AI News
Video Generation

Stanford Researchers Propose FramePack: A Compression-based AI Framework to Tackle Drifting and Forgetting in Long-Sequence Video Generation Using Efficient Context Management and Sampling

By Advanced AI Editor | April 21, 2025 | 6 Mins Read


Video generation, a branch of computer vision and machine learning, focuses on creating sequences of images that simulate motion and visual realism over time. It requires models to maintain coherence across frames, capture temporal dynamics, and generate new visuals conditioned on prior frames or inputs. This domain has seen rapid advances, especially with the integration of deep learning techniques such as diffusion models and transformers. These models have empowered systems to produce increasingly longer and higher-quality video sequences. However, generating coherent frames across extended sequences remains computationally intensive and prone to degradation in quality due to issues like memory limitations and accumulated prediction errors.

A major challenge in video generation is maintaining visual consistency while minimizing computational overhead. As frames are generated sequentially, any error in earlier frames tends to propagate, leading to noticeable visual drift in longer sequences. Simultaneously, models struggle to retain memory of initial frames, causing inconsistencies in motion and structure, often referred to as the forgetting problem. Efforts to address one issue tend to worsen the other. Increasing memory depth enhances temporal coherence but also accelerates the spread of errors. Reducing dependence on prior frames helps curb error accumulation but increases the likelihood of inconsistency. Balancing these conflicting requirements is a fundamental obstacle in next-frame prediction tasks.

Various techniques have emerged to mitigate forgetting and drifting. Noise scheduling and augmentation methods adjust the input conditions to modulate the influence of past frames, as seen in frameworks like DiffusionForcing and RollingDiffusion. Anchor-based planning methods and guidance using history frames have also been tested. In addition, a range of architectural optimizations, including linear and sparse attention mechanisms, low-bit computation, and distillation, help reduce resource demands. Long video generation frameworks like Phenaki, NUWA-XL, and StreamingT2V introduce structural changes or novel generation paradigms to extend temporal coherence. Despite these innovations, the field still lacks a unified and computationally efficient approach that can reliably balance memory and error control.

Researchers at Stanford University introduced a new architecture called FramePack to address these interlinked challenges. This structure hierarchically compresses input frames based on their temporal importance, ensuring that recent frames receive higher fidelity representation while older ones are progressively downsampled. By doing so, the method maintains a fixed transformer context length regardless of the video’s duration. This effectively removes the context length bottleneck and allows for efficient scaling without exponential growth in computation. In parallel, FramePack incorporates anti-drifting sampling techniques that utilize bi-directional context by generating anchor frames first, particularly the beginning and end of a sequence, before interpolating the in-between content. Another variant even reverses the generation order, starting from the last known high-quality frame and working backward. This inverted sampling proves particularly effective in scenarios such as image-to-video generation, where a static image is used to generate a full motion sequence.
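As a rough illustration of the sampling orders described above, the following Python sketch contrasts the standard causal order with the anchor-first and inverted variants. The helper names and the exact fill-in order of interior frames are assumptions for illustration; the paper's actual schedules may differ.

```python
def causal_order(num_frames):
    # Standard next-frame prediction: generate front to back,
    # so errors in early frames propagate forward (drifting).
    return list(range(num_frames))

def anti_drifting_order(num_frames):
    # Generate the endpoint anchor frames first, then fill the middle,
    # so each in-between frame can attend to bi-directional context.
    interior = list(range(1, num_frames - 1))
    return [0, num_frames - 1] + interior

def inverted_order(num_frames):
    # Start from the last known high-quality frame and work backward,
    # e.g. toward a user-supplied image in image-to-video generation.
    return list(range(num_frames - 1, -1, -1))

print(anti_drifting_order(5))  # [0, 4, 1, 2, 3]
print(inverted_order(5))       # [4, 3, 2, 1, 0]
```

The point of the contrast is that in the anchor-first and inverted orders, no frame is generated purely from accumulated predictions, which is what lets these schedules curb drift.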

The FramePack design is built around a prioritized compression system that limits the transformer’s total context length. In standard video diffusion models like Hunyuan or Wan, each 480p frame generates approximately 1560 tokens of context. When predicting the next frame using a Diffusion Transformer (DiT), the total context length increases linearly with the number of input and output frames. For example, with 100 input frames and one predicted frame, the context length could exceed 157,000 tokens, which becomes computationally impractical.
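The arithmetic behind that figure is straightforward; a quick check using the numbers quoted above:

```python
tokens_per_frame = 1560   # ~tokens contributed by one 480p frame (per the article)
input_frames = 100
predicted_frames = 1

# Without compression, DiT context grows linearly with the frame count.
context_len = tokens_per_frame * (input_frames + predicted_frames)
print(context_len)  # 157560 tokens, i.e. "could exceed 157,000"
```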

FramePack addresses this by applying a progressive compression schedule based on frame importance. More recent frames are considered more relevant and are allocated higher resolution, while older frames are increasingly downsampled. The compression follows a geometric progression controlled by a parameter, typically set to 2, which reduces the context length for each earlier frame by half. For instance, the most recent frame may use full resolution, the next one half, then a quarter, and so on. This design ensures that the total context length stays within a fixed limit, no matter how many frames are input.
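A minimal sketch of such a schedule, using λ = 2 as in the typical setting described above; the integer rounding and the one-token floor for very old frames are illustrative assumptions, not the paper's exact recipe:

```python
def framepack_context_schedule(num_frames, full_tokens=1560, lam=2, floor=1):
    """Token budget per frame, most recent first:
    full_tokens, full_tokens/lam, full_tokens/lam**2, ..."""
    budget, lengths = float(full_tokens), []
    for _ in range(num_frames):
        lengths.append(max(int(budget), floor))
        budget /= lam
    return lengths

sched = framepack_context_schedule(100)
print(sched[:5])  # [1560, 780, 390, 195, 97]
# The geometric series keeps the total near 2 * full_tokens (plus the
# floor tail), versus ~156,000 tokens uncompressed for the same frames.
print(sum(sched))
```

Because the per-frame budgets form a geometric series, the total context stays bounded regardless of how many input frames are appended, which is what fixes the transformer's context length.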

Compression is implemented using 3D patchifying kernels, such as (2, 4, 4), (4, 8, 8), and (8, 16, 16), which control how frames are broken into smaller patches before processing. These kernels are trained with independent parameters to stabilize learning. For cases where the input sequence is extremely long, low-importance tail frames are either dropped, minimally included, or globally pooled to avoid unnecessary overhead. This allows FramePack to manage videos of arbitrary length efficiently while maintaining high model performance.
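To see how coarser patchify kernels shrink token counts, here is an illustrative calculation; the latent grid dimensions are made up for the example and are not the models' actual shapes:

```python
def num_tokens(frames, height, width, kernel):
    kt, kh, kw = kernel  # temporal, height, width patch sizes
    # Each (kt x kh x kw) 3D patch of the latent becomes one token.
    return (frames // kt) * (height // kh) * (width // kw)

# Hypothetical latent grid: 16 frames of a 64 x 96 latent.
for kernel in [(2, 4, 4), (4, 8, 8), (8, 16, 16)]:
    print(kernel, num_tokens(16, 64, 96, kernel))
# Each step to a coarser kernel cuts the token count by 8x:
# (2, 4, 4) -> 3072, (4, 8, 8) -> 384, (8, 16, 16) -> 48
```

Doubling every kernel dimension divides the token count by 2 × 2 × 2 = 8, which is why assigning coarser kernels to older frames compresses their context so aggressively.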

Performance metrics confirm the practical value of FramePack. When integrated into pretrained diffusion models like HunyuanVideo and Wan, FramePack reduced the memory usage per step while enabling larger batch sizes, up to the scale commonly used in image diffusion training. The anti-drifting techniques substantially improved visual quality. By reducing the diffusion scheduler’s aggressiveness and balancing the shift timesteps, the models showed fewer artifacts and greater frame-to-frame coherence. The inverted sampling approach, particularly, resulted in better approximation of known frames, enabling high-fidelity generation when a target image is known. These improvements occurred without additional training from scratch, demonstrating the adaptability of the FramePack module as a plug-in enhancement to existing architectures.

This research thoroughly examines and addresses the core difficulties of next-frame video generation. The researchers developed FramePack, an approach that applies progressive input compression and modified sampling strategies to ensure scalable, high-quality video generation. Through fixed context lengths, adaptive patchifying, and innovative sampling order, FramePack succeeds in preserving both memory and visual clarity over long sequences. Its modular integration into pretrained models highlights its practical utility and future potential across varied video generation applications.

Several key takeaways from the research on FramePack include:

FramePack ensures a fixed transformer context length, allowing models to scale to longer video sequences without increased computational cost.  

Uses a geometric progression (λ = 2) to compress earlier frames, significantly reducing the context length even for large numbers of input frames.  

Applies 3D patchify kernels like (2, 4, 4), (4, 8, 8), and (8, 16, 16), each trained with independent parameters to ensure stable learning.  

Anti-drifting sampling methods leverage bi-directional context and early endpoint generation, improving overall video quality.  

Inverted temporal sampling excels in image-to-video generation tasks by anchoring on high-quality user input frames.  

Enables image-diffusion scale batch sizes in training, leading to efficient learning and higher throughput.  

Integrates with existing models like HunyuanVideo and Wan without requiring full retraining.  

Provides multiple tail-handling strategies (e.g., global pooling, minimal inclusion), showing negligible impact on visual fidelity.

Check out the Paper and GitHub Page.

Sana Hassan, a consulting intern at Marktechpost and dual-degree student at IIT Madras, is passionate about applying technology and AI to address real-world challenges. With a keen interest in solving practical problems, he brings a fresh perspective to the intersection of AI and real-life solutions.
