Diffusion Transformer (DiT)-based video diffusion models generate
high-quality videos at scale but incur prohibitive processing latency and
memory costs for long videos. To address this, we propose a novel distributed
inference strategy, termed DualParal. The core idea is that, instead of
generating an entire video on a single GPU, we parallelize both temporal frames
and model layers across GPUs. However, a naive implementation of this division faces a key limitation: diffusion models require synchronized noise levels across frames, so this division serializes the very parallelism it introduces. We handle this with a block-wise denoising scheme: a sequence of frame blocks moves through the pipeline at progressively decreasing noise levels, and each GPU applies its layer subset to one block while passing earlier results to the next GPU, enabling asynchronous computation and communication.
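To make the scheduling concrete, here is a minimal, self-contained sketch (not the authors' implementation; `NUM_GPUS`, `NUM_BLOCKS`, and `NUM_STEPS` are illustrative placeholders) that simulates how frame blocks at different noise levels might flow through a layer-partitioned pipeline, keeping every GPU busy on a different block at each tick:

```python
from collections import deque

NUM_GPUS = 4    # pipeline stages; GPU g holds layer shard g (illustrative)
NUM_BLOCKS = 6  # frame blocks in flight
NUM_STEPS = 3   # denoising steps each block must complete

def run_pipeline():
    # Each entry is (block_id, remaining_steps). One full traversal of all
    # stages applies one denoising step (the full layer stack) to a block.
    queue = deque((b, NUM_STEPS) for b in range(NUM_BLOCKS))
    stages = [None] * NUM_GPUS  # the block each GPU is working on this tick
    tick = 0
    while queue or any(s is not None for s in stages):
        finished = stages[-1]          # block leaving the last stage
        stages = [None] + stages[:-1]  # hand activations to the next GPU
        if finished is not None:
            block, remaining = finished
            if remaining > 1:
                queue.append((block, remaining - 1))  # re-enter, less noisy
        if queue:
            stages[0] = queue.popleft()  # feed the first stage
        busy = ", ".join(f"GPU{g}:block{s[0]}(steps_left={s[1]})"
                         for g, s in enumerate(stages) if s is not None)
        print(f"tick {tick:2d}: {busy}")
        tick += 1

run_pipeline()
```

In the real system each hand-off would be an asynchronous GPU-to-GPU transfer (e.g., point-to-point sends in torch.distributed), so communication overlaps with the next block's computation rather than running in a single process as simulated here.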
To further optimize performance, we incorporate two key enhancements. First, each GPU maintains a feature cache that stores and reuses features from the prior block as context, minimizing inter-GPU communication and redundant computation.
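As a rough illustration of this first enhancement, the sketch below is hypothetical: `CachedStage`, the concatenation of cached features along the frame axis, and the linear stand-in layer are assumptions, not the paper's design. It shows a per-GPU cache that keeps the previous block's features as context for the next block:

```python
import torch
import torch.nn as nn

class CachedStage(nn.Module):
    """Hypothetical pipeline stage with a per-GPU feature cache."""

    def __init__(self, dim: int):
        super().__init__()
        self.layers = nn.Linear(dim, dim)  # stand-in for this GPU's DiT layers
        self.prev_feats = None             # features cached from the prior block

    def forward(self, block: torch.Tensor) -> torch.Tensor:
        # Reuse the prior block's cached features as temporal context instead
        # of recomputing them or fetching them from another GPU.
        if self.prev_feats is not None:
            ctx = torch.cat([self.prev_feats, block], dim=1)
        else:
            ctx = block
        out = self.layers(ctx)[:, -block.shape[1]:]  # keep current block only
        self.prev_feats = block.detach()             # cache for the next block
        return out

stage = CachedStage(dim=64)
b0 = torch.randn(1, 8, 64)  # (batch, frames_in_block, channels)
b1 = torch.randn(1, 8, 64)
stage(b0)                   # first block: no cached context yet
print(stage(b1).shape)      # block 1 is processed with cached block-0 context
```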
Second, we employ a coordinated noise initialization strategy that ensures globally consistent temporal dynamics by sharing initial noise patterns across GPUs without extra resource costs.
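This second enhancement amounts to deriving each block's initial noise from state that every GPU already shares, so no noise tensors need to be transmitted. A minimal sketch, assuming a shared seed (`SHARED_SEED` and `init_block_noise` are illustrative names):

```python
import torch

SHARED_SEED = 1234  # illustrative: agreed on once by all ranks

def init_block_noise(block_id: int, shape, device):
    """Every GPU derives the same initial noise for a given block from the
    shared seed, keeping temporal dynamics globally consistent without
    transmitting noise tensors between GPUs."""
    gen = torch.Generator(device="cpu").manual_seed(SHARED_SEED + block_id)
    return torch.randn(shape, generator=gen).to(device)

# Any rank reproduces the identical noise for block 0 locally:
n_a = init_block_noise(0, (1, 8, 64), "cpu")
n_b = init_block_noise(0, (1, 8, 64), "cpu")
assert torch.equal(n_a, n_b)
```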
Together, these enable fast, artifact-free, and infinitely long video generation. Applied to the latest diffusion transformer video generator, our method efficiently produces 1,025-frame videos with up to 6.54× lower latency and 1.48× lower memory cost on 8×RTX 4090 GPUs.