Re-ttention exploits the temporal redundancy of diffusion models to enable very high attention sparsity in visual generation, maintaining quality with minimal computational overhead.
Diffusion Transformers (DiT) have become the de facto model for generating
high-quality visual content such as videos and images. A major bottleneck is the
attention mechanism, whose complexity scales quadratically with resolution and
video length. A natural way to lessen this burden is sparse attention, where
only a subset of the tokens or patches is included in the computation. However,
existing techniques fail to preserve visual quality at extremely high sparsity
levels and may even incur non-negligible compute overhead.
To address this concern, we propose Re-ttention, which implements very high
attention sparsity for visual generation models by leveraging the temporal
redundancy of diffusion models to overcome the probabilistic normalization
shift within the attention mechanism. Specifically, Re-ttention reshapes the
attention scores based on the prior softmax distribution history in order to
preserve the visual quality of the full quadratic attention at very high
sparsity levels.
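As a rough illustration of the underlying idea (not the paper's exact algorithm), the sketch below shows sparse attention whose softmax normalization is corrected with a denominator cached from an earlier, denser denoising step; the function and variable names (sparse_attention_with_rescale, keep_idx, prev_denom) and the specific caching scheme are assumptions introduced for illustration only.
\begin{verbatim}
import torch

def sparse_attention_with_rescale(q, k, v, keep_idx, prev_denom=None):
    # q: [B, H, Lq, D]; k, v: [B, H, Lk, D]; keep_idx: 1-D indices of retained tokens.
    scale = q.shape[-1] ** -0.5
    k_s, v_s = k[:, :, keep_idx], v[:, :, keep_idx]           # sparse key/value subset
    logits = torch.matmul(q, k_s.transpose(-2, -1)) * scale   # [B, H, Lq, |keep_idx|]
    exp_logits = torch.exp(logits)       # max-subtraction omitted for brevity
    sparse_denom = exp_logits.sum(dim=-1, keepdim=True)       # [B, H, Lq, 1]
    # Normalize against a denominator cached from a previous (denser) step when
    # available, rather than the sparse denominator, to compensate for the
    # normalization shift caused by dropping tokens.
    denom = prev_denom if prev_denom is not None else sparse_denom
    weights = exp_logits / denom
    out = torch.matmul(weights, v_s)
    return out, sparse_denom             # cache for reuse at later steps
\end{verbatim}
In a denoising loop, one plausible usage is to cache the denominator at occasional dense steps and reuse it across the intervening sparse steps, exploiting the temporal redundancy noted above.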
Experimental results on T2V/T2I models such as CogVideoX and the PixArt DiTs
demonstrate that Re-ttention requires as few as 3.1\% of the tokens during
inference, outperforming contemporary methods such as FastDiTAttn, Sparse
VideoGen, and MInference. Further, we measure latency to show that our method
attains over 45\% end-to-end and over 92\% self-attention latency reduction on
an H100 GPU at negligible overhead cost.
Code is available online at:
\href{https://github.com/cccrrrccc/Re-ttention}{https://github.com/cccrrrccc/Re-ttention}