Pretrained Reversible Generation as Unsupervised Visual Representation Learning, by Rongkun Xue and 6 other authors
Abstract: Recent generative models based on score matching and flow matching have significantly advanced generation tasks, but their potential in discriminative tasks remains underexplored. Previous approaches, such as generative classifiers, have not fully leveraged the capabilities of these models for discriminative tasks due to their intricate designs. We propose Pretrained Reversible Generation (PRG), which extracts unsupervised representations by reversing the generative process of a pretrained continuous generative model. PRG effectively reuses unsupervised generative models, leveraging their high capacity to serve as robust and generalizable feature extractors for downstream tasks. This framework enables the flexible selection of feature hierarchies tailored to specific downstream tasks. Our method consistently outperforms prior approaches across multiple benchmarks, achieving state-of-the-art performance among generative-model-based methods, including 78% top-1 accuracy on ImageNet at a resolution of 64×64. Extensive ablation studies, including out-of-distribution evaluations, further validate the effectiveness of our approach. Code is available at this https URL.
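The core idea of reversing a continuous generative process to obtain features can be sketched as follows. This is a minimal illustration, not the authors' implementation: `velocity_field` is a hypothetical stand-in for a pretrained flow-matching network, and the integration scheme, step count, and stopping time `t_stop` are all assumed for the example. The intermediate ODE state reached by partially integrating from data toward noise plays the role of the extracted representation.

```python
import numpy as np

def velocity_field(x, t):
    # Hypothetical stand-in for a pretrained flow-matching velocity
    # network v_theta(x, t); in PRG this would be a trained neural net.
    return -x * t

def extract_features(x, n_steps=10, t_stop=0.5):
    # Reverse the generative ODE dx/dt = v(x, t), starting from data
    # (t = 0) and integrating toward noise with forward Euler steps.
    # Stopping at an intermediate time t_stop yields a state that can
    # serve as a feature for downstream tasks; varying t_stop selects
    # different levels of the feature hierarchy.
    dt = t_stop / n_steps
    t = 0.0
    for _ in range(n_steps):
        x = x + dt * velocity_field(x, t)
        t += dt
    return x

# Usage: a batch of four flattened 64x64 RGB "images" mapped to
# same-dimensional features.
batch = np.random.randn(4, 64 * 64 * 3)
feats = extract_features(batch)
print(feats.shape)  # (4, 12288)
```

Choosing `t_stop` trades off how much of the generative trajectory is traversed, which is what allows task-specific selection of feature hierarchies.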
Submission history
From: Rongkun Xue [view email]
[v1] Fri, 29 Nov 2024 08:24:49 UTC (34,281 KB)
[v2] Sat, 8 Mar 2025 14:13:46 UTC (39,965 KB)
[v3] Thu, 26 Jun 2025 04:26:18 UTC (23,446 KB)