Energy-Based Transformers (EBTs) generalize System 2 Thinking to arbitrary modalities and problem types using a scalable, unsupervised energy-based optimization framework that combines verification, uncertainty modeling, and dynamic compute allocation.
Unified System 2 Thinking via Energy-Based Optimization: EBTs treat inference as iterative energy minimization over a learned verifier function, which provides dynamic computation, uncertainty modeling, and explicit prediction verification across both discrete and continuous modalities, all learned from unsupervised pretraining alone.
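A minimal sketch of this inference view, assuming a PyTorch-style learned scalar energy function `energy_fn(context, candidate)` (the name, signature, and hyperparameters are illustrative placeholders, not the paper's exact API): the candidate prediction is refined by gradient descent on the energy with respect to the candidate itself.

```python
import torch

def minimize_energy(energy_fn, context, candidate, steps=10, step_size=0.1):
    """Refine `candidate` for `steps` gradient descent steps on the energy."""
    candidate = candidate.detach().clone().requires_grad_(True)
    for _ in range(steps):
        energy = energy_fn(context, candidate).sum()       # scalar verifier score
        (grad,) = torch.autograd.grad(energy, candidate)   # d(energy)/d(candidate)
        with torch.no_grad():
            candidate = candidate - step_size * grad       # step toward lower energy
        candidate.requires_grad_(True)
    with torch.no_grad():
        final_energy = energy_fn(context, candidate).sum()
    return candidate.detach(), final_energy
```

The key design choice is that optimization happens in prediction space: the model never emits an answer directly, it only scores (verifies) candidates, and the answer is whatever minimizes that score.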
Scalable Transformer-Based EBM Architecture: EBTs implement autoregressive (GPT-style) and bidirectional (BERT/DiT-style) Transformer variants, and during pretraining they scale better than the Transformer++ recipe across parameter count, depth, data, batch size, and FLOPs.
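A hedged sketch of the autoregressive (GPT-style) variant, assuming the simplest possible injection of the candidate: a decoder-style Transformer consumes the context embeddings plus a proposed next-state embedding and outputs a scalar energy. The backbone, dimensions, and candidate handling here are illustrative only and simplified relative to the paper's actual architecture.

```python
import torch
import torch.nn as nn

class AutoregressiveEBT(nn.Module):
    def __init__(self, d_model=256, n_heads=4, n_layers=4):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.energy_head = nn.Linear(d_model, 1)   # scalar energy output

    def forward(self, context_emb, candidate_emb):
        # context_emb: (B, T, D) embeddings of observed tokens
        # candidate_emb: (B, 1, D) embedding of the proposed next token/state
        x = torch.cat([context_emb, candidate_emb], dim=1)
        T = x.size(1)
        causal_mask = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
        h = self.backbone(x, mask=causal_mask)
        return self.energy_head(h[:, -1])          # energy of (context, candidate)
```

Because the energy is differentiable with respect to `candidate_emb`, the same module can be plugged into the gradient-descent refinement loop above.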
Inference-Time Thinking via Gradient Descent and Best-of-N Sampling: at inference, EBTs support reasoning-like behavior through two mechanisms, running additional gradient descent steps (“thinking longer”) and selecting the lowest-energy prediction among multiple candidates (“self-verification”); both yield significant gains, especially on out-of-distribution data.
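A sketch of how these two knobs compose, continuing the earlier example: “thinking longer” corresponds to raising the number of refinement steps, and “self-verification” to refining N initial candidates and keeping the one with the lowest energy. `minimize_energy` is the hypothetical function sketched above, and the code assumes a single example (batch size 1).

```python
def think(energy_fn, context, init_candidates, steps=20, step_size=0.1):
    """Refine each of N initial candidates, then keep the lowest-energy one."""
    best, best_energy = None, float("inf")
    for cand in init_candidates:                 # best-of-N self-verification
        refined, energy = minimize_energy(
            energy_fn, context, cand,
            steps=steps,                         # raise `steps` to "think longer"
            step_size=step_size,
        )
        if energy.item() < best_energy:
            best, best_energy = refined, energy.item()
    return best, best_energy
```

Both knobs trade extra inference compute for accuracy, which is why the gains are largest when the test distribution differs from the pretraining data.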