#dreamer #deeprl #reinforcementlearning
Model-Based Reinforcement Learning has been lagging behind Model-Free RL on Atari, especially among single-GPU algorithms. This collaboration between Google AI, DeepMind, and the University of Toronto (UofT) pushes world models to the next level. The main contribution is a learned latent state consisting of one deterministic part and one stochastic part, where the stochastic part is a set of 32 categorical variables, each with 32 possible classes. The world model can freely decide how it wants to use these variables to represent the input, but is tasked with predicting future observations and rewards. This procedure gives rise to an informative latent representation, and in a second step, reinforcement learning (actor-critic) can be done purely, and very efficiently, on the basis of the world model's latent states. No observations needed! The paper combines this representation with straight-through gradient estimators, KL balancing, and several other tricks to achieve state-of-the-art single-GPU performance on Atari.
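As a rough illustration (a minimal PyTorch sketch, not the authors' official TensorFlow code), the stochastic latent can be pictured as 32 categorical variables with 32 classes each, sampled with a straight-through estimator so gradients flow back through the sampling step; all names and shapes here are assumptions for illustration only:

```python
import torch
import torch.nn.functional as F

def sample_latent(logits):
    # logits: (batch, 32, 32) -- 32 categorical variables, 32 classes each
    probs = F.softmax(logits, dim=-1)
    # Draw a hard one-hot sample from each categorical.
    indices = torch.distributions.Categorical(probs=probs).sample()
    one_hot = F.one_hot(indices, num_classes=probs.shape[-1]).float()
    # Straight-through estimator: the forward pass uses the hard sample,
    # the backward pass uses the gradient of the probabilities.
    sample = one_hot + probs - probs.detach()
    # Flatten the 32 one-hot vectors into a single 1024-dim latent.
    return sample.reshape(sample.shape[0], -1)

logits = torch.randn(8, 32, 32, requires_grad=True)
z = sample_latent(logits)   # (8, 1024)
z.sum().backward()          # gradients reach the logits despite the sampling
```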
OUTLINE:
0:00 – Intro & Overview
4:50 – Short Recap of Reinforcement Learning
6:05 – Problems with Model-Free Reinforcement Learning
10:40 – How World Models Help
12:05 – World Model Learner Architecture
16:50 – Deterministic & Stochastic Hidden States
18:50 – Latent Categorical Variables
22:00 – Categorical Variables and Multi-Modality
23:20 – Sampling & Stochastic State Prediction
30:55 – Actor-Critic Learning in Dream Space
32:05 – The Incompleteness of Learned World Models
34:15 – How General is this Algorithm?
37:25 – World Model Loss Function
39:20 – KL Balancing
40:35 – Actor-Critic Loss Function
41:45 – Straight-Through Estimators for Sampling Backpropagation
46:25 – Experimental Results
52:00 – Where Does It Fail?
54:25 – Conclusion
Paper:
Code:
Author Blog:
Google AI Blog:
ERRATA (from the authors):
– KL balancing (prior vs posterior within the KL) is different from beta VAEs (reconstruction vs KL)
– The vectors of categoricals can in theory represent 32^32 different images, so their capacity is quite large
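To make the KL-balancing point concrete, here is a rough sketch (again not the authors' code): the KL between posterior and prior is computed twice with stop-gradients, so the prior is trained toward the posterior more strongly than the posterior is regularized toward the prior. This is a weighting inside the KL term itself, unlike the beta-VAE weighting of reconstruction vs KL. The mixing weight alpha=0.8 is an assumption here:

```python
import torch
import torch.distributions as D

def balanced_kl(post_logits, prior_logits, alpha=0.8):
    # post_logits / prior_logits: (batch, 32, 32) -- 32 categoricals, 32 classes
    post     = D.Categorical(logits=post_logits)
    prior    = D.Categorical(logits=prior_logits)
    post_sg  = D.Categorical(logits=post_logits.detach())   # stop-gradient
    prior_sg = D.Categorical(logits=prior_logits.detach())  # stop-gradient
    # Pull the prior toward the (frozen) posterior ...
    kl_train_prior = D.kl_divergence(post_sg, prior).sum(-1)
    # ... and only lightly regularize the posterior toward the (frozen) prior.
    kl_train_post  = D.kl_divergence(post, prior_sg).sum(-1)
    return alpha * kl_train_prior + (1 - alpha) * kl_train_post
```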
Abstract:
Intelligent agents need to generalize from past experience to achieve goals in complex environments. World models facilitate such generalization and allow learning behaviors from imagined outcomes to increase sample-efficiency. While learning world models from image inputs has recently become feasible for some tasks, modeling Atari games accurately enough to derive successful behaviors has remained an open challenge for many years. We introduce DreamerV2, a reinforcement learning agent that learns behaviors purely from predictions in the compact latent space of a powerful world model. The world model uses discrete representations and is trained separately from the policy. DreamerV2 constitutes the first agent that achieves human-level performance on the Atari benchmark of 55 tasks by learning behaviors inside a separately trained world model. With the same computational budget and wall-clock time, DreamerV2 reaches 200M frames and exceeds the final performance of the top single-GPU agents IQN and Rainbow.
Authors: Danijar Hafner, Timothy Lillicrap, Mohammad Norouzi, Jimmy Ba
Links:
TabNine Code Completion (Referral):
YouTube:
Twitter:
Discord:
BitChute:
Minds:
Parler:
LinkedIn:
BiliBili:
If you want to support me, the best thing to do is to share out the content 🙂
If you want to support me financially (completely optional and voluntary, but a lot of people have asked for this):
SubscribeStar:
Patreon:
Bitcoin (BTC): bc1q49lsw3q325tr58ygf8sudx2dqfguclvngvy2cq
Ethereum (ETH): 0x7ad3513E3B8f66799f507Aa7874b1B0eBC7F85e2
Litecoin (LTC): LQW2TRyKYetVC8WjFkhpPhtpbDM4Vw7r9m
Monero (XMR): 4ACL8AGrEo5hAir8A9CeVrW8pEauWvnp1WnSDZxW7tziCDLhZAGsgzhRQABDnFy8yuM9fWJDviJPHKRjV4FWt19CJZN9D4n