Atari Games 100k
14 papers with code • 1 benchmark • 1 dataset
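The "100k" in the benchmark's name refers to a budget of 100,000 environment interactions per game (roughly two hours of gameplay at the standard frameskip), which is what makes sample efficiency the central concern of the papers below. A minimal sketch of such a capped evaluation loop, using a hypothetical stub environment and a random policy in place of a real Atari emulator and a learned agent:

```python
import random

class StubEnv:
    """Stand-in for an Atari environment (hypothetical; a real run would
    use an ALE-backed environment, e.g. via gymnasium)."""
    def __init__(self, episode_len=200):
        self.episode_len = episode_len
        self.t = 0

    def reset(self):
        self.t = 0
        return 0  # dummy observation

    def step(self, action):
        self.t += 1
        done = self.t >= self.episode_len
        return 0, random.random(), done  # obs, reward, done

BUDGET = 100_000  # the Atari 100k interaction budget

env = StubEnv()
obs = env.reset()
interactions = 0
episode_return, returns = 0.0, []
while interactions < BUDGET:
    action = random.randrange(4)  # random policy stands in for the agent
    obs, reward, done = env.step(action)
    episode_return += reward
    interactions += 1  # every env.step counts against the budget
    if done:
        returns.append(episode_return)
        episode_return = 0.0
        obs = env.reset()

print(interactions)  # → 100000
```

Methods on this benchmark differ in how they spend this fixed budget (e.g. by training a world model on the collected interactions), not in how much experience they collect.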
Most implemented papers
Transformers are Sample-Efficient World Models
Deep reinforcement learning agents are notoriously sample inefficient, which considerably limits their application to real-world problems.
Pretraining the Vision Transformer using self-supervised methods for vision based Deep Reinforcement Learning
With this work, we hope to provide insight into the representations a ViT learns during self-supervised pretraining on observations from RL environments, and into which properties of those representations lead to the best-performing agents.
On the Feasibility of Cross-Task Transfer with Model-Based Reinforcement Learning
Reinforcement Learning (RL) algorithms can solve challenging control problems directly from image observations, but they often require millions of environment interactions to do so.
STORM: Efficient Stochastic Transformer based World Models for Reinforcement Learning
The performance of model-based RL algorithms heavily relies on the sequence modeling and generation capabilities of the world model.