Atari Games 100k
11 papers with code • 1 benchmark • 1 dataset
Libraries
Use these libraries to find Atari Games 100k models and implementations.

Most implemented papers
Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model
When evaluated on Go, chess and shogi, without any knowledge of the game rules, MuZero matched the superhuman performance of the AlphaZero algorithm that was supplied with the game rules.
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
On the DeepMind Control Suite, CURL is the first image-based algorithm to nearly match the sample-efficiency of methods that use state-based features.
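CURL's core idea is a contrastive objective: two augmented views of the same observation are pushed together in latent space, while other observations in the batch act as negatives. A minimal numpy sketch of an InfoNCE-style loss with the bilinear similarity CURL uses — illustrative only, not the paper's implementation (the function name and shapes are assumptions):

```python
import numpy as np

def curl_infonce_loss(queries, keys, W):
    """InfoNCE-style contrastive loss: for each query, the key at the
    same batch index is the positive; all other keys are negatives."""
    # Bilinear similarity logits q^T W k for every (query, key) pair.
    logits = queries @ W @ keys.T                  # shape (B, B)
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    # Cross-entropy with labels = arange(B): positives on the diagonal.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
B, D = 8, 16
q = rng.normal(size=(B, D))            # embeddings of one augmented view
k = q + 0.01 * rng.normal(size=(B, D)) # embeddings of a second view
W = np.eye(D)                          # learned bilinear matrix (here fixed)
loss = curl_infonce_loss(q, k, W)      # small, since pairs nearly match
```

In the actual method the key encoder is a momentum-averaged copy of the query encoder and `W` is learned jointly with the RL objective.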
Image Augmentation Is All You Need: Regularizing Deep Reinforcement Learning from Pixels
We propose a simple data augmentation technique that can be applied to standard model-free reinforcement learning algorithms, enabling robust learning directly from pixels without the need for auxiliary losses or pre-training.
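The augmentation in question is essentially a random shift: pad the image by a few pixels, then crop back to the original size at a random offset. A minimal numpy sketch under that reading (function name and padding scheme are assumptions, not the paper's code):

```python
import numpy as np

def random_shift(imgs, pad=4, rng=None):
    """Random-shift augmentation: replicate-pad each image by `pad`
    pixels, then take a random crop back to the original size."""
    if rng is None:
        rng = np.random.default_rng()
    B, H, W, C = imgs.shape
    padded = np.pad(imgs, ((0, 0), (pad, pad), (pad, pad), (0, 0)),
                    mode="edge")
    out = np.empty_like(imgs)
    for i in range(B):
        top = rng.integers(0, 2 * pad + 1)
        left = rng.integers(0, 2 * pad + 1)
        out[i] = padded[i, top:top + H, left:left + W]
    return out

batch = np.random.rand(2, 84, 84, 3).astype(np.float32)
aug = random_shift(batch, pad=4, rng=np.random.default_rng(0))
```

Because the method is just a transformation of the replay batch, it drops into any standard model-free algorithm (e.g. before the Q-value computation) without architectural changes.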
Mastering Atari Games with Limited Data
Recently, there has been significant progress in sample efficient image-based RL algorithms; however, consistent human-level performance on the Atari game benchmark remains an elusive goal.
Model-Based Reinforcement Learning for Atari
We describe Simulated Policy Learning (SimPLe), a complete model-based deep RL algorithm based on video prediction models and present a comparison of several model architectures, including a novel architecture that yields the best results in our setting.
Data-Efficient Reinforcement Learning with Self-Predictive Representations
We further improve performance by adding data augmentation to the future prediction loss, which forces the agent's representations to be consistent across multiple views of an observation.
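The future-prediction loss being described rewards agreement between the agent's predicted future latents and the latents of the actually observed (augmented) future frames, typically as a cosine similarity. A minimal numpy sketch of such a self-predictive term (simplified; the real method uses a momentum target encoder and multi-step rollouts):

```python
import numpy as np

def self_predictive_loss(predicted, target):
    """Negative cosine similarity between predicted future latents and
    target-encoder latents, averaged over the batch. In training the
    target branch would receive no gradient (stop-gradient)."""
    p = predicted / np.linalg.norm(predicted, axis=-1, keepdims=True)
    t = target / np.linalg.norm(target, axis=-1, keepdims=True)
    return -np.mean(np.sum(p * t, axis=-1))

rng = np.random.default_rng(0)
z = rng.normal(size=(32, 64))          # batch of latent vectors
perfect = self_predictive_loss(z, z)   # identical views: maximal agreement
```

Applying data augmentation to both branches, as the sentence above notes, means the latents must agree even when the two views were cropped or shifted differently, which is what enforces view-consistent representations.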
Pretraining Representations for Data-Efficient Reinforcement Learning
Data efficiency is a key challenge for deep reinforcement learning.
The Primacy Bias in Deep Reinforcement Learning
This work identifies a common flaw of deep reinforcement learning (RL) algorithms: a tendency to rely on early interactions and ignore useful evidence encountered later.
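The mitigation proposed alongside this diagnosis is, to my understanding, to periodically reset part of the network while keeping the replay buffer, so the agent relearns from all collected evidence instead of staying anchored to its earliest updates. A minimal sketch of that mechanism (function name, layer names, and init scale are illustrative assumptions):

```python
import numpy as np

def maybe_reset(params, step, reset_every, layers_to_reset, rng):
    """Every `reset_every` steps, reinitialize the listed layers
    (e.g. the final layers of the network) while leaving the rest of
    the parameters -- and, crucially, the replay buffer -- untouched."""
    if step > 0 and step % reset_every == 0:
        for name in layers_to_reset:
            params[name] = rng.normal(scale=0.02, size=params[name].shape)
    return params

rng = np.random.default_rng(0)
params = {"trunk": np.ones((4, 4)), "head": np.zeros((4, 4))}
params = maybe_reset(params, step=1000, reset_every=1000,
                     layers_to_reset=["head"], rng=rng)
```

After the reset the retained replay data lets the fresh head recover quickly, while the bias toward early interactions is wiped out.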
Transformers are Sample-Efficient World Models
Deep reinforcement learning agents are notoriously sample inefficient, which considerably limits their application to real-world problems.
Pretraining the Vision Transformer using self-supervised methods for vision based Deep Reinforcement Learning
The Vision Transformer architecture has been shown to be competitive in the computer vision (CV) space, where it has dethroned convolution-based networks in several benchmarks.