Atari Games 100k
16 papers with code • 1 benchmark • 1 dataset
Most implemented papers
Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model
When evaluated on Go, chess and shogi, without any knowledge of the game rules, MuZero matched the superhuman performance of the AlphaZero algorithm that was supplied with the game rules.
CURL: Contrastive Unsupervised Representations for Reinforcement Learning
On the DeepMind Control Suite, CURL is the first image-based algorithm to nearly match the sample-efficiency of methods that use state-based features.
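CURL's core idea is a contrastive (InfoNCE-style) objective: two augmented views of the same observation form a positive pair, and other observations in the batch act as negatives. A minimal numpy sketch of that loss, assuming precomputed query/key embeddings and a learned bilinear matrix `W` (all names here are illustrative, not CURL's actual API):

```python
import numpy as np

def curl_infonce_loss(queries, keys, W, temperature=1.0):
    """InfoNCE loss in the style of CURL (sketch): query i's positive key
    is key i (an augmented view of the same observation); every other key
    in the batch serves as a negative. W is a learned bilinear matrix."""
    logits = queries @ W @ keys.T / temperature       # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))               # positives on the diagonal

rng = np.random.default_rng(0)
q = rng.normal(size=(8, 16))   # online-encoder embeddings
k = rng.normal(size=(8, 16))   # momentum-encoder embeddings
loss = curl_infonce_loss(q, k, np.eye(16))
```

Minimizing this loss pulls each query toward its matching key while pushing it away from the rest of the batch.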
Mastering Diverse Domains through World Models
Developing a general algorithm that learns to solve tasks across a wide range of applications has been a fundamental challenge in artificial intelligence.
Image Augmentation Is All You Need: Regularizing Deep Reinforcement Learning from Pixels
We propose a simple data augmentation technique that can be applied to standard model-free reinforcement learning algorithms, enabling robust learning directly from pixels without the need for auxiliary losses or pre-training.
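The augmentation in question is a random shift: pad the frame with edge pixels, then crop back to the original size at a random offset. A minimal sketch (parameter names are illustrative):

```python
import numpy as np

def random_shift(obs, pad=4, rng=None):
    """Random-shift augmentation (sketch): pad an (H, W) frame with edge
    pixels, then crop an (H, W) window at a uniformly random offset."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = obs.shape[:2]
    pad_width = ((pad, pad), (pad, pad)) + ((0, 0),) * (obs.ndim - 2)
    padded = np.pad(obs, pad_width, mode="edge")
    top = rng.integers(0, 2 * pad + 1)
    left = rng.integers(0, 2 * pad + 1)
    return padded[top:top + h, left:left + w]

frame = np.arange(84 * 84, dtype=np.float32).reshape(84, 84)  # Atari-sized frame
shifted = random_shift(frame, pad=4, rng=np.random.default_rng(1))
```

Because the output has the same shape as the input, the augmentation drops into a standard model-free pipeline without architectural changes.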
Mastering Atari Games with Limited Data
Recently, there has been significant progress in sample-efficient image-based RL algorithms; however, consistent human-level performance on the Atari game benchmark remains an elusive goal.
Bigger, Better, Faster: Human-level Atari with human-level efficiency
We introduce a value-based RL agent, which we call BBF, that achieves super-human performance in the Atari 100K benchmark.
Q-Star Meets Scalable Posterior Sampling: Bridging Theory and Practice via HyperAgent
We propose HyperAgent, a reinforcement learning (RL) algorithm based on the hypermodel framework for exploration in RL.
Model-Based Reinforcement Learning for Atari
We describe Simulated Policy Learning (SimPLe), a complete model-based deep RL algorithm based on video prediction models and present a comparison of several model architectures, including a novel architecture that yields the best results in our setting.
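SimPLe's outer loop alternates between fitting a world model on real transitions and training the policy entirely inside that model. A toy Dyna-style sketch of the same loop on a tabular MDP (every component here is a stand-in; the paper uses a video-prediction model and PPO, not Q-learning):

```python
import numpy as np

N_STATES, N_ACTIONS = 4, 2
rng = np.random.default_rng(0)

def real_step(s, a):
    """Toy 'real' environment: a deterministic ring; action 0 is rewarded."""
    return (s + 1) % N_STATES, 1.0 if a == 0 else 0.0

# 1) Collect real transitions and "fit" a world model (here: a lookup table).
model = {}
for s in range(N_STATES):
    for a in range(N_ACTIONS):
        model[(s, a)] = real_step(s, a)

# 2) Improve the policy using only simulated rollouts from the learned model.
Q = np.zeros((N_STATES, N_ACTIONS))
for _ in range(500):
    s, a = rng.integers(N_STATES), rng.integers(N_ACTIONS)
    s2, r = model[(s, a)]                               # simulated transition
    Q[s, a] += 0.1 * (r + 0.9 * Q[s2].max() - Q[s, a])  # Q-learning update

policy = Q.argmax(axis=1)  # greedy policy learned without further real steps
```

The point of the sketch is the alternation: real data trains the model, and the model (not the environment) trains the policy, which is where the sample-efficiency gains come from.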
Transformers are Sample-Efficient World Models
Deep reinforcement learning agents are notoriously sample inefficient, which considerably limits their application to real-world problems.
Data-Efficient Reinforcement Learning with Self-Predictive Representations
We further improve performance by adding data augmentation to the future prediction loss, which forces the agent's representations to be consistent across multiple views of an observation.
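The self-predictive objective behind this (SPR) scores how well the online network's predicted future latents match a target encoder's latents for the frames that actually occurred, typically via cosine similarity. A minimal numpy sketch, assuming both sets of latents are already computed (function and argument names are illustrative):

```python
import numpy as np

def spr_loss(pred_latents, target_latents):
    """Self-predictive loss in the style of SPR (sketch): negative cosine
    similarity between predicted future latents (from the online network's
    transition model) and target-encoder latents of the observed frames."""
    p = pred_latents / np.linalg.norm(pred_latents, axis=-1, keepdims=True)
    t = target_latents / np.linalg.norm(target_latents, axis=-1, keepdims=True)
    return -np.mean(np.sum(p * t, axis=-1))

rng = np.random.default_rng(0)
pred = rng.normal(size=(4, 8))     # predicted latents for 4 future steps
target = rng.normal(size=(4, 8))   # target-encoder latents for those steps
loss = spr_loss(pred, target)
```

Applying data augmentation before encoding both views, as the snippet above describes, forces the learned representation to agree across views of the same observation.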