Atari Games 100k

11 papers with code • 1 benchmark • 1 dataset

Atari 100k is a sample-efficiency benchmark for reinforcement learning: agents are evaluated on a suite of 26 Atari 2600 games after only 100k environment interactions (400k frames at the standard frame skip of 4), roughly two hours of real-time play. The benchmark was introduced alongside SimPLe in "Model-Based Reinforcement Learning for Atari" (Kaiser et al., 2019), and results are typically reported as human-normalized scores.

Most implemented papers

Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model

werner-duvaud/muzero-general 19 Nov 2019

When evaluated on Go, chess and shogi, without any knowledge of the game rules, MuZero matched the superhuman performance of the AlphaZero algorithm that was supplied with the game rules.

CURL: Contrastive Unsupervised Representations for Reinforcement Learning

MishaLaskin/curl 8 Apr 2020

On the DeepMind Control Suite, CURL is the first image-based algorithm to nearly match the sample-efficiency of methods that use state-based features.
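
As a rough illustration of CURL's contrastive objective, the sketch below computes an InfoNCE loss with the bilinear similarity described in the paper, where matched pairs come from two augmented views of the same observation. The module and variable names are illustrative, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContrastiveHead(nn.Module):
    """InfoNCE loss with a learned bilinear similarity z_q^T W z_k."""

    def __init__(self, feature_dim: int):
        super().__init__()
        self.W = nn.Parameter(torch.rand(feature_dim, feature_dim))

    def forward(self, z_query: torch.Tensor, z_key: torch.Tensor) -> torch.Tensor:
        # z_query, z_key: (B, D) encodings of two augmented views of the
        # same batch of observations; z_key should come from a target
        # (momentum) encoder and be detached from the graph.
        logits = z_query @ self.W @ z_key.detach().T        # (B, B)
        logits = logits - logits.max(dim=1, keepdim=True).values  # numerical stability
        labels = torch.arange(logits.size(0), device=logits.device)
        return F.cross_entropy(logits, labels)  # diagonal entries are positives
```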

Image Augmentation Is All You Need: Regularizing Deep Reinforcement Learning from Pixels

denisyarats/drq ICLR 2021

We propose a simple data augmentation technique that can be applied to standard model-free reinforcement learning algorithms, enabling robust learning directly from pixels without the need for auxiliary losses or pre-training.
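
The augmentation in question is a small random image shift: pad the frame by a few pixels and take a random crop of the original size. A minimal sketch, assuming (B, C, H, W) pixel observations and the commonly used pad of 4; it shows only the augmentation, not the Q-target averaging the paper also uses.

```python
import torch
import torch.nn.functional as F

def random_shift(obs: torch.Tensor, pad: int = 4) -> torch.Tensor:
    """Randomly shift each image in a (B, C, H, W) batch by up to `pad` pixels."""
    b, c, h, w = obs.shape
    padded = F.pad(obs, (pad, pad, pad, pad), mode="replicate")
    out = torch.empty_like(obs)
    for i in range(b):
        # Independent random offset per image in the batch.
        top = torch.randint(0, 2 * pad + 1, (1,)).item()
        left = torch.randint(0, 2 * pad + 1, (1,)).item()
        out[i] = padded[i, :, top:top + h, left:left + w]
    return out
```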

Mastering Atari Games with Limited Data

werner-duvaud/muzero-general NeurIPS 2021

Recently, there has been significant progress in sample efficient image-based RL algorithms; however, consistent human-level performance on the Atari game benchmark remains an elusive goal.

Model-Based Reinforcement Learning for Atari

tensorflow/tensor2tensor 1 Mar 2019

We describe Simulated Policy Learning (SimPLe), a complete model-based deep RL algorithm based on video prediction models, and present a comparison of several model architectures, including a novel architecture that yields the best results in our setting.
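
The overall loop alternates between collecting real experience, fitting the video-prediction world model, and improving the policy purely on imagined rollouts. A high-level sketch under those assumptions; every callable here is an illustrative stub, not the tensor2tensor code.

```python
from typing import Callable, List, Tuple

# (obs, action, reward, next_obs) — placeholder transition type.
Transition = Tuple[object, int, float, object]

def simple_loop(
    collect_real: Callable[[], List[Transition]],          # roll out policy in the real env
    fit_world_model: Callable[[List[Transition]], None],   # train video-prediction model
    train_policy_in_model: Callable[[], None],             # e.g. PPO on imagined rollouts
    iterations: int = 15,
) -> None:
    replay: List[Transition] = []
    for _ in range(iterations):
        replay.extend(collect_real())      # 1. gather real frames with current policy
        fit_world_model(replay)            # 2. fit the model on all real data so far
        train_policy_in_model()            # 3. improve the policy inside the model
```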

Data-Efficient Reinforcement Learning with Self-Predictive Representations

mila-iqia/spr ICLR 2021

We further improve performance by adding data augmentation to the future prediction loss, which forces the agent's representations to be consistent across multiple views of an observation.
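
Concretely, SPR predicts future latent states from the current latent and actions, and penalizes disagreement with a target encoder's embedding of the (augmented) future frames. A minimal sketch of that consistency term, assuming precomputed latents; names are illustrative.

```python
import torch
import torch.nn.functional as F

def spr_loss(predicted_latents: torch.Tensor, target_latents: torch.Tensor) -> torch.Tensor:
    """Negative cosine similarity between predicted and target future latents.

    Both tensors are (B, K, D): K future steps of D-dim latents. The targets
    come from an EMA target encoder applied to augmented future frames and
    are detached so gradients only flow through the predictions.
    """
    p = F.normalize(predicted_latents, dim=-1)
    t = F.normalize(target_latents.detach(), dim=-1)
    return -(p * t).sum(dim=-1).mean()
```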

Pretraining Representations for Data-Efficient Reinforcement Learning

mila-iqia/SGI NeurIPS 2021

Data efficiency is a key challenge for deep reinforcement learning.

The Primacy Bias in Deep Reinforcement Learning

evgenii-nikishin/rl_with_resets 16 May 2022

This work identifies a common flaw of deep reinforcement learning (RL) algorithms: a tendency to rely on early interactions and ignore useful evidence encountered later.
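
The remedy the paper proposes is to periodically re-initialize the last layers of the agent's networks while keeping the replay buffer intact. A minimal sketch, assuming a torch nn.Sequential network; how many layers to reset is a per-algorithm choice.

```python
import torch.nn as nn

def reset_final_layers(net: nn.Sequential, n_last: int = 2) -> None:
    """Re-initialize the last n_last layers in place; replay buffer untouched."""
    for layer in list(net)[-n_last:]:
        if hasattr(layer, "reset_parameters"):
            layer.reset_parameters()  # fresh random init for e.g. Linear/Conv layers
```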

Transformers are Sample-Efficient World Models

eloialonso/iris 1 Sep 2022

Deep reinforcement learning agents are notoriously sample inefficient, which considerably limits their application to real-world problems.

Pretraining the Vision Transformer using self-supervised methods for vision based Deep Reinforcement Learning

mgoulao/tov-vicreg 22 Sep 2022

The Vision Transformer architecture has proven competitive in the computer vision (CV) space, where it has dethroned convolution-based networks in several benchmarks.