Atari Games 100k

14 papers with code • 1 benchmark • 1 dataset

Atari 100k is a benchmark for sample-efficient reinforcement learning: agents are evaluated on a suite of 26 Atari games after only 100k environment interactions (400k frames with the standard frame-skip of 4), roughly two hours of real-time play.

Most implemented papers

Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model

werner-duvaud/muzero-general 19 Nov 2019

When evaluated on Go, chess and shogi, without any knowledge of the game rules, MuZero matched the superhuman performance of the AlphaZero algorithm that was supplied with the game rules.
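As a rough illustration of what "planning with a learned model" means here, the sketch below (schematic, not MuZero's actual code; h, g, and f are assumed interfaces) shows the three learned functions rolling a plan forward entirely in latent space, so search never queries the real simulator.

```python
# Schematic sketch of MuZero-style latent planning (assumed interfaces):
# h encodes the observation, g rolls the latent state forward given an
# action, and f predicts policy logits and value for each latent state.
def latent_rollout(h, g, f, observation, actions):
    """Returns per-step (policy_logits, value, reward) tuples for a
    candidate action sequence, evaluated without the real environment."""
    state = h(observation)                # initial latent state
    outputs = []
    for action in actions:
        state, reward = g(state, action)  # learned dynamics and reward
        policy_logits, value = f(state)   # evaluate the latent state
        outputs.append((policy_logits, value, reward))
    return outputs
```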

CURL: Contrastive Unsupervised Representations for Reinforcement Learning

MishaLaskin/curl 8 Apr 2020

On the DeepMind Control Suite, CURL is the first image-based algorithm to nearly match the sample-efficiency of methods that use state-based features.
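The core of CURL is a contrastive objective over two augmented crops of the same observation. A minimal sketch (not the authors' code; the bilinear matrix W and the batch layout follow the paper's InfoNCE formulation) is shown below: matching query/key pairs sit on the diagonal of the similarity matrix.

```python
# Minimal sketch of CURL's InfoNCE loss: queries come from the online
# encoder, keys from a momentum encoder (gradient-detached).
import torch
import torch.nn.functional as F

def curl_infonce_loss(q, k, W):
    """q: (B, D) query features, k: (B, D) key features, W: (D, D)
    learned bilinear matrix. Positives are the diagonal entries."""
    k = k.detach()
    logits = q @ W @ k.t()                                     # (B, B) similarities
    logits = logits - logits.max(dim=1, keepdim=True).values   # numerical stability
    labels = torch.arange(q.size(0), device=q.device)          # positive = same index
    return F.cross_entropy(logits, labels)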

Mastering Diverse Domains through World Models

danijar/dreamerv3 10 Jan 2023

General intelligence requires solving tasks across many domains.

Image Augmentation Is All You Need: Regularizing Deep Reinforcement Learning from Pixels

denisyarats/drq ICLR 2021

We propose a simple data augmentation technique that can be applied to standard model-free reinforcement learning algorithms, enabling robust learning directly from pixels without the need for auxiliary losses or pre-training.
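The augmentation in question is essentially a random shift: pad each frame by a few pixels and crop back to the original size at a random offset. A hedged sketch is below; the function name and the pad width of 4 are illustrative, not taken from the repo.

```python
# Sketch of DrQ-style random-shift augmentation for pixel observations.
import torch
import torch.nn.functional as F

def random_shift(imgs, pad=4):
    """imgs: (B, C, H, W) float frames. Pad with edge replication, then
    crop back to the original size at a random offset per image."""
    b, c, h, w = imgs.shape
    padded = F.pad(imgs, (pad, pad, pad, pad), mode='replicate')
    out = torch.empty_like(imgs)
    for i in range(b):
        top = torch.randint(0, 2 * pad + 1, (1,)).item()
        left = torch.randint(0, 2 * pad + 1, (1,)).item()
        out[i] = padded[i, :, top:top + h, left:left + w]
    return out
```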

Mastering Atari Games with Limited Data

werner-duvaud/muzero-general NeurIPS 2021

Recently, there has been significant progress in sample-efficient image-based RL algorithms; however, consistent human-level performance on the Atari game benchmark remains an elusive goal.

Bigger, Better, Faster: Human-level Atari with human-level efficiency

google-research/google-research 30 May 2023

We introduce a value-based RL agent, which we call BBF, that achieves superhuman performance on the Atari 100K benchmark.

Model-Based Reinforcement Learning for Atari

tensorflow/tensor2tensor 1 Mar 2019

We describe Simulated Policy Learning (SimPLe), a complete model-based deep RL algorithm based on video prediction models and present a comparison of several model architectures, including a novel architecture that yields the best results in our setting.
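The overall SimPLe loop alternates between fitting the video-prediction world model on real data and training the policy inside it. The sketch below captures that structure under stated assumptions: every argument is a hypothetical interface, not SimPLe's actual API, and the iteration counts are placeholders.

```python
# Illustrative sketch of the SimPLe training loop (hypothetical interfaces).
def simple_loop(env, world_model, policy, collect_experience,
                iterations=15, policy_updates=1000, horizon=50):
    """Alternate between model fitting on real data and policy learning
    on imagined rollouts from the learned model."""
    buffer = collect_experience(env, policy)        # real interactions
    for _ in range(iterations):
        world_model.fit(buffer)                     # supervised video prediction
        for _ in range(policy_updates):
            # the learned model serves as a simulator for short rollouts
            batch = world_model.rollout(policy, horizon=horizon)
            policy.update(batch)                    # e.g. PPO on imagined data
        buffer += collect_experience(env, policy)   # gather a little more real data
    return policy
```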

Data-Efficient Reinforcement Learning with Self-Predictive Representations

mila-iqia/spr ICLR 2021

We further improve performance by adding data augmentation to the future prediction loss, which forces the agent's representations to be consistent across multiple views of an observation.
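Concretely, the self-predictive term compares the online network's predictions of future latents against a target encoder's latents of (augmented) future observations. A minimal sketch, assuming both tensors are already computed, follows; the negative-cosine form matches the paper's loss.

```python
# Minimal sketch of SPR's self-predictive loss over K future steps.
import torch
import torch.nn.functional as F

def spr_loss(pred_latents, target_latents):
    """pred_latents: (B, K, D) online predictions for K future steps;
    target_latents: (B, K, D) from a momentum/target encoder (no gradient)."""
    pred = F.normalize(pred_latents, dim=-1)
    target = F.normalize(target_latents.detach(), dim=-1)
    return -(pred * target).sum(dim=-1).mean()   # negative cosine similarity
```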

Pretraining Representations for Data-Efficient Reinforcement Learning

mila-iqia/SGI NeurIPS 2021

Data efficiency is a key challenge for deep reinforcement learning.

The Primacy Bias in Deep Reinforcement Learning

evgenii-nikishin/rl_with_resets 16 May 2022

This work identifies a common flaw of deep reinforcement learning (RL) algorithms: a tendency to rely on early interactions and ignore useful evidence encountered later.
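The paper's remedy is to periodically re-initialize the last layers of the agent's network while keeping the replay buffer, so later data can overwrite conclusions drawn from early interactions. A hedged sketch of such a reset is below; the choice of how many layers to reset is a design knob, and the function is illustrative rather than the repo's implementation.

```python
# Sketch of a periodic partial reset to counter the primacy bias.
import torch.nn as nn

def reset_final_layers(network, num_layers=2):
    """network: an nn.Sequential; re-initializes the parameters of its
    last `num_layers` child modules in place. The replay buffer is kept."""
    for module in list(network)[-num_layers:]:
        for layer in module.modules():
            if hasattr(layer, 'reset_parameters'):
                layer.reset_parameters()
```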