Atari Games

276 papers with code • 64 benchmarks • 6 datasets

The Atari 2600 Games task (and dataset) involves training an agent to achieve high game scores.
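
For concreteness, a minimal random-agent loop on one Atari 2600 game through the Gymnasium/ALE interface; the game id, step budget, and package versions here are assumptions, not part of this page:

```python
import ale_py                      # provides the ALE/* Atari environments
import gymnasium as gym

gym.register_envs(ale_py)          # makes the ALE ids visible (recent Gymnasium/ale-py versions)

env = gym.make("ALE/Breakout-v5")  # any ALE game id works; Breakout chosen for illustration
obs, info = env.reset(seed=0)
score = 0.0
for _ in range(1000):
    action = env.action_space.sample()            # random placeholder policy
    obs, reward, terminated, truncated, info = env.step(action)
    score += reward
    if terminated or truncated:
        obs, info = env.reset()
env.close()
print(f"score over 1000 random steps: {score}")
```

A real agent replaces the random action with a learned policy and is scored by the total game reward per episode.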

(Image credit: Playing Atari with Deep Reinforcement Learning)

Libraries

Use these libraries to find Atari Games models and implementations
See all 23 libraries.

Most implemented papers

IMPALA: Scalable Distributed Deep-RL with Importance Weighted Actor-Learner Architectures

deepmind/scalable_agent ICML 2018

In this work we aim to solve a large collection of tasks using a single reinforcement learning agent with a single set of parameters.
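
As a rough illustration of the importance weighting behind IMPALA's actor-learner setup, a minimal NumPy sketch of the V-trace target computation; the function name is mine, and per-step termination discounting and the policy-gradient side are omitted:

```python
import numpy as np

def vtrace_targets(behaviour_logp, target_logp, rewards, values,
                   bootstrap_value, gamma=0.99, rho_bar=1.0, c_bar=1.0):
    """V-trace value targets for one trajectory (all arrays time-major, shape [T]).

    behaviour_logp, target_logp: log mu(a_t|x_t) and log pi(a_t|x_t).
    values: V(x_t); bootstrap_value: V(x_T) used after the last step.
    """
    rhos = np.exp(target_logp - behaviour_logp)            # importance ratios pi/mu
    clipped_rhos = np.minimum(rho_bar, rhos)               # rho_t
    clipped_cs = np.minimum(c_bar, rhos)                   # c_t

    values_tp1 = np.append(values[1:], bootstrap_value)    # V(x_{t+1})
    deltas = clipped_rhos * (rewards + gamma * values_tp1 - values)

    # Backward recursion: v_s - V(x_s) = delta_s + gamma * c_s * (v_{s+1} - V(x_{s+1}))
    acc = 0.0
    corrections = np.zeros_like(values, dtype=float)
    for t in reversed(range(len(values))):
        acc = deltas[t] + gamma * clipped_cs[t] * acc
        corrections[t] = acc
    return values + corrections                            # v_s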

An Intriguing Failing of Convolutional Neural Networks and the CoordConv Solution

uber-research/coordconv NeurIPS 2018

In this paper we show a striking counterexample to the intuition that convolutions handle spatial tasks with ease, via the seemingly trivial coordinate transform problem, which simply requires learning a mapping between coordinates in (x, y) Cartesian space and one-hot pixel space.
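
A minimal PyTorch sketch of the CoordConv idea: append normalized coordinate channels before a standard convolution. The class name is illustrative, not from the repository:

```python
import torch
import torch.nn as nn

class CoordConv2d(nn.Module):
    """Conv layer that appends normalized (x, y) coordinate channels to its input."""

    def __init__(self, in_channels, out_channels, **conv_kwargs):
        super().__init__()
        self.conv = nn.Conv2d(in_channels + 2, out_channels, **conv_kwargs)

    def forward(self, x):
        b, _, h, w = x.shape
        # Coordinate grids in [-1, 1], broadcast across the batch.
        ys = torch.linspace(-1.0, 1.0, h, device=x.device).view(1, 1, h, 1).expand(b, 1, h, w)
        xs = torch.linspace(-1.0, 1.0, w, device=x.device).view(1, 1, 1, w).expand(b, 1, h, w)
        return self.conv(torch.cat([x, xs, ys], dim=1))
```

Used as a drop-in replacement for nn.Conv2d, e.g. CoordConv2d(4, 32, kernel_size=8, stride=4) on stacked Atari frames.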

Implicit Quantile Networks for Distributional Reinforcement Learning

google/dopamine ICML 2018

In this work, we build on recent advances in distributional reinforcement learning to give a generally applicable, flexible, and state-of-the-art distributional variant of DQN.
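
A hedged PyTorch sketch of the core IQN ingredient: sampled quantile fractions tau are embedded with cosine features and combined multiplicatively with the state embedding (names and sizes are illustrative):

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImplicitQuantileHead(nn.Module):
    """Maps a state embedding and sampled quantile fractions tau to quantile
    values Z_tau(x, a), using a cosine embedding of tau."""

    def __init__(self, embed_dim, num_actions, n_cos=64):
        super().__init__()
        self.n_cos = n_cos
        self.tau_embed = nn.Linear(n_cos, embed_dim)
        self.out = nn.Linear(embed_dim, num_actions)

    def forward(self, state_embed, num_tau=8):
        b, _ = state_embed.shape
        tau = torch.rand(b, num_tau, 1, device=state_embed.device)       # tau ~ U(0, 1)
        i = torch.arange(1, self.n_cos + 1, device=state_embed.device).float()
        phi = F.relu(self.tau_embed(torch.cos(math.pi * i * tau)))       # [b, num_tau, embed_dim]
        quantile_values = self.out(state_embed.unsqueeze(1) * phi)       # [b, num_tau, num_actions]
        return quantile_values, tau
```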

Exploration by Random Network Distillation

openai/random-network-distillation ICLR 2019

In particular, we establish state-of-the-art performance on Montezuma's Revenge, a game famously difficult for deep reinforcement learning methods.
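
A minimal PyTorch sketch of the RND bonus, assuming vector observations: a predictor network is trained to match a fixed, randomly initialized target network, and its per-state error serves as the intrinsic reward (architecture sizes are placeholders):

```python
import torch
import torch.nn as nn

class RNDBonus(nn.Module):
    """Intrinsic reward = prediction error of a trained predictor network
    against a fixed, randomly initialized target network."""

    def __init__(self, obs_dim, feat_dim=128):
        super().__init__()
        self.target = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(), nn.Linear(256, feat_dim))
        self.predictor = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(), nn.Linear(256, feat_dim))
        for p in self.target.parameters():        # the target network is never trained
            p.requires_grad_(False)

    def forward(self, obs):
        with torch.no_grad():
            target_feat = self.target(obs)
        pred_feat = self.predictor(obs)
        # Per-state squared error: used both as the intrinsic reward and the predictor loss.
        return ((pred_feat - target_feat) ** 2).mean(dim=-1)
```

States the predictor has rarely seen yield large errors, so novel states earn a larger exploration bonus.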

Mastering Atari, Go, Chess and Shogi by Planning with a Learned Model

werner-duvaud/muzero-general 19 Nov 2019

When evaluated on Go, chess and shogi, without any knowledge of the game rules, MuZero matched the superhuman performance of the AlphaZero algorithm that was supplied with the game rules.
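
A toy skeleton, under simplifying assumptions (vector observations, no MCTS shown), of MuZero's three learned functions operating entirely in latent space:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMuZeroModel(nn.Module):
    """Skeleton of MuZero's learned model: representation h(o) -> s,
    dynamics g(s, a) -> (r, s'), prediction f(s) -> (policy, value)."""

    def __init__(self, obs_dim, num_actions, latent_dim=64):
        super().__init__()
        self.num_actions = num_actions
        self.represent = nn.Sequential(nn.Linear(obs_dim, latent_dim), nn.ReLU())
        self.dynamics = nn.Sequential(nn.Linear(latent_dim + num_actions, latent_dim), nn.ReLU())
        self.reward_head = nn.Linear(latent_dim + num_actions, 1)
        self.policy_head = nn.Linear(latent_dim, num_actions)
        self.value_head = nn.Linear(latent_dim, 1)

    def initial_inference(self, obs):
        s = self.represent(obs)                               # h: observation -> latent state
        return s, self.policy_head(s), self.value_head(s)     # f: policy and value for search

    def recurrent_inference(self, s, action):
        a = F.one_hot(action, self.num_actions).float()
        sa = torch.cat([s, a], dim=-1)
        s_next = self.dynamics(sa)                            # g: latent transition and reward
        return s_next, self.reward_head(sa), self.policy_head(s_next), self.value_head(s_next)
```

MCTS then plans by repeatedly calling recurrent_inference from the root latent state, without ever consulting the game rules.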

Distributional Reinforcement Learning with Quantile Regression

DLR-RM/stable-baselines3 27 Oct 2017

In this paper, we build on recent work advocating a distributional approach to reinforcement learning in which the distribution over returns is modeled explicitly instead of only estimating the mean.
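
A sketch of the quantile-regression Huber loss at the heart of QR-DQN, assuming predicted and target quantile samples are already available; exact reductions and constants may differ from the reference implementations:

```python
import torch
import torch.nn.functional as F

def quantile_huber_loss(pred_quantiles, target_quantiles, kappa=1.0):
    """Quantile-regression Huber loss between predicted and target samples of
    the return distribution. pred_quantiles: [batch, N]; target_quantiles: [batch, M]."""
    n = pred_quantiles.shape[1]
    # Midpoint quantile fractions tau_hat_i = (2i + 1) / (2N) for the N predictions.
    tau_hat = (torch.arange(n, dtype=torch.float32, device=pred_quantiles.device) + 0.5) / n

    # Pairwise TD errors u_ij = target_j - pred_i, shape [batch, N, M].
    u = target_quantiles.unsqueeze(1) - pred_quantiles.unsqueeze(2)
    huber = F.huber_loss(pred_quantiles.unsqueeze(2).expand_as(u),
                         target_quantiles.unsqueeze(1).expand_as(u),
                         reduction="none", delta=kappa)
    # Asymmetric weighting |tau_hat - 1{u < 0}| makes the regression quantile-aware.
    weight = (tau_hat.view(1, n, 1) - (u.detach() < 0).float()).abs()
    return (weight * huber / kappa).mean()
```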

Decision Transformer: Reinforcement Learning via Sequence Modeling

kzl/decision-transformer NeurIPS 2021

In particular, we present Decision Transformer, an architecture that casts the problem of RL as conditional sequence modeling.
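
A small NumPy sketch of the sequence construction this relies on: returns-to-go are computed per timestep and interleaved with state and action embeddings for the causal transformer (helper names are mine):

```python
import numpy as np

def returns_to_go(rewards, gamma=1.0):
    """Return-to-go R_t = sum_{t' >= t} gamma^(t'-t) * r_t', used as the
    return-conditioning token at each timestep."""
    rtg = np.zeros(len(rewards))
    acc = 0.0
    for t in reversed(range(len(rewards))):
        acc = rewards[t] + gamma * acc
        rtg[t] = acc
    return rtg

def interleave_tokens(rtg_emb, state_emb, action_emb):
    """Stack per-timestep embeddings into the (R_1, s_1, a_1, R_2, s_2, a_2, ...)
    sequence consumed by the causal transformer, which is trained to predict a_t."""
    t, d = state_emb.shape
    tokens = np.empty((3 * t, d), dtype=state_emb.dtype)
    tokens[0::3] = rtg_emb
    tokens[1::3] = state_emb
    tokens[2::3] = action_emb
    return tokens
```

At evaluation time, the agent is conditioned on a desired return and generates actions autoregressively.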

Benchmarking Deep Reinforcement Learning for Continuous Control

rllab/rllab 22 Apr 2016

Recently, researchers have made significant progress combining the advances in deep learning for learning feature representations with reinforcement learning.

Noisy Networks for Exploration

opendilab/DI-engine ICLR 2018

We introduce NoisyNet, a deep reinforcement learning agent with parametric noise added to its weights, and show that the induced stochasticity of the agent's policy can be used to aid efficient exploration.
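
A PyTorch sketch of a NoisyNet-style linear layer with factorised Gaussian noise; the initialisation constants follow the paper's conventions as best recalled and should be treated as assumptions:

```python
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class NoisyLinear(nn.Module):
    """Linear layer whose weights and biases carry learnable parametric noise
    (factorised Gaussian), so exploration comes from the network itself."""

    def __init__(self, in_features, out_features, sigma0=0.5):
        super().__init__()
        self.in_features, self.out_features = in_features, out_features
        self.weight_mu = nn.Parameter(torch.empty(out_features, in_features))
        self.weight_sigma = nn.Parameter(torch.empty(out_features, in_features))
        self.bias_mu = nn.Parameter(torch.empty(out_features))
        self.bias_sigma = nn.Parameter(torch.empty(out_features))
        bound = 1.0 / math.sqrt(in_features)
        nn.init.uniform_(self.weight_mu, -bound, bound)
        nn.init.uniform_(self.bias_mu, -bound, bound)
        nn.init.constant_(self.weight_sigma, sigma0 / math.sqrt(in_features))
        nn.init.constant_(self.bias_sigma, sigma0 / math.sqrt(in_features))

    @staticmethod
    def _f(x):
        return x.sign() * x.abs().sqrt()

    def forward(self, x):
        # Fresh factorised noise on every forward pass drives exploration.
        eps_in = self._f(torch.randn(self.in_features, device=x.device))
        eps_out = self._f(torch.randn(self.out_features, device=x.device))
        weight = self.weight_mu + self.weight_sigma * torch.outer(eps_out, eps_in)
        bias = self.bias_mu + self.bias_sigma * eps_out
        return F.linear(x, weight, bias)
```

Replacing the final layers of a DQN with NoisyLinear removes the need for an epsilon-greedy schedule.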

Distributed Prioritized Experience Replay

ray-project/ray ICLR 2018

We propose a distributed architecture for deep reinforcement learning at scale that enables agents to learn effectively from orders of magnitude more data than previously possible.
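
A minimal, single-process sketch of the proportional prioritized replay this distributed (Ape-X style) architecture is built around; the real system shards many actors, uses a sum-tree, and applies importance-sampling corrections, all of which this list-based version omits:

```python
import numpy as np

class PrioritizedReplay:
    """Proportional prioritized replay: transitions are sampled with probability
    p_i^alpha / sum_j p_j^alpha and priorities are refreshed to the latest |TD error|."""

    def __init__(self, capacity, alpha=0.6):
        self.capacity, self.alpha = capacity, alpha
        self.data, self.priorities = [], []

    def add(self, transition, td_error):
        if len(self.data) >= self.capacity:        # drop the oldest transition
            self.data.pop(0)
            self.priorities.pop(0)
        self.data.append(transition)
        self.priorities.append(abs(td_error) + 1e-6)

    def sample(self, batch_size):
        p = np.asarray(self.priorities) ** self.alpha
        p /= p.sum()
        idx = np.random.choice(len(self.data), size=batch_size, p=p)
        return idx, [self.data[i] for i in idx]

    def update_priorities(self, idx, td_errors):
        for i, err in zip(idx, td_errors):
            self.priorities[i] = abs(err) + 1e-6
```

In the distributed setting, actors compute the initial priorities locally before sending transitions to the shared replay, so the learner never sees uninformative data first.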