Efficient Exploration

74 papers with code • 0 benchmarks • 2 datasets

Efficient Exploration is one of the main obstacles in scaling up modern deep reinforcement learning algorithms. The main challenge is balancing exploitation of the current value estimates against gaining information about poorly understood states and actions.

Source: Randomized Value Functions via Multiplicative Normalizing Flows
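For intuition, here is a minimal epsilon-greedy sketch of that trade-off (purely illustrative; none of the papers below reduces to this rule): the agent exploits its current value estimates most of the time and explores uniformly at random otherwise.

```python
import numpy as np

def epsilon_greedy(q_values: np.ndarray, epsilon: float, rng: np.random.Generator) -> int:
    """Exploit the current estimates with prob. 1 - epsilon, explore uniformly otherwise."""
    if rng.random() < epsilon:
        return int(rng.integers(len(q_values)))  # explore: random action
    return int(np.argmax(q_values))              # exploit: greedy action

# Example: 4 actions, 10% exploration.
rng = np.random.default_rng(0)
action = epsilon_greedy(np.array([0.1, 0.5, 0.2, 0.0]), epsilon=0.1, rng=rng)
```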

Greatest papers with code

Deep Exploration via Bootstrapped DQN

tensorflow/models NeurIPS 2016

Efficient exploration in complex environments remains a major challenge for reinforcement learning.

Atari Games Efficient Exploration
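A minimal sketch of the core idea, with layer sizes and the number of heads assumed here rather than taken from the paper: several Q-value heads share a torso and are trained on bootstrapped data; at the start of each episode one head is sampled and followed greedily, which yields temporally extended ("deep") exploration.

```python
import random
import torch
import torch.nn as nn

class BootstrappedQNet(nn.Module):
    """Shared torso with K independent Q-value heads (sketch)."""
    def __init__(self, obs_dim: int, n_actions: int, n_heads: int = 10):
        super().__init__()
        self.torso = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU())
        self.heads = nn.ModuleList([nn.Linear(128, n_actions) for _ in range(n_heads)])

    def forward(self, obs: torch.Tensor, head: int) -> torch.Tensor:
        return self.heads[head](self.torso(obs))

# Per episode: sample a head once, then act greedily w.r.t. that head's Q-values.
net = BootstrappedQNet(obs_dim=4, n_actions=2)
head = random.randrange(len(net.heads))
q = net(torch.zeros(1, 4), head)
action = int(q.argmax(dim=1))
```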

BeBold: Exploration Beyond the Boundary of Explored Regions

maximecb/gym-minigrid 15 Dec 2020

In this paper, we analyze the pros and cons of each method and propose the regulated difference of inverse visitation counts as a simple but effective criterion for intrinsic rewards (IR).

Curriculum Learning Efficient Exploration +1
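A rough sketch of that criterion as described in the abstract: the intrinsic reward is the clipped difference of inverse visitation counts between successive states, given only on the first within-episode visit. The tabular counters below are stand-ins; in large state spaces the paper relies on pseudo-counts rather than exact counts.

```python
from collections import defaultdict

lifetime_count = defaultdict(int)   # N(s): visits over the whole run (sketch)
episode_count = defaultdict(int)    # N_e(s): visits within the current episode

def intrinsic_reward(s, s_next) -> float:
    """Clipped difference of inverse visitation counts, paid only on the
    first within-episode visit to s_next (sketch of the criterion)."""
    lifetime_count[s_next] += 1
    episode_count[s_next] += 1
    bonus = max(1.0 / lifetime_count[s_next] - 1.0 / max(lifetime_count[s], 1), 0.0)
    return bonus if episode_count[s_next] == 1 else 0.0
```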

Noisy Networks for Exploration

Curt-Park/rainbow-is-all-you-need ICLR 2018

We introduce NoisyNet, a deep reinforcement learning agent with parametric noise added to its weights, and show that the induced stochasticity of the agent's policy can be used to aid efficient exploration.

Atari Games Efficient Exploration
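A sketch of the central component: a linear layer whose weights and biases receive learned parametric noise (factorised-Gaussian variant; the initialisation constants are assumptions, not the reference implementation).

```python
import math
import torch
import torch.nn as nn

class NoisyLinear(nn.Module):
    """Linear layer with learned parametric noise on weights and biases (sketch)."""
    def __init__(self, in_features: int, out_features: int, sigma0: float = 0.5):
        super().__init__()
        self.in_features, self.out_features = in_features, out_features
        self.w_mu = nn.Parameter(torch.empty(out_features, in_features))
        self.w_sigma = nn.Parameter(torch.empty(out_features, in_features))
        self.b_mu = nn.Parameter(torch.empty(out_features))
        self.b_sigma = nn.Parameter(torch.empty(out_features))
        bound = 1.0 / math.sqrt(in_features)
        nn.init.uniform_(self.w_mu, -bound, bound)
        nn.init.uniform_(self.b_mu, -bound, bound)
        nn.init.constant_(self.w_sigma, sigma0 / math.sqrt(in_features))
        nn.init.constant_(self.b_sigma, sigma0 / math.sqrt(in_features))

    @staticmethod
    def _f(x: torch.Tensor) -> torch.Tensor:
        # Factorised-noise transform f(x) = sign(x) * sqrt(|x|)
        return x.sign() * x.abs().sqrt()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        eps_in = self._f(torch.randn(self.in_features, device=x.device))
        eps_out = self._f(torch.randn(self.out_features, device=x.device))
        w = self.w_mu + self.w_sigma * torch.outer(eps_out, eps_in)
        b = self.b_mu + self.b_sigma * eps_out
        return nn.functional.linear(x, w, b)
```

Because the noise scales are learned, the induced stochasticity of the policy can shrink or grow per parameter as training proceeds, replacing hand-tuned epsilon schedules.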

Stochastic Gradient Hamiltonian Monte Carlo

JavierAntoran/Bayesian-Neural-Networks 17 Feb 2014

Hamiltonian Monte Carlo (HMC) sampling methods provide a mechanism for defining distant proposals with high acceptance probabilities in a Metropolis-Hastings framework, enabling more efficient exploration of the state space than standard random-walk proposals.

Efficient Exploration
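A sketch of the discretised SGHMC update with a friction term, assuming a constant friction coefficient alpha and a fixed noise estimate beta_hat; grad_u_tilde is a user-supplied minibatch (stochastic) gradient of the negative log posterior.

```python
import numpy as np

def sghmc_step(theta, v, grad_u_tilde, eps=1e-3, alpha=0.01, beta_hat=0.0, rng=None):
    """One SGHMC update (sketch):
        v     <- v - eps * grad Ũ(theta) - alpha * v + N(0, 2 * (alpha - beta_hat) * eps)
        theta <- theta + v
    """
    rng = rng or np.random.default_rng()
    noise = rng.normal(0.0, np.sqrt(2.0 * (alpha - beta_hat) * eps), size=np.shape(theta))
    v = v - eps * grad_u_tilde(theta) - alpha * v + noise
    theta = theta + v
    return theta, v
```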

Batch Bayesian Optimization via Local Penalization

SheffieldML/GPyOpt 29 May 2015

The approach assumes that the function of interest, $f$, is a Lipschitz continuous function.

Bayesian Optimisation Efficient Exploration +1
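A hedged usage sketch with GPyOpt, since the repository linked above ships a local-penalization batch evaluator; argument names follow the documented interface as I recall it and should be checked against the current release.

```python
import numpy as np
import GPyOpt

def f(x):
    # Toy objective; GPyOpt passes a 2D array of query points.
    return np.sum(np.square(x), axis=1, keepdims=True)

domain = [{'name': 'x', 'type': 'continuous', 'domain': (-2, 2), 'dimensionality': 2}]

opt = GPyOpt.methods.BayesianOptimization(
    f=f,
    domain=domain,
    acquisition_type='EI',
    evaluator_type='local_penalization',  # penalise the acquisition around pending points
    batch_size=4,                          # points proposed per batch, evaluated in parallel
)
opt.run_optimization(max_iter=10)
```

The Lipschitz assumption on f is what lets the penalizer exclude a ball around each pending evaluation when selecting the next batch point.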

Efficient Off-Policy Meta-Reinforcement Learning via Probabilistic Context Variables

katerakelly/oyster 19 Mar 2019

In our approach, we perform online probabilistic filtering of latent task variables to infer how to solve a new task from small amounts of experience.

Efficient Exploration Meta Reinforcement Learning
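A sketch of the filtering step as I read the abstract: per-transition Gaussian factors produced by a context encoder are multiplied into a single posterior over the latent task variable z, and a posterior sample conditions the policy. The encoder itself and the prior factor are omitted here; shapes are placeholders.

```python
import torch

def product_of_gaussians(mus: torch.Tensor, sigmas_sq: torch.Tensor):
    """Combine per-transition factors N(mu_i, sigma_i^2) into one diagonal Gaussian."""
    precisions = 1.0 / sigmas_sq                     # [num_transitions, latent_dim]
    sigma_sq = 1.0 / precisions.sum(dim=0)
    mu = sigma_sq * (precisions * mus).sum(dim=0)
    return mu, sigma_sq

# Online filtering sketch: as context (s, a, r, s') accumulates, re-encode it,
# recompute the posterior, sample z, and condition the policy on z.
mus = torch.randn(8, 5)               # hypothetical encoder outputs for 8 transitions
sigmas_sq = torch.rand(8, 5) + 0.1
mu, sigma_sq = product_of_gaussians(mus, sigmas_sq)
z = mu + sigma_sq.sqrt() * torch.randn_like(mu)  # posterior sample drives exploration
```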

Automatic Chemical Design Using a Data-Driven Continuous Representation of Molecules

aspuru-guzik-group/chemical_vae 7 Oct 2016

We report a method to convert discrete representations of molecules to and from a multidimensional continuous representation.

Efficient Exploration
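A toy sketch of the encode/decode idea, with placeholder dimensions and a plain feed-forward encoder/decoder rather than the paper's architecture: a one-hot-encoded molecular string maps to a continuous latent vector, in which gradient-based optimisation and interpolation become possible, and back to character logits.

```python
import torch
import torch.nn as nn

class MoleculeVAE(nn.Module):
    """Toy VAE over one-hot string representations of molecules (sketch)."""
    def __init__(self, seq_len: int = 120, vocab: int = 35, latent_dim: int = 196):
        super().__init__()
        self.seq_len, self.vocab = seq_len, vocab
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(seq_len * vocab, 256), nn.ReLU())
        self.to_mu = nn.Linear(256, latent_dim)
        self.to_logvar = nn.Linear(256, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, seq_len * vocab))

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        return mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterised sample

    def decode(self, z: torch.Tensor) -> torch.Tensor:
        return self.decoder(z).view(-1, self.seq_len, self.vocab)   # character logits
```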

NSGA-Net: Neural Architecture Search using Multi-Objective Genetic Algorithm

ianwhale/nsga-net 8 Oct 2018

This paper introduces NSGA-Net -- an evolutionary approach for neural architecture search (NAS).

Efficient Exploration Neural Architecture Search +1

Model-Based Active Exploration

ramanans1/plan2explore 29 Oct 2018

Efficient exploration is an unsolved problem in Reinforcement Learning which is usually addressed by reactively rewarding the agent for fortuitously encountering novel situations.

Efficient Exploration

Count-Based Exploration in Feature Space for Reinforcement Learning

aslanides/aixijs 25 Jun 2017

We present a new method for computing a generalised state visit-count, which allows the agent to estimate the uncertainty associated with any state.

Atari Games Efficient Exploration
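A sketch of how a visit-count is typically turned into an exploration bonus added to the extrinsic reward; the hashed feature count below is an illustration only, whereas the paper derives a generalised visit-count from a density model over features.

```python
from collections import defaultdict
import numpy as np

counts = defaultdict(int)
beta = 0.1  # bonus scale (hypothetical value)

def exploration_bonus(features: np.ndarray) -> float:
    """Count-based bonus beta / sqrt(N(phi(s))) over a discretised feature vector."""
    key = tuple(np.round(features, 1))   # crude discretisation for counting
    counts[key] += 1
    return beta / np.sqrt(counts[key])   # add to the extrinsic reward
```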