Efficient Exploration

24 papers with code · Methodology

State-of-the-art leaderboards

No evaluation results yet. Help compare methods by submitting evaluation metrics.

Greatest papers with code

Deep Exploration via Bootstrapped DQN

NeurIPS 2016 tensorflow/models

Efficient exploration in complex environments remains a major challenge for reinforcement learning.

ATARI GAMES · EFFICIENT EXPLORATION
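
The exploration scheme named in the title maintains K bootstrapped Q-heads over a shared network and samples one head per episode, committing to it so that exploration is temporally extended. Below is a minimal tabular sketch of that scheme; the chain environment, head count, and bootstrap probability are illustrative assumptions, not the paper's convolutional agent.

```python
# Toy sketch of Bootstrapped DQN-style deep exploration: K Q-heads, one
# sampled per episode, each trained on a random bootstrap mask of the data.
import numpy as np

rng = np.random.default_rng(0)
K, n_states, n_actions = 10, 16, 2

# K independent Q-tables stand in for K bootstrapped network heads.
q_heads = rng.normal(scale=0.01, size=(K, n_states, n_actions))

def env_reset():
    return 0

def env_step(s, a):
    # Simple chain: action 0 moves right, action 1 moves left; reward at the end.
    s2 = min(s + 1, n_states - 1) if a == 0 else max(s - 1, 0)
    return s2, float(s2 == n_states - 1), s2 == n_states - 1

def run_episode(horizon=100, alpha=0.1, gamma=0.99, p_mask=0.5):
    k = rng.integers(K)                      # sample one head for the episode
    s = env_reset()
    for _ in range(horizon):
        a = int(np.argmax(q_heads[k, s]))    # act greedily w.r.t. head k
        s2, r, done = env_step(s, a)
        for j in np.nonzero(rng.random(K) < p_mask)[0]:   # bootstrap mask
            td = r + gamma * (0.0 if done else q_heads[j, s2].max()) - q_heads[j, s, a]
            q_heads[j, s, a] += alpha * td
        s = s2
        if done:
            break

for _ in range(50):
    run_episode()
```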

Noisy Networks for Exploration

ICLR 2018 chainer/chainerrl

We introduce NoisyNet, a deep reinforcement learning agent with parametric noise added to its weights, and show that the induced stochasticity of the agent's policy can be used to aid efficient exploration.

ATARI GAMES · EFFICIENT EXPLORATION
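
The mechanism is concrete enough to sketch: each linear layer carries learnable mean and scale parameters, and a fresh factored Gaussian perturbation is applied to the weights on every forward pass, so the policy's stochasticity comes from the weights themselves. A minimal numpy sketch follows, assuming the paper's factored-noise initialisation; the layer sizes are arbitrary.

```python
# Toy NoisyNet-style linear layer with factored Gaussian weight noise.
import numpy as np

rng = np.random.default_rng(0)

class NoisyLinear:
    def __init__(self, n_in, n_out, sigma0=0.5):
        bound = 1.0 / np.sqrt(n_in)
        self.mu_w = rng.uniform(-bound, bound, (n_out, n_in))
        self.mu_b = rng.uniform(-bound, bound, n_out)
        self.sigma_w = np.full((n_out, n_in), sigma0 / np.sqrt(n_in))
        self.sigma_b = np.full(n_out, sigma0 / np.sqrt(n_in))

    @staticmethod
    def _f(x):
        return np.sign(x) * np.sqrt(np.abs(x))   # noise-shaping function

    def __call__(self, x):
        # Factored noise: one vector per input unit, one per output unit.
        eps_in = self._f(rng.standard_normal(self.mu_w.shape[1]))
        eps_out = self._f(rng.standard_normal(self.mu_w.shape[0]))
        w = self.mu_w + self.sigma_w * np.outer(eps_out, eps_in)
        b = self.mu_b + self.sigma_b * eps_out
        return x @ w.T + b

layer = NoisyLinear(8, 4)
print(layer(rng.standard_normal(8)))   # a fresh noise sample per call
```

In training, the mean and scale parameters are learned by gradient descent, so the agent can anneal its own exploration noise instead of following a hand-tuned epsilon schedule.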

Automatic Chemical Design Using a Data-Driven Continuous Representation of Molecules

7 Oct 2016 maxhodak/keras-molecules

We report a method to convert discrete representations of molecules to and from a multidimensional continuous representation.

EFFICIENT EXPLORATION
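
The round trip between discrete molecule strings and a continuous vector space is the part worth sketching: once molecules live in a continuous space, they can be perturbed, interpolated, and optimised by gradient methods. In the toy sketch below the encoder and decoder are random linear maps, pure assumptions standing in for the paper's trained VAE; only the one-hot SMILES pipeline is concrete.

```python
# Toy round trip: SMILES string -> continuous vector -> SMILES string.
import numpy as np

CHARSET = sorted(set("CN(=O)c1ccccc1 "))   # tiny toy alphabet (assumption)
MAX_LEN, LATENT = 24, 8
rng = np.random.default_rng(0)

def one_hot(smiles):
    x = np.zeros((MAX_LEN, len(CHARSET)))
    for i, ch in enumerate(smiles.ljust(MAX_LEN)[:MAX_LEN]):
        x[i, CHARSET.index(ch)] = 1.0
    return x

W_enc = rng.normal(size=(MAX_LEN * len(CHARSET), LATENT))   # stand-in encoder
W_dec = rng.normal(size=(LATENT, MAX_LEN * len(CHARSET)))   # stand-in decoder

def encode(smiles):
    return one_hot(smiles).ravel() @ W_enc        # discrete -> continuous

def decode(z):
    logits = (z @ W_dec).reshape(MAX_LEN, len(CHARSET))
    return "".join(CHARSET[i] for i in logits.argmax(axis=1)).rstrip()

z = encode("CC(=O)Nc1ccccc1")                     # toy SMILES input
z_new = z + 0.1 * rng.standard_normal(LATENT)     # small move in latent space
print(decode(z_new))                              # decode the perturbed point
```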

Efficient Off-Policy Meta-Reinforcement Learning via Probabilistic Context Variables

19 Mar 2019 katerakelly/oyster

In our approach, we perform online probabilistic filtering of latent task variables to infer how to solve a new task from small amounts of experience.

EFFICIENT EXPLORATION
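
The filtering step is the key mechanism: the posterior over a latent task variable z is formed as a product of Gaussian factors, one per observed transition, so it sharpens as context accumulates, and sampling z gives temporally extended exploration. Below is a minimal sketch of that Gaussian product, with a hypothetical factor() standing in for the paper's learned context encoder.

```python
# Toy posterior over a latent task variable as a product of Gaussian factors.
import numpy as np

rng = np.random.default_rng(0)
true_z = np.array([1.0, -2.0])

def factor():
    """Hypothetical per-transition evidence: a noisy Gaussian about z."""
    return true_z + 0.5 * rng.standard_normal(2), np.full(2, 0.25)

def posterior(factors):
    """Product of independent Gaussians N(mu_i, var_i): precision-weighted mean."""
    mus = np.array([m for m, _ in factors])
    variances = np.array([v for _, v in factors])
    precision = (1.0 / variances).sum(axis=0)
    mean = (mus / variances).sum(axis=0) / precision
    return mean, 1.0 / precision

context = []
for t in range(1, 6):
    context.append(factor())
    mean, var = posterior(context)
    # The policy would condition on a sample z ~ N(mean, var); the shrinking
    # variance is what turns experience into reduced task uncertainty.
    print(f"after {t} transitions: mean={mean.round(2)}, var={var.round(3)}")
```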

NSGA-Net: Neural Architecture Search using Multi-Objective Genetic Algorithm

8 Oct 2018 ianwhale/nsga-net

This paper introduces NSGA-Net -- an evolutionary approach for neural architecture search (NAS).

EFFICIENT EXPLORATION · NEURAL ARCHITECTURE SEARCH · OBJECT CLASSIFICATION
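
The multi-objective core of an NSGA-II-style search can be sketched directly: candidate architectures are scored on competing objectives (say, validation error and compute cost) and ranked into non-dominated Pareto fronts that drive selection. The candidates below are random points; training, crossover, and mutation are omitted.

```python
# Toy non-dominated sorting over two objectives (both to be minimised).
import numpy as np

rng = np.random.default_rng(0)
population = rng.random((10, 2))   # rows: (error, flops) per candidate (toy)

def dominates(a, b):
    """a dominates b if it is no worse on every objective and better on one."""
    return np.all(a <= b) and np.any(a < b)

def nondominated_fronts(points):
    remaining = list(range(len(points)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts

for rank, front in enumerate(nondominated_fronts(population)):
    print(f"front {rank}: {front}")
# Selection keeps the best-ranked fronts; variation operators then breed
# the next generation of architectures from the survivors.
```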

Playing Text-Adventure Games with Graph-Based Deep Reinforcement Learning

NAACL 2019 rajammanabrolu/KG-DQN

Text-based adventure games provide a platform on which to explore reinforcement learning in the context of a combinatorial action space, such as natural language.

EFFICIENT EXPLORATION · QUESTION ANSWERING · TRANSFER LEARNING
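
One standard way to make such a combinatorial action space tractable, illustrated in the sketch below, is to factor commands into verb-object pairs and score every pair with a learned function. The bilinear scorer here is a random stand-in (an assumption); the paper itself scores actions against a knowledge-graph representation of the game state.

```python
# Toy verb-object factorisation of a text-game action space.
import numpy as np

rng = np.random.default_rng(0)
verbs = ["take", "open", "read", "go"]
objects = ["key", "door", "letter", "north"]
D = 6

v_emb = rng.normal(size=(len(verbs), D))    # verb embeddings (toy)
o_emb = rng.normal(size=(len(objects), D))  # object embeddings (toy)
W = rng.normal(size=(D, D))                 # stand-in for a trained Q-function

def best_command():
    scores = v_emb @ W @ o_emb.T            # a Q-value for every pair
    vi, oi = np.unravel_index(scores.argmax(), scores.shape)
    return f"{verbs[vi]} {objects[oi]}"

print(best_command())   # 4x4 = 16 commands scored without enumerating text
```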

Model-Based Active Exploration

29 Oct 2018 nnaisense/max

Efficient exploration is an unsolved problem in Reinforcement Learning which is usually addressed by reactively rewarding the agent for fortuitously encountering novel situations.

EFFICIENT EXPLORATION
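
The contrast with reactive novelty bonuses is what a sketch makes clear: an active explorer consults an ensemble of learned dynamics models before acting and seeks out the actions where the models disagree most, since that is where new data is most informative. The linear "models" below are random stand-ins; model fitting and planning are omitted.

```python
# Toy active exploration via ensemble disagreement over dynamics models.
import numpy as np

rng = np.random.default_rng(0)
n_models, state_dim, n_actions = 5, 3, 4

# Ensemble of toy linear dynamics: next_state = A[model, action] @ state.
A = rng.normal(size=(n_models, n_actions, state_dim, state_dim))

def utility(state):
    """Per-action disagreement: total predictive variance across the ensemble."""
    preds = np.einsum('maij,j->mai', A, state)   # (models, actions, state_dim)
    return preds.var(axis=0).sum(axis=-1)        # variance over models

state = rng.standard_normal(state_dim)
print("disagreement per action:", utility(state).round(3))
print("chosen action:", int(utility(state).argmax()))
```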

Efficient Exploration via State Marginal Matching

12 Jun 2019 RLAgent/state-marginal-matching

We recast exploration as a problem of State Marginal Matching (SMM): we aim to learn a policy whose state marginal distribution matches a given target state distribution, which can encode prior knowledge about the task.

EFFICIENT EXPLORATION
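
The SMM objective translates directly into an intrinsic reward: r(s) = log p*(s) - log rho_pi(s), which is maximised exactly when the policy's state marginal matches the target. In the sketch below both densities are histograms over a small discrete state space (an assumption; the paper uses learned density models) and the target is uniform coverage.

```python
# Toy State Marginal Matching reward over a discrete state space.
import numpy as np

rng = np.random.default_rng(0)
n_states = 8

p_star = np.full(n_states, 1.0 / n_states)        # target: uniform coverage

visits = rng.integers(1, 20, size=n_states).astype(float)
rho_pi = visits / visits.sum()                    # current state marginal

def smm_reward(s):
    return np.log(p_star[s]) - np.log(rho_pi[s])

rewards = np.array([smm_reward(s) for s in range(n_states)])
# Under-visited states earn the largest reward, pushing the policy's state
# marginal toward the target distribution.
for s in np.argsort(-rewards)[:3]:
    print(f"state {s}: reward {rewards[s]:+.2f}, rho_pi {rho_pi[s]:.3f}")
```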

Variance Networks: When Expectation Does Not Meet Your Expectations

ICLR 2019 da-molchanov/variance-networks

Ordinary stochastic neural networks rely mostly on the expected values of their weights to make predictions, while the induced noise serves mainly to capture uncertainty, prevent overfitting, and slightly boost performance through test-time averaging.

EFFICIENT EXPLORATION
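
The titular surprise can be reproduced in a few lines: fix every weight's mean at zero, store all information in learned variances, sample fresh weights on each forward pass, and predict by test-time averaging. The variances below are random (an assumption; the paper trains them variationally) and a ReLU is included so the averaged output is non-trivial.

```python
# Toy variance layer: zero-mean weights, all signal in the variances.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 8, 4
sigma = np.abs(rng.normal(size=(n_out, n_in)))    # learned std-devs (toy)

def variance_layer(x):
    w = sigma * rng.standard_normal(sigma.shape)  # sample w ~ N(0, sigma^2)
    return np.maximum(0.0, x @ w.T)               # ReLU keeps moments informative

x = rng.standard_normal(n_in)
samples = np.stack([variance_layer(x) for _ in range(1000)])
# A single sample is pure noise; the ensemble average is the usable
# prediction, and the spread across samples measures uncertainty.
print("mean prediction:", samples.mean(axis=0).round(3))
print("predictive std :", samples.std(axis=0).round(3))
```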