74 papers with code • 0 benchmarks • 2 datasets
Efficient Exploration is one of the main obstacles to scaling up modern deep reinforcement learning algorithms. The central challenge is balancing exploitation of current value estimates against gathering information about poorly understood states and actions.
We introduce NoisyNet, a deep reinforcement learning agent with parametric noise added to its weights, and show that the induced stochasticity of the agent's policy can be used to aid efficient exploration.
Ranked #1 on Atari Games on Atari 2600 Freeway
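The core NoisyNet idea can be illustrated with a minimal NumPy sketch: a linear layer whose weights are `mu + sigma * eps`, with fresh Gaussian noise `eps` drawn on every forward pass. This is an independent-noise simplification (the paper also describes a factorised-noise variant), and all names here are illustrative, not the paper's code.

```python
import numpy as np

class NoisyLinear:
    """Illustrative linear layer with parametric Gaussian noise on its weights.

    Each forward pass samples fresh noise, so the induced policy is stochastic;
    in a full agent, mu and sigma would both be learned by gradient descent.
    """
    def __init__(self, in_dim, out_dim, sigma0=0.5, seed=0):
        self.rng = np.random.default_rng(seed)
        self.w_mu = self.rng.normal(0.0, 1.0 / np.sqrt(in_dim), (out_dim, in_dim))
        self.w_sigma = np.full((out_dim, in_dim), sigma0 / np.sqrt(in_dim))
        self.b_mu = np.zeros(out_dim)
        self.b_sigma = np.full(out_dim, sigma0 / np.sqrt(in_dim))

    def forward(self, x):
        # Sample fresh noise each call: effective weight = mu + sigma * eps
        w = self.w_mu + self.w_sigma * self.rng.standard_normal(self.w_mu.shape)
        b = self.b_mu + self.b_sigma * self.rng.standard_normal(self.b_mu.shape)
        return x @ w.T + b

layer = NoisyLinear(4, 2)
x = np.ones(4)
y1, y2 = layer.forward(x), layer.forward(x)  # two calls, two different outputs
```

Because the noise scale `sigma` is learned per weight, the agent can tune how much it explores in different parts of the network rather than relying on a fixed epsilon-greedy schedule.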
Hamiltonian Monte Carlo (HMC) sampling methods provide a mechanism for defining distant proposals with high acceptance probabilities in a Metropolis-Hastings framework, enabling more efficient exploration of the state space than standard random-walk proposals.
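A compact, self-contained sketch of HMC makes the mechanism concrete: leapfrog integration of Hamiltonian dynamics proposes a distant state, and a Metropolis-Hastings accept/reject step corrects for discretisation error. The step size, trajectory length, and target below are illustrative choices, not from any particular implementation.

```python
import numpy as np

def hmc_sample(logp_grad, x0, n_samples=2000, eps=0.2, n_leapfrog=10, seed=0):
    """Minimal Hamiltonian Monte Carlo sampler (illustrative).

    logp_grad(x) must return (log p(x), grad log p(x)), up to a constant.
    """
    rng = np.random.default_rng(seed)
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    logp, grad = logp_grad(x)
    samples = []
    for _ in range(n_samples):
        p = rng.standard_normal(x.shape)           # resample momentum
        x_new, grad_new = x.copy(), grad
        p_new = p + 0.5 * eps * grad_new           # half step for momentum
        for i in range(n_leapfrog):                # leapfrog integration
            x_new = x_new + eps * p_new            # full step for position
            logp_new, grad_new = logp_grad(x_new)
            if i != n_leapfrog - 1:
                p_new = p_new + eps * grad_new     # full step for momentum
        p_new = p_new + 0.5 * eps * grad_new       # final half step
        # Metropolis-Hastings correction on the total Hamiltonian
        h_old = -logp + 0.5 * (p @ p)
        h_new = -logp_new + 0.5 * (p_new @ p_new)
        if np.log(rng.uniform()) < h_old - h_new:
            x, logp, grad = x_new, logp_new, grad_new
        samples.append(x.copy())
    return np.array(samples)

# Example target: standard normal, log p(x) = -x^2/2 up to a constant
std_normal = lambda x: (-0.5 * float(x @ x), -x)
chain = hmc_sample(std_normal, x0=[3.0])
```

Because each proposal follows the simulated dynamics for several leapfrog steps, it can land far from the current state while retaining a high acceptance probability, which is exactly what a small-step random walk cannot do.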
In our approach, we perform online probabilistic filtering of latent task variables to infer how to solve a new task from small amounts of experience.
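The filtering step can be sketched with a deliberately simplified model: instead of a learned continuous task embedding, track a posterior over a few hypothetical discrete tasks and update it online with the likelihood of each new observation. Everything below (the tasks, likelihood values, and function name) is an illustrative assumption, not the paper's method.

```python
import numpy as np

def update_task_belief(belief, likelihoods):
    """One step of online Bayesian filtering over a discrete latent task variable.

    belief:      prior probability of each candidate task
    likelihoods: probability of the newest observation under each task
    """
    posterior = belief * likelihoods      # Bayes' rule: prior x likelihood
    return posterior / posterior.sum()    # renormalise to a distribution

# Three hypothetical tasks, uniform prior; two observations favour task 1
belief = np.full(3, 1.0 / 3.0)
for lik in ([0.1, 0.8, 0.1], [0.2, 0.7, 0.1]):
    belief = update_task_belief(belief, np.array(lik))
```

Even with only two observations the posterior concentrates sharply on the most likely task, which is the sense in which a small amount of experience can identify how to solve a new task.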
We report a method to convert discrete representations of molecules to and from a multidimensional continuous representation.
We present a new method for computing a generalised state visit-count, which allows the agent to estimate the uncertainty associated with any state.
Ranked #9 on Atari Games on Atari 2600 Montezuma's Revenge
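The count-to-bonus idea behind such methods can be shown with a toy version: the paper computes a *generalised* visit-count (so it extends to states never seen exactly), but the simplification below uses exact counts over a hypothetical state-abstraction function `phi` to show the shape of the bonus, which decays as visits accumulate.

```python
from collections import defaultdict
import math

class CountBonus:
    """Toy exploration bonus from state visit-counts (illustrative only).

    Rarely visited states yield a large bonus; familiar states a small one.
    """
    def __init__(self, phi=lambda s: s, beta=1.0):
        self.phi = phi                 # maps a raw state to a hashable key
        self.beta = beta               # bonus scale
        self.counts = defaultdict(int)

    def bonus(self, state):
        key = self.phi(state)
        self.counts[key] += 1
        # Bonus decays as beta / sqrt(n): uncertainty shrinks with visits
        return self.beta / math.sqrt(self.counts[key])

cb = CountBonus()
first = cb.bonus("room_A")   # novel state: full bonus
later = cb.bonus("room_A")   # second visit: bonus shrinks by 1/sqrt(2)
```

Adding this bonus to the environment reward steers the agent toward high-uncertainty states, which is why such methods help in sparse-reward games like Montezuma's Revenge.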