We introduce NoisyNet, a deep reinforcement learning agent with parametric noise added to its weights, and show that the induced stochasticity of the agent's policy can be used to aid efficient exploration.
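The idea of parametric weight noise can be sketched as a linear layer whose weights are perturbed by learnable Gaussian noise on each forward pass. This is a minimal NumPy illustration, not the paper's factorised-Gaussian implementation; the class and parameter names are assumptions.

```python
import numpy as np

class NoisyLinear:
    """Linear layer with parametric Gaussian noise on its weights
    (a toy sketch in the spirit of NoisyNet)."""

    def __init__(self, in_dim, out_dim, sigma0=0.5, seed=0):
        self.rng = np.random.default_rng(seed)
        # Mean parameters, initialised like an ordinary linear layer.
        self.w_mu = self.rng.uniform(-1, 1, (out_dim, in_dim)) / np.sqrt(in_dim)
        self.b_mu = self.rng.uniform(-1, 1, out_dim) / np.sqrt(in_dim)
        # Noise scales; in NoisyNet these are learned alongside the means.
        self.w_sigma = np.full((out_dim, in_dim), sigma0 / np.sqrt(in_dim))
        self.b_sigma = np.full(out_dim, sigma0 / np.sqrt(in_dim))

    def forward(self, x, noisy=True):
        if noisy:
            # A fresh noise draw per forward pass perturbs the effective
            # weights, so the induced policy itself becomes stochastic.
            w = self.w_mu + self.w_sigma * self.rng.standard_normal(self.w_mu.shape)
            b = self.b_mu + self.b_sigma * self.rng.standard_normal(self.b_mu.shape)
        else:
            w, b = self.w_mu, self.b_mu
        return x @ w.T + b
```

With `noisy=False` the layer behaves like its deterministic counterpart; with noise enabled, repeated calls on the same input produce different outputs, which is the source of exploration.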
We report a method to convert discrete representations of molecules to and from a multidimensional continuous representation.
In our approach, we perform online probabilistic filtering of latent task variables to infer how to solve a new task from small amounts of experience.
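Online probabilistic filtering of a latent variable can be illustrated with conjugate Gaussian updates applied one observation at a time. This is a toy sketch under assumed model choices (scalar latent, known noise variance), not the approach's actual inference procedure.

```python
import numpy as np

def filter_task_posterior(observations, prior_mu=0.0, prior_var=1.0, noise_var=0.5):
    """Online filtering of a scalar latent task variable via
    conjugate Gaussian posterior updates (illustrative only)."""
    mu, var = prior_mu, prior_var
    trace = []
    for y in observations:
        # Precision-weighted combination of prior belief and new evidence.
        var_new = 1.0 / (1.0 / var + 1.0 / noise_var)
        mu = var_new * (mu / var + y / noise_var)
        var = var_new
        trace.append((mu, var))
    return trace
```

After only a handful of observations the posterior mean moves toward the data and its variance shrinks, which is the sense in which small amounts of experience suffice to infer the task variable.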
Text-based adventure games provide a platform on which to explore reinforcement learning in the context of combinatorial action spaces such as natural language.
Ordinary stochastic neural networks mostly rely on the expected values of their weights to make predictions, whereas the induced noise is mostly used to capture uncertainty, prevent overfitting, and slightly boost performance through test-time averaging.
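Test-time averaging can be sketched as averaging the outputs of a stochastic predictor over several noise draws rather than predicting once from the mean weights. The `forward` callable here is a stand-in for any network whose output varies per call.

```python
import numpy as np

def mc_average_predict(forward, x, n_samples=30):
    """Average a stochastic model's predictions over repeated
    noise draws (Monte Carlo test-time averaging sketch)."""
    return np.mean([forward(x) for _ in range(n_samples)], axis=0)
```

Averaging over many samples approximates the expected prediction, typically reducing the variance introduced by the weight noise.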
The MAP-Elites algorithm produces a set of high-performing solutions that vary according to features defined by the user.
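The core of MAP-Elites is an archive with one elite per cell of a discretised feature space: candidate solutions are mapped to a cell by their features and kept only if they beat the current occupant. This is a minimal sketch; `fitness`, `features`, `sample`, and `mutate` are assumed user-supplied callables, not part of any specific library.

```python
import numpy as np

def map_elites(fitness, features, sample, mutate, bins=10, iters=2000, seed=0):
    """Minimal MAP-Elites sketch: keep the best solution found
    in each cell of a discretised user-defined feature space."""
    rng = np.random.default_rng(seed)
    archive = {}  # cell index (tuple) -> (fitness, solution)
    for i in range(iters):
        if archive and i > bins:
            # After an initial random phase, mutate a randomly chosen elite.
            keys = list(archive)
            _, parent = archive[keys[rng.integers(len(keys))]]
            x = mutate(parent, rng)
        else:
            x = sample(rng)
        # Discretise the feature descriptor (assumed to lie in [0, 1]).
        cell = tuple(np.clip((np.asarray(features(x)) * bins).astype(int),
                             0, bins - 1))
        f = fitness(x)
        # A solution survives only if it beats the current elite in its cell.
        if cell not in archive or f > archive[cell][0]:
            archive[cell] = (f, x)
    return archive
```

A toy usage: maximising `-sum((x - 0.5)**2)` over points in the unit square, with the point itself as its feature descriptor, fills the archive with the best solution found in each grid cell.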