Therefore, if the aim of these methods is to enable faster acquisition of entirely new behaviors, we must evaluate them on task distributions that are sufficiently broad to enable generalization to new behaviors.
Learning a new task often requires both exploring to gather task-relevant information and exploiting this information to solve the task.
In our approach, we perform online probabilistic filtering of latent task variables to infer how to solve a new task from small amounts of experience.
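As a rough illustration of what such online filtering can look like (not the paper's actual method), the sketch below maintains a Gaussian posterior over a latent task vector z under the simplifying assumption that rewards are linear in hand-crafted transition features, so the posterior can be updated in closed form after every transition. The class name LatentTaskFilter and all parameters are hypothetical.

```python
import numpy as np

class LatentTaskFilter:
    """Minimal sketch: conjugate Gaussian filtering of a latent task variable z,
    assuming rewards follow r = phi(s, a) @ z + Gaussian noise."""

    def __init__(self, dim, prior_var=1.0, noise_var=0.1):
        self.precision = np.eye(dim) / prior_var  # posterior precision of z
        self.b = np.zeros(dim)                    # precision-weighted posterior mean
        self.noise_var = noise_var

    def update(self, features, reward):
        """Closed-form Bayesian update after observing one transition."""
        f = np.asarray(features, dtype=float)
        self.precision += np.outer(f, f) / self.noise_var
        self.b += f * reward / self.noise_var

    def sample(self, rng):
        """Sample a task hypothesis z from the current posterior
        (e.g. for posterior-sampling-style exploration)."""
        cov = np.linalg.inv(self.precision)
        mu = cov @ self.b
        return rng.multivariate_normal(mu, cov)


# Hypothetical usage: refine the task belief from a handful of transitions,
# then condition behavior on a sampled task hypothesis.
rng = np.random.default_rng(0)
true_z = np.array([0.5, -1.0, 2.0])          # unknown task parameters
filt = LatentTaskFilter(dim=3)
for _ in range(20):
    f = rng.normal(size=3)                   # features of one transition
    r = f @ true_z + rng.normal(scale=0.3)   # observed reward
    filt.update(f, r)
z_hat = filt.sample(rng)                     # policy would condition on z_hat
```

In practice, the latent variable in such approaches is typically inferred by a learned encoder over collected context rather than a linear-Gaussian model; the closed-form filter above only stands in for that inference step.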
We consider the problem of exploration in meta-reinforcement learning.
Although reinforcement learning methods can achieve impressive results in simulation, the real world presents two major challenges: generating samples is exceedingly expensive, and unexpected perturbations or unseen situations cause proficient but specialized policies to fail at test time.
There has been rapidly growing interest in meta-learning as a method for increasing the flexibility and sample efficiency of reinforcement learning.