Meta-Reinforcement Learning in Broad and Non-Parametric Environments

8 Aug 2021  ·  Zhenshan Bing, Lukas Knak, Fabrice Oliver Robin, Kai Huang, Alois Knoll

Recent state-of-the-art artificial agents lack the ability to adapt rapidly to new tasks, as they are trained exclusively for specific objectives and require massive amounts of interaction to learn new skills. Meta-reinforcement learning (meta-RL) addresses this challenge by leveraging knowledge learned from training tasks to perform well in previously unseen tasks. However, current meta-RL approaches limit themselves to narrow parametric task distributions, ignoring the qualitative differences between tasks that occur in the real world. In this paper, we introduce TIGR, a Task-Inference-based meta-RL algorithm using Gaussian mixture models (GMM) and gated recurrent units (GRU), designed for tasks in non-parametric environments. We employ a generative model involving a GMM to capture the multi-modality of the tasks. We decouple policy training from task-inference learning and train the inference mechanism efficiently on an unsupervised reconstruction objective. We provide a benchmark with qualitatively distinct tasks based on the half-cheetah environment and demonstrate the superior performance of TIGR compared to state-of-the-art meta-RL approaches in terms of sample efficiency (3-10 times faster), asymptotic performance, and applicability to non-parametric environments with zero-shot adaptation.
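The abstract describes the task-inference module only at a high level (a GRU encoder whose output parameterizes a Gaussian mixture over a latent task variable, trained with a reconstruction objective). The sketch below illustrates one plausible way such an encoder could be wired up; the framework choice (PyTorch), class names, dimensions, and sampling scheme are assumptions for illustration, not TIGR's actual implementation.

```python
# Minimal sketch of a GRU + GMM task encoder (assumed details, not the paper's code).
import torch
import torch.nn as nn


class GRUGMMTaskEncoder(nn.Module):
    """Encodes a trajectory of transitions into Gaussian-mixture parameters
    over a latent task variable z, in the spirit of the GRU + GMM
    task-inference module described in the abstract."""

    def __init__(self, transition_dim: int, hidden_dim: int = 64,
                 latent_dim: int = 8, num_components: int = 4):
        super().__init__()
        self.gru = nn.GRU(transition_dim, hidden_dim, batch_first=True)
        self.num_components = num_components
        self.latent_dim = latent_dim
        # Heads for mixture weights and per-component Gaussian parameters.
        self.logits_head = nn.Linear(hidden_dim, num_components)
        self.mean_head = nn.Linear(hidden_dim, num_components * latent_dim)
        self.logvar_head = nn.Linear(hidden_dim, num_components * latent_dim)

    def forward(self, transitions: torch.Tensor):
        # transitions: (batch, time, transition_dim), e.g. concatenated (s, a, r, s').
        _, h = self.gru(transitions)                 # h: (1, batch, hidden_dim)
        h = h.squeeze(0)
        logits = self.logits_head(h)                 # mixture weights (pre-softmax)
        means = self.mean_head(h).view(-1, self.num_components, self.latent_dim)
        logvars = self.logvar_head(h).view(-1, self.num_components, self.latent_dim)
        return logits, means, logvars

    def sample(self, transitions: torch.Tensor) -> torch.Tensor:
        """Draw one latent task variable z per trajectory."""
        logits, means, logvars = self.forward(transitions)
        comp = torch.distributions.Categorical(logits=logits).sample()   # (batch,)
        idx = comp.view(-1, 1, 1).expand(-1, 1, self.latent_dim)
        mean = means.gather(1, idx).squeeze(1)
        std = (0.5 * logvars.gather(1, idx).squeeze(1)).exp()
        # Gaussian reparameterization within the sampled component
        # (the component choice itself is sampled, not reparameterized).
        return mean + std * torch.randn_like(std)


if __name__ == "__main__":
    enc = GRUGMMTaskEncoder(transition_dim=20)
    batch = torch.randn(16, 50, 20)   # 16 trajectories of 50 transitions each
    z = enc.sample(batch)
    print(z.shape)                    # torch.Size([16, 8])
```

In a full pipeline along the lines of the abstract, the sampled z would condition the policy, and the encoder would be trained separately from the policy by decoding z to reconstruct rewards or dynamics (the unsupervised reconstruction objective mentioned above).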
