Generalizing Curricula for Reinforcement Learning

Curriculum learning for reinforcement learning (RL) is an active area of research that seeks to speed up the training of RL agents on a target task by first training them on a sequence of progressively more challenging source tasks. Each task in the sequence builds on skills learned in previous tasks, gradually developing the repertoire needed to solve the final task. Over the past few years, many automated methods for constructing curricula have been proposed. However, they share a key limitation: the curriculum must be regenerated from scratch for each new agent or task, and this generation process can be very expensive. Yet there is structure that can be exploited across tasks and agents, such that knowledge gained while developing a curriculum for one task can be reused to speed up creating a curriculum for another. In this paper, we present a method to generalize a curriculum learned for one set of tasks to a novel set of unseen tasks.
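
As a rough illustration of the curriculum idea the abstract describes, the sketch below trains a toy agent through a sequence of progressively longer corridor tasks, advancing once a recent-success threshold is met. The `GridEnv`, `Agent`, and thresholding details are illustrative assumptions for this sketch, not the paper's actual method or interface.

```python
import random


class GridEnv:
    """Toy 1-D corridor: start at 0, succeed by reaching `length`."""

    def __init__(self, length):
        self.length = length

    def run_episode(self, policy, max_steps=100):
        pos = 0
        for _ in range(max_steps):
            pos = max(0, pos + (1 if policy(pos) else -1))
            if pos >= self.length:
                return 1.0  # success
        return 0.0  # timed out


class Agent:
    """Trivial stochastic policy: one shared probability of stepping forward."""

    def __init__(self):
        self.p_forward = 0.5

    def policy(self, _state):
        return random.random() < self.p_forward

    def update(self, reward, lr=0.05):
        # Reinforce forward movement after successful episodes.
        self.p_forward = min(1.0, self.p_forward + lr * reward)


def train_through_curriculum(agent, curriculum, threshold=0.9, max_episodes=500):
    """Train on each source task in order, advancing once the recent
    success rate clears `threshold` (or the episode budget runs out)."""
    for env in curriculum:
        recent = []
        for episode in range(1, max_episodes + 1):
            reward = env.run_episode(agent.policy)
            agent.update(reward)
            recent = (recent + [reward])[-50:]
            if len(recent) == 50 and sum(recent) / 50 >= threshold:
                break  # source task mastered; move to the next one
        print(f"length={env.length}: advanced after {episode} episodes")


if __name__ == "__main__":
    # Progressively harder source tasks, ending with the target task.
    train_through_curriculum(Agent(), [GridEnv(n) for n in (2, 5, 10)])
```

Gating advancement on a success threshold is one common scheduling heuristic; the paper's contribution concerns reusing such curricula across tasks and agents, which this toy loop does not attempt.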
