A Versatile Agent for Fast Learning from Human Instructors

1 Mar 2022  ·  YiWen Chen, Zedong Zhang, Haofeng Liu, Jiayi Tan, Chee-Meng Chew, Marcelo Ang ·

In recent years, advances in machine learning have produced a wealth of work on intelligent robotics policies. However, sample inefficiency and poor transferability keep many algorithms from practical deployment, especially in human-robot collaboration, where few-shot learning and high flexibility are essential. To overcome this obstacle, we introduce a "Policy Pool" containing pre-trained skills that can be easily accessed and reused. An agent governs the Policy Pool by unfolding the requisite skills in a flexible sequence according to task-specific preferences, which can be interpreted automatically from one or a few human expert demonstrations. Under this hierarchical setting, our algorithm can learn a sparse-reward, multi-stage task from a single demonstration in a Mini-Grid environment, showing the potential for quickly mastering complex robotics skills from human instructors. The design of our algorithm also naturally supports lifelong learning, making it a versatile agent.
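The hierarchical setup described above can be sketched as follows. This is a minimal illustrative toy, not the paper's implementation: the class names, the representation of a demonstration as an ordered list of skill labels, and the integer "state" are all assumptions made for demonstration purposes.

```python
# Hypothetical sketch of the "Policy Pool" idea: a pool of pre-trained
# skills, and a high-level agent that unfolds them in a sequence
# inferred from a human demonstration. All names are illustrative.

class PolicyPool:
    """Stores pre-trained skills keyed by name for easy access and reuse."""
    def __init__(self):
        self._skills = {}

    def register(self, name, skill_fn):
        self._skills[name] = skill_fn

    def get(self, name):
        return self._skills[name]


class HierarchicalAgent:
    """Governs the pool by selecting skills in a task-specific order."""
    def __init__(self, pool):
        self.pool = pool
        self.skill_sequence = []

    def infer_preference(self, demonstration):
        # Toy stand-in for interpreting a human demonstration: here the
        # demo is assumed to already be an ordered list of skill labels.
        self.skill_sequence = list(demonstration)

    def act(self, state):
        # Unfold the requisite skills in the inferred order.
        for name in self.skill_sequence:
            state = self.pool.get(name)(state)
        return state


# Example: three toy skills acting on an integer "state".
pool = PolicyPool()
pool.register("pick", lambda s: s + 1)
pool.register("move", lambda s: s * 2)
pool.register("place", lambda s: s - 1)

agent = HierarchicalAgent(pool)
agent.infer_preference(["pick", "move", "place"])  # a single demonstration
print(agent.act(0))  # (0 + 1) * 2 - 1 = 1
```

Because new skills can simply be registered in the pool over time, a structure like this naturally accommodates the lifelong-learning setting the abstract mentions.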





