Learning Modular Robot Control Policies

20 May 2021 · Julian Whitman, Matthew Travers, Howie Choset

Modular robots can be rearranged into a new design, perhaps each day, to handle a wide variety of tasks by forming a customized robot for each new task. However, reconfiguring just the mechanism is not sufficient: each design also requires its own unique control policy. One could craft a policy from scratch for each new design, but such an approach is not scalable, especially given the large number of designs that can be generated from even a small set of modules. Instead, we create a modular policy framework in which the policy structure is conditioned on the hardware arrangement, and use just one training process to create a policy that controls a wide variety of designs. Our approach leverages the fact that the kinematics of a modular robot can be represented as a design graph, with nodes as modules and edges as connections between them. Given a robot, its design graph is used to create a policy graph with the same structure, where each node contains a deep neural network and modules of the same type share knowledge via shared parameters (e.g., all legs on a hexapod share the same network parameters). We develop a model-based reinforcement learning algorithm that interleaves model learning and trajectory optimization to train the policy. We show that the modular policy generalizes to a large number of designs that were not seen during training, without any additional learning. Finally, we demonstrate the policy controlling a variety of designs to locomote with both simulated and real robots.
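To make the design-graph-to-policy-graph mapping concrete, the PyTorch sketch below shows one way to build a policy whose structure mirrors a robot's design graph, with one network per module type shared across all modules of that type. This is an illustrative assumption, not the authors' implementation: the names `ModulePolicy` and `PolicyGraph`, the single parent-to-child message pass, and the message dimension are all hypothetical, and the paper's actual architecture and message-passing scheme may differ.

```python
import torch
import torch.nn as nn

class ModulePolicy(nn.Module):
    """One network per module *type*; every module of that type shares its weights."""
    def __init__(self, obs_dim: int, msg_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.act_dim = act_dim
        self.net = nn.Sequential(
            nn.Linear(obs_dim + msg_dim, hidden),
            nn.Tanh(),
            nn.Linear(hidden, act_dim + msg_dim),
        )

    def forward(self, obs: torch.Tensor, msg_in: torch.Tensor):
        out = self.net(torch.cat([obs, msg_in], dim=-1))
        # split the output into this module's action and its outgoing message
        return out[..., :self.act_dim], out[..., self.act_dim:]

class PolicyGraph(nn.Module):
    """Mirrors a robot's design graph: nodes are modules, edges are connections.
    Nodes of the same type index into one shared ModulePolicy, so a new design
    only changes the graph wiring, not the learned parameters."""
    def __init__(self, shared_nets: dict, node_types: list, edges: list, msg_dim: int):
        super().__init__()
        self.shared = nn.ModuleDict(shared_nets)  # module type -> shared network
        self.node_types = node_types              # node index -> module type
        self.edges = edges                        # (parent, child) connection pairs
        self.msg_dim = msg_dim

    def forward(self, observations: list):
        msgs = [torch.zeros(self.msg_dim) for _ in self.node_types]
        # one parent-to-child message pass down the design graph (an assumed scheme)
        for parent, child in self.edges:
            _, msgs[child] = self.shared[self.node_types[parent]](
                observations[parent], msgs[parent])
        # each node's action comes from its type's shared network
        return [self.shared[self.node_types[i]](observations[i], msgs[i])[0]
                for i in range(len(self.node_types))]

# Example: a hexapod as one body node with six legs that all share the "leg" weights.
OBS, MSG, ACT = 4, 8, 2
shared = {"body": ModulePolicy(OBS, MSG, ACT), "leg": ModulePolicy(OBS, MSG, ACT)}
types = ["body"] + ["leg"] * 6
edges = [(0, i) for i in range(1, 7)]  # body connects to each leg
policy = PolicyGraph(shared, types, edges, msg_dim=MSG)
actions = policy([torch.randn(OBS) for _ in types])
```

The key property this sketch illustrates is that a new design only rewires `types` and `edges`; the shared per-type parameters are untouched, which is what allows a single trained policy to transfer to designs never seen during training.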
