Interactive Dynamic Walking: Learning Gait Switching Policies with Generalization Guarantees

28 Sep 2021 · Prem Chand, Sushant Veer, Ioannis Poulakakis

In this paper, we consider the problem of adapting a dynamically walking bipedal robot to follow a leading co-worker while engaging in tasks that require physical interaction. Our approach relies on switching among a family of Dynamic Movement Primitives (DMPs) as governed by a supervisor. We train the supervisor to orchestrate the switching among the DMPs in order to adapt to the leader's intentions, which are only implicitly available in the form of interaction forces. The primary contribution of our approach is its ability to furnish certificates of generalization to novel leader intentions for the trained supervisor. This is achieved by leveraging the Probably Approximately Correct (PAC)-Bayes bounds from generalization theory. We demonstrate the efficacy of our approach by training a neural-network supervisor to adapt the gait of a dynamically walking biped to a leading collaborator whose intended trajectory is not known explicitly.
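The sketch below is a minimal, hypothetical illustration of the kind of pipeline the abstract describes: a small neural-network supervisor maps a window of measured interaction forces to a choice among a finite family of gait primitives (stand-ins for the DMPs), and a PAC-Bayes-style bound certifies expected cost on novel leader intentions. The force-window size, the primitive parameterization, the network architecture, and the specific (McAllester-style) bound form are all assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch (NumPy only): a neural-network supervisor that switches
# among a finite library of gait primitives based on recent interaction forces,
# plus a PAC-Bayes-style generalization certificate. Names and dimensions are
# illustrative assumptions, not the paper's actual code.

import numpy as np

rng = np.random.default_rng(0)

# Assumed finite library of gait primitives: (step_length_m, heading_change_rad).
GAIT_PRIMITIVES = np.array([
    [0.30,  0.00],   # nominal forward step
    [0.40,  0.00],   # longer step (speed up)
    [0.20,  0.00],   # shorter step (slow down)
    [0.30,  0.15],   # turn left
    [0.30, -0.15],   # turn right
])

class Supervisor:
    """Tiny MLP: window of interaction forces -> distribution over primitives."""

    def __init__(self, force_window=10, hidden=16, n_primitives=len(GAIT_PRIMITIVES)):
        in_dim = 3 * force_window                      # (Fx, Fy, Fz) per sample
        self.W1 = 0.1 * rng.standard_normal((hidden, in_dim))
        self.b1 = np.zeros(hidden)
        self.W2 = 0.1 * rng.standard_normal((n_primitives, hidden))
        self.b2 = np.zeros(n_primitives)

    def __call__(self, forces):
        x = np.asarray(forces).ravel()
        h = np.tanh(self.W1 @ x + self.b1)
        logits = self.W2 @ h + self.b2
        p = np.exp(logits - logits.max())              # softmax over primitives
        return p / p.sum()

def select_primitive(supervisor, forces):
    """Pick the gait primitive with the highest supervisor probability."""
    return GAIT_PRIMITIVES[int(np.argmax(supervisor(forces)))]

def pac_bayes_bound(empirical_cost, kl_divergence, n_envs, delta=0.01):
    """One common PAC-Bayes form (McAllester-style) for a [0, 1]-bounded cost:
    with probability >= 1 - delta over n_envs sampled leader intentions, the
    expected cost of the posterior is at most this value. Shown only to convey
    the idea of a generalization certificate; the paper may use a different bound."""
    slack = np.sqrt((kl_divergence + np.log(2 * np.sqrt(n_envs) / delta)) / (2 * n_envs))
    return empirical_cost + slack

# Example: a noisy force window from a (simulated) leader pulling forward and left.
forces = rng.normal([5.0, 2.0, 0.0], 1.0, size=(10, 3))
sup = Supervisor()
print("selected primitive (step length, heading):", select_primitive(sup, forces))
print("certified cost bound:", pac_bayes_bound(0.12, kl_divergence=3.0, n_envs=500))
```

In a PAC-Bayes setting such as this, one would typically train a distribution over supervisor weights rather than a single network; the bound then certifies the expected cost of that distribution on leader intentions drawn from the same (unknown) distribution as the training ones.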
