Probabilistic Model Learning and Long-term Prediction for Contact-rich Manipulation Tasks

11 Sep 2019  ·  Shahbaz Abdul Khader, Hang Yin, Pietro Falco, Danica Kragic

Learning dynamics models is an essential component of model-based reinforcement learning. The learned model can be used for multi-step ahead prediction of the state variable, a process referred to as long-term prediction. Because the predictions are made recursively, the model must be accurate enough to prevent significant error buildup. Accurate model learning for contact-rich manipulation is challenging due to the presence of varying dynamics regimes and discontinuities in the dynamics. A further challenge is the discontinuity in state evolution caused by impact conditions. Building on the approach of representing contact dynamics as a system of switching models, we present a solution that also supports discontinuous state evolution. We evaluate our method on a contact-rich motion task involving a 7-DOF industrial robot, using a trajectory-centric policy, and show that it can effectively propagate state distributions through discontinuities.
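The core idea of long-term prediction through switching dynamics with discontinuous state evolution can be illustrated with a minimal sketch. This is not the paper's implementation: the two linear regimes, the guard condition, the impact reset map, and Monte Carlo particle propagation are all illustrative assumptions introduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two hypothetical dynamics regimes for a state x = [position, velocity]:
# free motion and contact (stiffer, heavily damped). Values are illustrative.
A_free = np.array([[1.0, 0.1], [0.0, 1.0]])      # simple integrator dynamics
A_contact = np.array([[1.0, 0.02], [0.0, 0.5]])  # contact damps the velocity
NOISE_STD = 0.01

def step(x):
    """One-step prediction for a single state sample under the switching model."""
    if x[0] < 1.0:                 # assumed guard: contact occurs at position >= 1.0
        x_next = A_free @ x
    else:
        x_next = A_contact @ x
        # Discontinuous state evolution at impact: an instantaneous velocity
        # reset (a simple restitution model, assumed for this sketch).
        if x[1] > 0.0:
            x_next[1] = -0.3 * x[1]
    return x_next + NOISE_STD * rng.standard_normal(2)

def predict(x0_mean, x0_std, horizon=50, n_particles=500):
    """Propagate a Gaussian initial state distribution by sampling particles.

    Sampling-based propagation lets the predicted distribution pass through
    mode switches and impact discontinuities, where a single Gaussian
    moment-matching step would struggle.
    """
    particles = x0_mean + x0_std * rng.standard_normal((n_particles, 2))
    means, stds = [], []
    for _ in range(horizon):
        particles = np.array([step(x) for x in particles])
        means.append(particles.mean(axis=0))
        stds.append(particles.std(axis=0))
    return np.array(means), np.array(stds)

means, stds = predict(np.array([0.0, 0.5]), np.array([0.05, 0.05]))
```

At each step every particle is advanced through whichever regime its own state selects, so after the impact the particle cloud can become multi-modal, which the per-step mean and standard deviation only summarize coarsely.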


Categories: Robotics
