DeepKoCo: Efficient latent planning with a task-relevant Koopman representation

25 Nov 2020  ·  Bas van der Heijden, Laura Ferranti, Jens Kober, Robert Babuska

This paper presents DeepKoCo, a novel model-based agent that learns a latent Koopman representation from images. This representation allows DeepKoCo to plan efficiently using linear control methods, such as linear model predictive control. Compared to traditional agents, DeepKoCo learns task-relevant dynamics, thanks to a tailored lossy autoencoder network whose latent dynamics reconstruct and predict only the observed costs, rather than all observed dynamics. As our results show, DeepKoCo achieves final performance similar to traditional model-free methods on complex control tasks while being considerably more robust to distractor dynamics, making the proposed agent more amenable to real-life applications.
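To make the core idea concrete, below is a minimal sketch (not the authors' implementation) of the kind of model the abstract describes: an image encoder, linear (Koopman-style) latent dynamics z_{t+1} = A z_t + B u_t, and a lossy decoder that reconstructs only the observed cost rather than the full observation. The 64x64 image size, latent/action dimensions, class names, and loss weighting are assumptions for illustration; PyTorch is used here purely for convenience.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class KoopmanLatentModel(nn.Module):
    """Sketch of a task-relevant Koopman latent model (hypothetical names):
    images -> latent z, linear dynamics z' = A z + B u, and a cost head
    that reconstructs only the scalar per-step cost."""

    def __init__(self, latent_dim=32, action_dim=2, img_channels=3):
        super().__init__()
        # Convolutional encoder from 64x64 images to a latent vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(img_channels, 32, 4, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(128 * 6 * 6, latent_dim),
        )
        # Linear (Koopman) latent dynamics: z_{t+1} = A z_t + B u_t.
        self.A = nn.Linear(latent_dim, latent_dim, bias=False)
        self.B = nn.Linear(action_dim, latent_dim, bias=False)
        # Lossy decoder: predict the observed cost only, not the image.
        self.cost_head = nn.Linear(latent_dim, 1)

    def forward(self, obs, action):
        z = self.encoder(obs)
        z_next_pred = self.A(z) + self.B(action)
        cost_pred = self.cost_head(z).squeeze(-1)
        return z, z_next_pred, cost_pred


def training_loss(model, obs, action, next_obs, cost):
    """One-step training signal (illustrative): reconstruct the observed
    cost and predict the next latent state, instead of reconstructing pixels."""
    z, z_next_pred, cost_pred = model(obs, action)
    with torch.no_grad():
        z_next_target = model.encoder(next_obs)  # stop-gradient target; a modelling choice
    cost_loss = F.mse_loss(cost_pred, cost)
    dyn_loss = F.mse_loss(z_next_pred, z_next_target)
    return cost_loss + dyn_loss
```

Because the latent dynamics are linear in (A, B), planning over this model can reuse standard linear MPC or LQR machinery on the latent state, which is what makes the Koopman representation attractive for efficient planning.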
