no code implementations • 22 Jul 2024 • Claudius Kienle, Benjamin Alt, Onur Celik, Philipp Becker, Darko Katic, Rainer Jäkel, Gerhard Neumann

High-level robot skills represent an increasingly popular paradigm in robot programming.

no code implementations • 21 Jun 2024 • Philipp Becker, Niklas Freymuth, Gerhard Neumann

We propose KalMamba, an efficient architecture to learn representations for RL that combines the strengths of probabilistic SSMs with the scalability of deterministic SSMs.

1 code implementation • 20 Jun 2024 • Niklas Freymuth, Philipp Dahlinger, Tobias Würth, Philipp Becker, Aleksandar Taranovic, Onno Grönheim, Luise Kärger, Gerhard Neumann

To balance computational speed and accuracy, meshes with adaptive resolution are used, allocating more resources to critical parts of the geometry.

no code implementations • 7 Mar 2024 • Fabian Otto, Philipp Becker, Ngo Anh Vien, Gerhard Neumann

This transfer to deep methods is not straightforward and requires novel design choices such as robust policy updates, twin value function networks to avoid an optimization bias, and importance weight clipping.
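The snippet names two stabilization techniques without showing them. A minimal sketch of how twin value networks and importance-weight clipping are commonly realized (function names and the clip threshold are illustrative, not taken from the paper):

```python
import math

def twin_value_target(q1, q2, state):
    # Take the minimum of two independently trained critics to
    # counteract the overestimation bias of a single value network.
    return min(q1(state), q2(state))

def clipped_is_weight(log_pi_new, log_pi_old, clip=2.0):
    # Importance ratio pi_new / pi_old, clipped from above to bound
    # the variance of off-policy gradient estimates.
    return min(math.exp(log_pi_new - log_pi_old), clip)
```

Taking the pessimistic minimum over two critics and bounding the importance ratio are both standard ways to keep deep policy updates from diverging.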

1 code implementation • 31 Oct 2023 • Philipp Dahlinger, Philipp Becker, Maximilian Hüttenrauch, Gerhard Neumann

Before each update, it solves the trust region problem for an optimal step size, resulting in a more stable and faster optimization process.
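The idea of solving for a step size that respects a trust region before each update can be sketched in its simplest form, scaling a gradient step so it never leaves a ball of a given radius (a simplified stand-in, not the paper's actual solver):

```python
import math

def trust_region_step(grad, radius):
    # Scale the gradient so the step stays inside a trust region of
    # the given radius; the full step is taken if it already fits.
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, radius / norm) if norm > 0 else 0.0
    return [scale * g for g in grad]
```

Choosing the step size this way keeps each update inside a region where the local model of the objective can be trusted, which is what makes the optimization more stable.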

1 code implementation • 11 Apr 2023 • Maximilian Xiling Li, Onur Celik, Philipp Becker, Denis Blessing, Rudolf Lioutikov, Gerhard Neumann

Learning skills by imitation is a promising concept for the intuitive teaching of robots.

1 code implementation • 10 Feb 2023 • Philipp Becker, Sebastian Mossburger, Fabian Otto, Gerhard Neumann

Here, different self-supervised loss functions have distinct advantages and limitations depending on the information density of the underlying sensor modality.

1 code implementation • 17 Oct 2022 • Philipp Becker, Gerhard Neumann

We show that RSSMs use a suboptimal inference scheme and that models trained using this inference overestimate the aleatoric uncertainty of the ground truth system.

1 code implementation • 17 Oct 2022 • Niklas Freymuth, Nicolas Schreiber, Philipp Becker, Aleksandar Taranovic, Gerhard Neumann

We find that the geometric descriptors greatly help in generalizing to new task configurations and that combining them with our distribution-matching objective is crucial for representing and reproducing versatile behavior.

1 code implementation • ICLR 2022 • Vaisakh Shaj, Dieter Buchler, Rohit Sonker, Philipp Becker, Gerhard Neumann

Recurrent State-Space Models (RSSMs) are highly expressive models for learning patterns in time-series data and for system identification.

no code implementations • 27 May 2022 • Moritz Reuss, Niels van Duijkeren, Robert Krug, Philipp Becker, Vaisakh Shaj, Gerhard Neumann

These models need to precisely capture the robot dynamics, which consist of well-understood components, e.g., rigid body dynamics, and effects that remain challenging to capture, e.g., stick-slip friction and mechanical flexibilities.

1 code implementation • 8 Dec 2021 • Onur Celik, Dongzhuoran Zhou, Ge Li, Philipp Becker, Gerhard Neumann

This local and incremental learning results in a modular MoE model of high accuracy and versatility, where both properties can be scaled by adding more components on the fly.

no code implementations • 16 Nov 2021 • Giao Nguyen-Quynh, Philipp Becker, Chen Qiu, Maja Rudolph, Gerhard Neumann

In addition, driving data can often be multimodal in distribution: several distinct predictions are likely, and averaging across them can hurt model performance.
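A toy numerical illustration of why averaging multimodal predictions hurts (the values are invented for illustration):

```python
def average_prediction(modes):
    # Averaging two distinct likely predictions (e.g. steering left
    # at -1.0 vs. right at +1.0) yields a value between the modes.
    return sum(modes) / len(modes)

def distance_to_nearest_mode(pred, modes):
    # The averaged prediction can sit far from every actual mode,
    # i.e. it may itself be an unlikely outcome.
    return min(abs(pred - m) for m in modes)
```

Here the average of the two modes -1.0 and +1.0 is 0.0, which is at distance 1.0 from both plausible behaviors.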

no code implementations • 15 Nov 2021 • Niklas Freymuth, Philipp Becker, Gerhard Neumann

Inverse Reinforcement Learning infers a reward function from expert demonstrations, aiming to encode the behavior and intentions of the expert.

1 code implementation • ICLR 2021 • Fabian Otto, Philipp Becker, Ngo Anh Vien, Hanna Carolin Ziesche, Gerhard Neumann

However, enforcing such trust regions in deep reinforcement learning is difficult.

2 code implementations • 20 Oct 2020 • Vaisakh Shaj, Philipp Becker, Dieter Buchler, Harit Pandya, Niels van Duijkeren, C. James Taylor, Marc Hanheide, Gerhard Neumann

We adopt a recent probabilistic recurrent neural network architecture, called Recurrent Kalman Networks (RKNs), for model learning by conditioning its transition dynamics on the control actions.
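The core idea of conditioning transition dynamics on control actions can be sketched as a linear latent transition whose next state depends on both the previous state and the applied action (a generic linear sketch, not the RKN architecture itself):

```python
def action_conditional_transition(state, action, A, B):
    # Next latent mean x' = A @ x + B @ u: the transition depends on
    # both the previous latent state x and the control action u.
    return [sum(A[i][j] * state[j] for j in range(len(state)))
            + sum(B[i][k] * action[k] for k in range(len(action)))
            for i in range(len(A))]
```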

1 code implementation • ICLR 2020 • Philipp Becker, Oleg Arenz, Gerhard Neumann

Such behavior is appealing whenever we deal with highly multi-modal data where modelling single modes correctly is more important than covering all the modes.

3 code implementations • 17 May 2019 • Philipp Becker, Harit Pandya, Gregor Gebhardt, Cheng Zhao, James Taylor, Gerhard Neumann

In order to integrate uncertainty estimates into deep time-series modelling, Kalman Filters (KFs) (Kalman, 1960) have been combined with deep learning models. However, such approaches typically rely on approximate inference techniques such as variational inference, which makes learning more complex and often less scalable due to approximation errors.
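The contrast drawn above is with the classical KF, whose predict and update steps are exact and closed-form in the linear-Gaussian case, with no variational approximation needed. A minimal one-dimensional sketch (standard textbook equations, not the paper's architecture):

```python
def kalman_step(mean, var, a, q, y, h, r):
    # Predict: propagate the Gaussian belief through linear dynamics
    # x' = a * x + noise, where the process noise has variance q.
    pred_mean = a * mean
    pred_var = a * a * var + q
    # Update: condition on the observation y = h * x + noise
    # (variance r) via the Kalman gain; both steps are exact.
    gain = pred_var * h / (h * h * pred_var + r)
    new_mean = pred_mean + gain * (y - h * pred_mean)
    new_var = (1.0 - gain * h) * pred_var
    return new_mean, new_var
```

Because every quantity here is a closed-form function of the Gaussian parameters, learning in this setting avoids the approximation errors that variational schemes introduce.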

Papers With Code is a free resource with all data licensed under CC-BY-SA.