1 code implementation • 21 Jan 2024 • Ge Li, Hongyi Zhou, Dominik Roth, Serge Thilges, Fabian Otto, Rudolf Lioutikov, Gerhard Neumann
Current advancements in reinforcement learning (RL) have predominantly focused on learning step-based policies that generate actions for each perceived state.
no code implementations • 15 Dec 2023 • Paul Maria Scheikl, Nicolas Schreiber, Christoph Haas, Niklas Freymuth, Gerhard Neumann, Rudolf Lioutikov, Franziska Mathis-Ullrich
Policy learning in robot-assisted surgery (RAS) lacks data-efficient and versatile methods that exhibit the desired motion quality for delicate surgical interventions.
no code implementations • 22 Jun 2023 • Fabian Otto, Hongyi Zhou, Onur Celik, Ge Li, Rudolf Lioutikov, Gerhard Neumann
We introduce a novel deep reinforcement learning (RL) approach called Movement Primitive-based Planning Policy (MP3).
1 code implementation • 11 Apr 2023 • Maximilian Xiling Li, Onur Celik, Philipp Becker, Denis Blessing, Rudolf Lioutikov, Gerhard Neumann
Learning skills by imitation is a promising concept for the intuitive teaching of robots.
1 code implementation • 5 Apr 2023 • Moritz Reuss, Maximilian Li, Xiaogang Jia, Rudolf Lioutikov
To the best of our knowledge, this is the first work that (a) represents a behavior policy based on such a decoupled SDM, (b) learns an SDM-based policy in the domain of GCIL, and (c) provides a way to simultaneously learn a goal-dependent and a goal-independent policy from play data.
1 code implementation • 27 Mar 2023 • Denis Blessing, Onur Celik, Xiaogang Jia, Moritz Reuss, Maximilian Xiling Li, Rudolf Lioutikov, Gerhard Neumann
Imitation learning uses demonstration data to train policies that solve complex tasks.
no code implementations • 4 Oct 2022 • Ge Li, Zeqi Jin, Michael Volpp, Fabian Otto, Rudolf Lioutikov, Gerhard Neumann
MPs can be broadly categorized into two types: (a) dynamics-based approaches that generate smooth trajectories from any initial state, e.g., Dynamic Movement Primitives (DMPs), and (b) probabilistic approaches that capture higher-order statistics of the motion, e.g., Probabilistic Movement Primitives (ProMPs).
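The dynamics-based property mentioned above — smooth convergence from any initial state — can be illustrated with a minimal one-dimensional DMP-style sketch. This is a simplification for illustration (a bare spring-damper transformation system without a learned forcing term); the function name and constants are assumptions, not the formulation used in the listed paper.

```python
import numpy as np

def dmp_rollout(y0, goal, tau=1.0, alpha=25.0, beta=6.25, dt=0.01, steps=100):
    """Integrate a critically damped spring-damper system toward `goal`.

    This is the transformation system at the core of a DMP: regardless of
    the initial state `y0`, the trajectory converges smoothly to `goal`.
    """
    y, dy = y0, 0.0
    traj = []
    for _ in range(steps):
        # Spring-damper acceleration pulling y toward the goal.
        ddy = (alpha * (beta * (goal - y) - dy)) / tau**2
        dy += ddy * dt
        y += dy * dt
        traj.append(y)
    return np.array(traj)

traj = dmp_rollout(y0=0.0, goal=1.0)
```

Starting the rollout from a different `y0` still yields a smooth trajectory ending at the goal, which is the "from any initial state" property the categorization refers to; a full DMP adds a learned nonlinear forcing term to shape the path.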
1 code implementation • 12 Aug 2021 • Ajinkya Jain, Stephen Giguere, Rudolf Lioutikov, Scott Niekum
Our core contributions include a novel representation for distributions over rigid body transformations and articulation model parameters based on screw theory, von Mises-Fisher distributions, and Stiefel manifolds.
1 code implementation • 8 Mar 2021 • Farzan Memarian, Wonjoon Goo, Rudolf Lioutikov, Scott Niekum, Ufuk Topcu
We introduce Self-supervised Online Reward Shaping (SORS), which aims to improve the sample efficiency of any RL algorithm in sparse-reward environments by automatically densifying rewards.
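The densification idea can be sketched as learning a dense surrogate score that agrees with the ordering of trajectories under the sparse return. The Bradley-Terry-style comparison loss below is a generic illustration of such ranking-based reward learning, not the exact SORS objective; names and values are assumptions.

```python
import numpy as np

def ranking_loss(score_a, score_b):
    """Negative log-probability that trajectory A outranks B under a
    softmax over dense scores. When A truly achieved the higher sparse
    return, minimizing this loss pushes the dense score of A above B."""
    return -np.log(np.exp(score_a) / (np.exp(score_a) + np.exp(score_b)))

# A dense score that agrees with the sparse ordering incurs a small loss;
# a disagreeing score incurs a large one.
loss_agree = ranking_loss(2.0, 0.0)
loss_disagree = ranking_loss(0.0, 2.0)
```

Training a reward network against such pairwise comparisons, collected online from the agent's own sparse returns, yields a dense signal any off-the-shelf RL algorithm can then optimize.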
1 code implementation • 24 Aug 2020 • Ajinkya Jain, Rudolf Lioutikov, Caleb Chuck, Scott Niekum
Robots in human environments will need to interact with a wide variety of articulated objects such as cabinets, drawers, and dishwashers while assisting humans in performing day-to-day tasks.
1 code implementation • 29 May 2018 • Maximilian Sieb, Matthias Schultheis, Sebastian Szelag, Rudolf Lioutikov, Jan Peters
Using movement primitive libraries is an effective means to enable robots to solve more complex tasks.
no code implementations • NeurIPS 2015 • Abbas Abdolmaleki, Rudolf Lioutikov, Jan R. Peters, Nuno Lau, Luis Paulo Reis, Gerhard Neumann
Stochastic search algorithms are general black-box optimizers.