Search Results for author: Jan Peters

Found 184 papers, 62 papers with code

Stable Port-Hamiltonian Neural Networks

no code implementations • 4 Feb 2025 Fabian J. Roth, Dominik K. Klein, Maximilian Kannapinn, Jan Peters, Oliver Weeger

In recent years, nonlinear dynamic system identification using artificial neural networks has garnered attention due to its manifold potential applications in virtually all branches of science and engineering.

Noise-conditioned Energy-based Annealed Rewards (NEAR): A Generative Framework for Imitation Learning from Observation

no code implementations • 24 Jan 2025 Anish Abhijit Diwan, Julen Urain, Jens Kober, Jan Peters

This paper introduces a new imitation learning framework based on energy-based generative models capable of learning complex, physics-dependent robot motion policies from state-only expert motion trajectories.

Denoising Imitation Learning

Diminishing Return of Value Expansion Methods

1 code implementation • 29 Dec 2024 Daniel Palenicek, Michael Lutter, João Carvalho, Daniel Dennert, Faran Ahmad, Jan Peters

Model-based reinforcement learning aims to increase sample efficiency, but the accuracy of dynamics models and the resulting compounding errors are often seen as key limitations.

Model-based Reinforcement Learning reinforcement-learning +1

Fast and Robust Visuomotor Riemannian Flow Matching Policy

no code implementations • 14 Dec 2024 Haoran Ding, Noémie Jaquier, Jan Peters, Leonel Rozo

Diffusion-based visuomotor policies excel at learning complex robotic tasks by effectively combining visual data with high-dimensional, multi-modal action distributions.

Denoising

Grasp Diffusion Network: Learning Grasp Generators from Partial Point Clouds with Diffusion Models in SO(3)xR3

no code implementations • 11 Dec 2024 Joao Carvalho, An T. Le, Philipp Jahr, Qiao Sun, Julen Urain, Dorothea Koert, Jan Peters

One approach to solving this problem is to leverage simulation to create large datasets of paired objects and grasp poses, and then learn a conditional generative model that can be prompted quickly during deployment.

Collision Avoidance Robot Manipulation

Particle-based 6D Object Pose Estimation from Point Clouds using Diffusion Models

1 code implementation • 1 Dec 2024 Christian Möller, Niklas Funk, Jan Peters

To account for this multimodality, this work proposes training a diffusion-based generative model for 6D object pose estimation.

6D Pose Estimation using RGB Object
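The iterative denoising idea behind such particle-based diffusion samplers can be illustrated with a toy sketch. Everything here is an assumption for illustration: the quadratic stand-in score model, the 3-DoF "pose", and the annealed Langevin schedule; the actual method learns a point-cloud-conditioned model over SE(3).

```python
import numpy as np

rng = np.random.default_rng(0)
true_pose = np.array([0.5, -0.2, 1.0])   # toy 3-DoF "pose" (illustrative only)

def score(x, sigma):
    # Stand-in for a learned score network: gradient of log N(x; true_pose, sigma^2).
    return (true_pose - x) / sigma**2

# Start a population of pose hypotheses ("particles") from pure noise.
particles = rng.normal(size=(64, 3))

# Annealed Langevin dynamics: follow the score at decreasing noise levels.
for sigma in np.geomspace(1.0, 0.01, 50):
    step = 0.1 * sigma**2
    particles = (particles
                 + step * score(particles, sigma)
                 + np.sqrt(2 * step) * rng.normal(size=particles.shape))

# The particle cloud concentrates near the underlying pose.
```

The multimodality the entry mentions is handled naturally: with a multimodal score model, different particles settle into different pose hypotheses.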

Global Tensor Motion Planning

1 code implementation • 28 Nov 2024 An T. Le, Kay Hansel, João Carvalho, Joe Watson, Julen Urain, Armin Biess, Georgia Chalvatzaki, Jan Peters

Batch planning is increasingly necessary to quickly produce diverse and high-quality motion plans for downstream learning applications, such as distillation and imitation learning.

Dataset Generation Diversity +2

Unsupervised Skill Discovery for Robotic Manipulation through Automatic Task Generation

no code implementations • 7 Oct 2024 Paul Jansonnie, Bingbing Wu, Julien Perez, Jan Peters

Furthermore, the learned skills can be used to solve a set of unseen manipulation tasks, in simulation as well as on a real robotic platform.

Hierarchical Reinforcement Learning

Handling Long-Term Safety and Uncertainty in Safe Reinforcement Learning

1 code implementation • 18 Sep 2024 Jonas Günster, Puze Liu, Jan Peters, Davide Tateo

Safety is one of the key issues preventing the deployment of reinforcement learning techniques in real-world robots.

reinforcement-learning Reinforcement Learning +2

One Policy to Run Them All: an End-to-end Learning Approach to Multi-Embodiment Locomotion

1 code implementation • 10 Sep 2024 Nico Bohlinger, Grzegorz Czechmanowski, Maciej Krupka, Piotr Kicki, Krzysztof Walas, Jan Peters, Davide Tateo

Our experiments show that URMA can learn a locomotion policy on multiple embodiments that can be easily transferred to unseen robot platforms in simulation and the real world.

Deep Reinforcement Learning reinforcement-learning

ActionFlow: Equivariant, Accurate, and Efficient Policies with Spatially Symmetric Flow Matching

no code implementations • 6 Sep 2024 Niklas Funk, Julen Urain, Joao Carvalho, Vignesh Prasad, Georgia Chalvatzaki, Jan Peters

Despite the impressive results of deep generative models in complex manipulation tasks, the absence of a representation that encodes intricate spatial relationships between observations and actions often limits spatial generalization, necessitating large amounts of demonstrations.

Action Generation Spatial Reasoning

Safe and Efficient Path Planning under Uncertainty via Deep Collision Probability Fields

no code implementations • 6 Sep 2024 Felix Herrmann, Sebastian Zach, Jacopo Banfi, Jan Peters, Georgia Chalvatzaki, Davide Tateo

Estimating collision probabilities between robots and environmental obstacles or other moving agents is crucial to ensure safety during path planning.

Autonomous Driving

Inverse decision-making using neural amortized Bayesian actors

1 code implementation • 4 Sep 2024 Dominik Straub, Tobias F. Niehues, Jan Peters, Constantin A. Rothkopf

They attribute behavioral variability and biases to interpretable entities such as perceptual and motor uncertainty, prior beliefs, and behavioral costs.

Attribute Bayesian Inference +1

Bridging the gap between Learning-to-plan, Motion Primitives and Safe Reinforcement Learning

no code implementations • 26 Aug 2024 Piotr Kicki, Davide Tateo, Puze Liu, Jonas Guenster, Jan Peters, Krzysztof Walas

We evaluate our approach against state-of-the-art safe reinforcement learning methods, showing that our technique, particularly when exploiting task structure, outperforms baseline methods in challenging scenarios such as planning to hit in robot air hockey.

reinforcement-learning Reinforcement Learning +2

Machine Learning with Physics Knowledge for Prediction: A Survey

no code implementations • 19 Aug 2024 Joe Watson, Chen Song, Oliver Weeger, Theo Gruner, An T. Le, Kay Hansel, Ahmed Hendawy, Oleg Arenz, Will Trojak, Miles Cranmer, Carlo D'Eramo, Fabian Bülow, Tanmay Goyal, Jan Peters, Martin W. Hoffman

This survey examines the broad suite of methods and models for combining machine learning with physics knowledge for prediction and forecasting, with a focus on partial differential equations.

Data Augmentation Physics-informed machine learning +2

A Comparison of Imitation Learning Algorithms for Bimanual Manipulation

no code implementations • 13 Aug 2024 Michael Drolet, Simon Stepputtis, Siva Kailas, Ajinkya Jain, Jan Peters, Stefan Schaal, Heni Ben Amor

Amidst the wide popularity of imitation learning algorithms in robotics, their properties regarding hyperparameter sensitivity, ease of training, data efficiency, and performance have not been well-studied in high-precision industry-inspired environments.

Imitation Learning

MuJoCo MPC for Humanoid Control: Evaluation on HumanoidBench

1 code implementation • 1 Aug 2024 Moritz Meser, Aditya Bhatt, Boris Belousov, Jan Peters

We tackle HumanoidBench, the recently introduced benchmark for whole-body humanoid control, using MuJoCo MPC.

Humanoid Control MuJoCo

PianoMime: Learning a Generalist, Dexterous Piano Player from Internet Demonstrations

no code implementations • 25 Jul 2024 Cheng Qian, Julen Urain, Kevin Zakka, Jan Peters

In this work, we introduce PianoMime, a framework for training a piano-playing agent using internet demonstrations.

MoVEInt: Mixture of Variational Experts for Learning Human-Robot Interactions from Demonstrations

1 code implementation • 10 Jul 2024 Vignesh Prasad, Alap Kshirsagar, Dorothea Koert, Ruth Stock-Homburg, Jan Peters, Georgia Chalvatzaki

In this work, we propose a novel approach for learning a shared latent space representation for HRIs from demonstrations in a Mixture of Experts fashion for reactively generating robot actions from human observations.

Dude: Dual Distribution-Aware Context Prompt Learning For Large Vision-Language Model

no code implementations • 5 Jul 2024 Duy M. H. Nguyen, An T. Le, Trung Q. Nguyen, Nghiem T. Diep, Tai Nguyen, Duy Duong-Tran, Jan Peters, Li Shen, Mathias Niepert, Daniel Sonntag

Prompt learning methods are gaining increasing attention due to their ability to customize large vision-language models to new domains using pre-trained contextual knowledge and minimal training data.

Image Augmentation Language Modeling +1

ROS-LLM: A ROS framework for embodied AI with task feedback and structured reasoning

1 code implementation • 28 Jun 2024 Christopher E. Mower, Yuhui Wan, Hongzhan Yu, Antoine Grosnit, Jonas Gonzalez-Billandon, Matthieu Zimmer, Jinlong Wang, Xinyu Zhang, Yao Zhao, Anbang Zhai, Puze Liu, Daniel Palenicek, Davide Tateo, Cesar Cadena, Marco Hutter, Jan Peters, Guangjian Tian, Yuzheng Zhuang, Kun Shao, Xingyue Quan, Jianye Hao, Jun Wang, Haitham Bou-Ammar

Key features of the framework include: integration of ROS with an AI agent connected to a plethora of open-source and commercial LLMs, automatic extraction of a behavior from the LLM output and execution of ROS actions/services, support for three behavior modes (sequence, behavior tree, state machine), imitation learning for adding new robot actions to the library of possible actions, and LLM reflection via human and environment feedback.

AI Agent Imitation Learning

Adaptive $Q$-Network: On-the-fly Target Selection for Deep Reinforcement Learning

no code implementations • 25 May 2024 Théo Vincent, Fabian Wahren, Jan Peters, Boris Belousov, Carlo D'Eramo

Deep Reinforcement Learning (RL) is well known for being highly sensitive to hyperparameters, requiring substantial effort from practitioners to optimize them for the problem at hand.

Atari Games AutoML +4

Safe Reinforcement Learning on the Constraint Manifold: Theory and Applications

no code implementations • 13 Apr 2024 Puze Liu, Haitham Bou-Ammar, Jan Peters, Davide Tateo

Indeed, safety specifications, often represented as constraints, can be complex and non-linear, making safety challenging to guarantee in learning systems.

reinforcement-learning Reinforcement Learning +1

What Matters for Active Texture Recognition With Vision-Based Tactile Sensors

no code implementations • 20 Mar 2024 Alina Böhm, Tim Schneider, Boris Belousov, Alap Kshirsagar, Lisa Lin, Katja Doerschner, Knut Drewing, Constantin A. Rothkopf, Jan Peters

By evaluating our method on a previously published Active Clothing Perception Dataset and on a real robotic system, we establish that the choice of the active exploration strategy has only a minor influence on the recognition accuracy, whereas data augmentation and dropout rate play a significantly larger role.

Data Augmentation

Iterated $Q$-Network: Beyond One-Step Bellman Updates in Deep Reinforcement Learning

no code implementations • 4 Mar 2024 Théo Vincent, Daniel Palenicek, Boris Belousov, Jan Peters, Carlo D'Eramo

It has been observed that this scheme can be potentially generalized to carry out multiple iterations of the Bellman operator at once, benefiting the underlying learning algorithm.

Atari Games continuous-control +4

Information-Theoretic Safe Bayesian Optimization

no code implementations • 23 Feb 2024 Alessandro G. Bottero, Carlos E. Luis, Julia Vinogradska, Felix Berkenkamp, Jan Peters

In this paper, we propose an information-theoretic safe exploration criterion that directly exploits the GP posterior to identify the most informative safe parameters to evaluate.

Bayesian Optimization Decision Making +2
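As background on this style of criterion, a common way to certify safety from a GP posterior is a lower confidence bound on the constraint function. The sketch below is a minimal, hand-rolled illustration under invented data; it shows only the conservative safe set, not the paper's information-theoretic acquisition rule.

```python
import numpy as np

def rbf(a, b, ls=0.5):
    # Squared-exponential kernel with unit prior variance.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

X = np.array([0.0, 0.5, 1.0])       # parameters evaluated so far
y = np.array([1.0, 0.8, -0.2])      # observed safety values g(x); safe iff g >= 0
Xs = np.linspace(-0.5, 1.5, 41)     # candidate parameters

# GP posterior mean and variance at the candidates.
K = rbf(X, X) + 1e-6 * np.eye(len(X))
Ks = rbf(Xs, X)
mu = Ks @ np.linalg.solve(K, y)
v = np.linalg.solve(K, Ks.T)
var = 1.0 - np.einsum('ij,ji->i', Ks, v)

# Conservative safe set: parameters whose lower confidence bound is non-negative.
lcb = mu - 2.0 * np.sqrt(np.maximum(var, 0.0))
safe = Xs[lcb >= 0.0]
```

A safe-exploration method would then pick, among `safe`, the point expected to shrink uncertainty about the safe set's boundary the most.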

Structure-Aware E(3)-Invariant Molecular Conformer Aggregation Networks

1 code implementation • 3 Feb 2024 Duy M. H. Nguyen, Nina Lukashina, Tai Nguyen, An T. Le, TrungTin Nguyen, Nhat Ho, Jan Peters, Daniel Sonntag, Viktor Zaverkin, Mathias Niepert

Inspired by recent work on using ensembles of conformers in conjunction with 2D graph representations, we propose $\mathrm{E}$(3)-invariant molecular conformer aggregation networks.

Molecular Property Prediction Property Prediction

Sharing Knowledge in Multi-Task Deep Reinforcement Learning

1 code implementation ICLR 2020 Carlo D'Eramo, Davide Tateo, Andrea Bonarini, Marcello Restelli, Jan Peters

We study the benefit of sharing representations among tasks to enable the effective use of deep neural networks in Multi-Task Reinforcement Learning.

Deep Reinforcement Learning reinforcement-learning

Parameterized Projected Bellman Operator

1 code implementation • 20 Dec 2023 Théo Vincent, Alberto Maria Metelli, Boris Belousov, Jan Peters, Marcello Restelli, Carlo D'Eramo

We formulate an optimization problem to learn PBO for generic sequential decision-making problems, and we theoretically analyze its properties in two representative classes of RL problems.

Decision Making Reinforcement Learning (RL) +1

Peer Learning: Learning Complex Policies in Groups from Scratch via Action Recommendations

1 code implementation • 15 Dec 2023 Cedric Derstroff, Mattia Cerrato, Jannis Brugger, Jan Peters, Stefan Kramer

Finally, we analyze the learning behavior of the peers and observe their ability to rank the agents' performance within the study group and to understand which agents give reliable advice.

OpenAI Gym reinforcement-learning +1

Model-Based Epistemic Variance of Values for Risk-Aware Policy Optimization

no code implementations • 7 Dec 2023 Carlos E. Luis, Alessandro G. Bottero, Julia Vinogradska, Felix Berkenkamp, Jan Peters

We propose a new UBE whose solution converges to the true posterior variance over values and leads to lower regret in tabular exploration problems.

Model-based Reinforcement Learning Offline RL

Evetac: An Event-based Optical Tactile Sensor for Robotic Manipulation

no code implementations • 2 Dec 2023 Niklas Funk, Erik Helmut, Georgia Chalvatzaki, Roberto Calandra, Jan Peters

To overcome this shortcoming, we study the idea of replacing the RGB camera with an event-based camera and introduce a new event-based optical tactile sensor called Evetac.

Benchmarking

Learning Multimodal Latent Dynamics for Human-Robot Interaction

no code implementations • 27 Nov 2023 Vignesh Prasad, Lea Heitlinger, Dorothea Koert, Ruth Stock-Homburg, Jan Peters, Georgia Chalvatzaki

The generated robot motions are further adapted with Inverse Kinematics to ensure the desired physical proximity with a human, combining the ease of joint space learning and accurate task space reachability.

Multi-Task Reinforcement Learning with Mixture of Orthogonal Experts

1 code implementation • 19 Nov 2023 Ahmed Hendawy, Jan Peters, Carlo D'Eramo

Multi-Task Reinforcement Learning (MTRL) tackles the long-standing problem of endowing agents with skills that generalize across a variety of problems.

Diversity reinforcement-learning +2

Towards Transferring Tactile-based Continuous Force Control Policies from Simulation to Robot

no code implementations • 13 Nov 2023 Luca Lach, Robert Haschke, Davide Tateo, Jan Peters, Helge Ritter, Júlia Borràs, Carme Torras

The advent of tactile sensors in robotics has sparked many ideas on how robots can leverage direct contact measurements of their environment interactions to improve manipulation tasks.

Deep Reinforcement Learning Inductive Bias

Time-Efficient Reinforcement Learning with Stochastic Stateful Policies

no code implementations • 7 Nov 2023 Firas Al-Hafez, Guoping Zhao, Jan Peters, Davide Tateo

Stateful policies play an important role in reinforcement learning, such as handling partially observable environments, enhancing robustness, or imposing an inductive bias directly into the policy structure.

continuous-control Continuous Control +4

Robust Adversarial Reinforcement Learning via Bounded Rationality Curricula

no code implementations • 3 Nov 2023 Aryaman Reddi, Maximilian Tölle, Jan Peters, Georgia Chalvatzaki, Carlo D'Eramo

To this end, Robust Adversarial Reinforcement Learning (RARL) trains a protagonist against destabilizing forces exercised by an adversary in a competitive zero-sum Markov game, whose optimal solution, i.e., rational strategy, corresponds to a Nash equilibrium.

MuJoCo reinforcement-learning +2

Domain Randomization via Entropy Maximization

no code implementations • 3 Nov 2023 Gabriele Tiboni, Pascal Klink, Jan Peters, Tatiana Tommasi, Carlo D'Eramo, Georgia Chalvatzaki

Varying dynamics parameters in simulation is a popular Domain Randomization (DR) approach for overcoming the reality gap in Reinforcement Learning (RL).

Diversity Reinforcement Learning (RL)

On the Benefit of Optimal Transport for Curriculum Reinforcement Learning

no code implementations • 25 Sep 2023 Pascal Klink, Carlo D'Eramo, Jan Peters, Joni Pajarinen

In this work, we focus on framing curricula as interpolations between task distributions, which has previously been shown to be a viable approach to CRL.

reinforcement-learning Reinforcement Learning

Sampling-Free Probabilistic Deep State-Space Models

no code implementations • 15 Sep 2023 Andreas Look, Melih Kandemir, Barbara Rakitsch, Jan Peters

Many real-world dynamical systems can be described as State-Space Models (SSMs).

State Space Models

Value-Distributional Model-Based Reinforcement Learning

1 code implementation • 12 Aug 2023 Carlos E. Luis, Alessandro G. Bottero, Julia Vinogradska, Felix Berkenkamp, Jan Peters

Quantifying uncertainty about a policy's long-term performance is important to solve sequential decision-making tasks.

continuous-control Continuous Control +8

Function-Space Regularization for Deep Bayesian Classification

no code implementations • 12 Jul 2023 Jihao Andreas Lin, Joe Watson, Pascal Klink, Jan Peters

Bayesian deep learning approaches assume model parameters to be latent random variables and infer posterior distributions to quantify uncertainty, increase safety and trust, and prevent overconfident and unpredictable behavior.

Adversarial Robustness Classification +3

Cheap and Deterministic Inference for Deep State-Space Models of Interacting Dynamical Systems

1 code implementation • 2 May 2023 Andreas Look, Melih Kandemir, Barbara Rakitsch, Jan Peters

Furthermore, we propose structured approximations to the covariance matrices of the Gaussian components in order to scale up to systems with many agents.

Autonomous Driving State Space Models

Model Predictive Control with Gaussian-Process-Supported Dynamical Constraints for Autonomous Vehicles

no code implementations • 8 Mar 2023 Johanna Bethge, Maik Pfefferkorn, Alexander Rose, Jan Peters, Rolf Findeisen

We propose a model predictive control approach for autonomous vehicles that exploits learned Gaussian processes for predicting human driving behavior.

Autonomous Vehicles Gaussian Processes +1

Diminishing Return of Value Expansion Methods in Model-Based Reinforcement Learning

1 code implementation • 7 Mar 2023 Daniel Palenicek, Michael Lutter, Joao Carvalho, Jan Peters

Therefore, we conclude that the limitation of model-based value expansion methods is not the model accuracy of the learned models.

continuous-control Continuous Control +3

LS-IQ: Implicit Reward Regularization for Inverse Reinforcement Learning

1 code implementation • 1 Mar 2023 Firas Al-Hafez, Davide Tateo, Oleg Arenz, Guoping Zhao, Jan Peters

Recent methods for imitation learning directly learn a $Q$-function using an implicit reward formulation rather than an explicit reward function.

Continuous Control Imitation Learning +5
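The implicit-reward idea in the entry above can be stated concretely: given a learned Q-function and the value of the successor state, the inverse Bellman operator recovers the reward. A minimal numerical sketch in generic notation (not the paper's code; the values are invented):

```python
import numpy as np

def implicit_reward(q_sa, v_next, gamma=0.9):
    """Inverse Bellman operator: r(s, a) = Q(s, a) - gamma * V(s')."""
    return q_sa - gamma * v_next

q_sa = np.array([1.0, 0.5, 2.0])     # learned Q(s, a) on three transitions
v_next = np.array([0.8, 0.2, 1.5])   # V(s') at the successor states
rewards = implicit_reward(q_sa, v_next)   # ≈ [0.28, 0.32, 0.65]
```

Methods in this family then regularize this implicit reward rather than learning an explicit reward network, which is where formulations like LS-IQ differ from one another.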

A Human-Centered Safe Robot Reinforcement Learning Framework with Interactive Behaviors

no code implementations • 25 Feb 2023 Shangding Gu, Alap Kshirsagar, Yali Du, Guang Chen, Jan Peters, Alois Knoll

Deployment of Reinforcement Learning (RL) algorithms for robotics applications in the real world requires ensuring the safety of the robot and its environment.

reinforcement-learning Reinforcement Learning (RL) +1

Model-Based Uncertainty in Value Functions

1 code implementation • 24 Feb 2023 Carlos E. Luis, Alessandro G. Bottero, Julia Vinogradska, Felix Berkenkamp, Jan Peters

We consider the problem of quantifying uncertainty over expected cumulative rewards in model-based reinforcement learning.

continuous-control Continuous Control +6

Fast Kinodynamic Planning on the Constraint Manifold with Deep Neural Networks

1 code implementation • 11 Jan 2023 Piotr Kicki, Puze Liu, Davide Tateo, Haitham Bou-Ammar, Krzysztof Walas, Piotr Skrzypczyński, Jan Peters

Motion planning is a mature area of research in robotics with many well-established methods based on optimization or sampling the state space, suitable for solving kinematic motion planning.

Motion Planning

Information-Theoretic Safe Exploration with Gaussian Processes

1 code implementation • 9 Dec 2022 Alessandro G. Bottero, Carlos E. Luis, Julia Vinogradska, Felix Berkenkamp, Jan Peters

We consider a sequential decision making task where we are not allowed to evaluate parameters that violate an a priori unknown (safety) constraint.

Decision Making Gaussian Processes +2

Hierarchical Policy Blending As Optimal Transport

no code implementations • 4 Dec 2022 An T. Le, Kay Hansel, Jan Peters, Georgia Chalvatzaki

We present hierarchical policy blending as optimal transport (HiPBOT).

PAC-Bayes Bounds for Bandit Problems: A Survey and Experimental Comparison

no code implementations • 29 Nov 2022 Hamish Flynn, David Reeb, Melih Kandemir, Jan Peters

On the one hand, we found that PAC-Bayes bounds are a useful tool for designing offline bandit algorithms with performance guarantees.

Decision Making

How Crucial is Transformer in Decision Transformer?

1 code implementation • 26 Nov 2022 Max Siebenborn, Boris Belousov, Junning Huang, Jan Peters

On the other hand, the proposed Decision LSTM is able to achieve expert-level performance on these tasks, in addition to learning a swing-up controller on the real system.

continuous-control Continuous Control +1

Variational Hierarchical Mixtures for Probabilistic Learning of Inverse Dynamics

no code implementations • 2 Nov 2022 Hany Abdulsamad, Peter Nickl, Pascal Klink, Jan Peters

We derive two efficient variational inference techniques to learn these representations and highlight the advantages of hierarchical infinite local regression models, such as dealing with non-smooth functions, mitigating catastrophic forgetting, and enabling parameter sharing and fast predictions.

regression Variational Inference

Active Exploration for Robotic Manipulation

no code implementations • 23 Oct 2022 Tim Schneider, Boris Belousov, Georgia Chalvatzaki, Diego Romeres, Devesh K. Jha, Jan Peters

Robotic manipulation stands as a largely unsolved problem despite significant advances in robotics and machine learning in recent years.

Model-based Reinforcement Learning Model Predictive Control

MILD: Multimodal Interactive Latent Dynamics for Learning Human-Robot Interaction

no code implementations • 22 Oct 2022 Vignesh Prasad, Dorothea Koert, Ruth Stock-Homburg, Jan Peters, Georgia Chalvatzaki

Modeling interaction dynamics to generate robot trajectories that enable a robot to adapt and react to a human's actions and intentions is critical for efficient and effective collaborative Human-Robot Interactions (HRI).

Representation Learning

Hierarchical Policy Blending as Inference for Reactive Robot Control

no code implementations • 14 Oct 2022 Kay Hansel, Julen Urain, Jan Peters, Georgia Chalvatzaki

To combine the benefits of reactive policies and planning, we propose a hierarchical motion generation method.

Decision Making Motion Generation +1

Inferring Smooth Control: Monte Carlo Posterior Policy Iteration with Gaussian Processes

1 code implementation • 7 Oct 2022 Joe Watson, Jan Peters

Monte Carlo methods have become increasingly relevant for control of non-differentiable systems, approximate dynamics models and learning from data.

Gaussian Processes Model Predictive Control +1

Safe Reinforcement Learning of Dynamic High-Dimensional Robotic Tasks: Navigation, Manipulation, Interaction

no code implementations • 27 Sep 2022 Puze Liu, Kuo Zhang, Davide Tateo, Snehal Jauhri, Zhiyuan Hu, Jan Peters, Georgia Chalvatzaki

Our proposed approach achieves state-of-the-art performance in simulated high-dimensional and dynamic tasks while avoiding collisions with the environment.

reinforcement-learning Reinforcement Learning +3

Self-supervised Sequential Information Bottleneck for Robust Exploration in Deep Reinforcement Learning

no code implementations • 12 Sep 2022 Bang You, Jingming Xie, Youping Chen, Jan Peters, Oleg Arenz

Recent works based on state-visitation counts, curiosity and entropy-maximization generate intrinsic reward signals to motivate the agent to visit novel states for exploration.

Deep Reinforcement Learning Efficient Exploration +4

SE(3)-DiffusionFields: Learning smooth cost functions for joint grasp and motion optimization through diffusion

1 code implementation • 8 Sep 2022 Julen Urain, Niklas Funk, Jan Peters, Georgia Chalvatzaki

In this work, we focus on learning SE(3) diffusion models for 6DoF grasping, giving rise to a novel framework for joint grasp and motion optimization without needing to decouple grasp selection from trajectory generation.

Motion Planning Robot Manipulation

Active Inference for Robotic Manipulation

no code implementations • 1 Jun 2022 Tim Schneider, Boris Belousov, Hany Abdulsamad, Jan Peters

Robotic manipulation stands as a largely unsolved problem despite significant advances in robotics and machine learning in recent decades.

Learning Implicit Priors for Motion Optimization

no code implementations • 11 Apr 2022 Julen Urain, An T. Le, Alexander Lambert, Georgia Chalvatzaki, Byron Boots, Jan Peters

In this paper, we focus on the problem of integrating Energy-based Models (EBM) as guiding priors for motion optimization.

Robot Navigation

Revisiting Model-based Value Expansion

no code implementations • 28 Mar 2022 Daniel Palenicek, Michael Lutter, Jan Peters

Model-based value expansion methods promise to improve the quality of value function targets and, thereby, the effectiveness of value function learning.

model Model-based Reinforcement Learning

Accelerating Integrated Task and Motion Planning with Neural Feasibility Checking

no code implementations • 20 Mar 2022 Lei Xu, Tianyu Ren, Georgia Chalvatzaki, Jan Peters

Task and Motion Planning (TAMP) provides a hierarchical framework that handles the sequential nature of manipulation tasks by interleaving a symbolic task planner, which generates a possible action sequence, with a motion planner that checks kinematic feasibility in the geometric world and generates robot trajectories if several constraints are satisfied, e.g., a collision-free trajectory from one state to another.

Motion Planning Task and Motion Planning
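Schematically, the interleaving described in this entry is a generate-and-test loop: the symbolic planner proposes candidate action sequences and the motion planner vetoes geometrically infeasible ones. A toy sketch, with all action names and the feasibility rule invented for illustration:

```python
def task_planner():
    # Stand-in for a symbolic (e.g. PDDL-style) task planner that
    # enumerates candidate action sequences.
    yield ["push(A)", "pick(A)", "place(A, table)"]
    yield ["pick(A)", "place(A, table)"]

def motion_feasible(action):
    # Stand-in for the kinematic/collision check of a single action;
    # here, pushes are arbitrarily declared infeasible.
    return not action.startswith("push")

def plan():
    # Accept the first sequence whose every step passes the geometric check.
    for candidate in task_planner():
        if all(motion_feasible(a) for a in candidate):
            return candidate
    return None

print(plan())  # ['pick(A)', 'place(A, table)']
```

The paper's contribution sits inside this loop: a learned feasibility predictor prunes candidates before invoking the expensive motion planner.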

Dimensionality Reduction and Prioritized Exploration for Policy Search

no code implementations • 9 Mar 2022 Marius Memmel, Puze Liu, Davide Tateo, Jan Peters

Black-box policy optimization is a class of reinforcement learning algorithms that explores and updates the policies at the parameter level.

Dimensionality Reduction

An Analysis of Measure-Valued Derivatives for Policy Gradients

no code implementations • 8 Mar 2022 Joao Carvalho, Jan Peters

This estimator is unbiased, has low variance, and can be used with differentiable and non-differentiable function approximators.

An Adaptive Human Driver Model for Realistic Race Car Simulations

no code implementations • 3 Mar 2022 Stefan Löckel, Siwei Ju, Maximilian Schaller, Peter van Vliet, Jan Peters

This work contributes to a better understanding and modeling of the human driver, aiming to expedite simulation methods in the modern vehicle development process and potentially supporting automated driving and racing technologies.

Imitation Learning

Integrating Contrastive Learning with Dynamic Models for Reinforcement Learning from Images

1 code implementation • 2 Mar 2022 Bang You, Oleg Arenz, Youping Chen, Jan Peters

Recent methods for reinforcement learning from images use auxiliary tasks to learn image features that are used by the agent's policy or Q-function.

Contrastive Learning Data Augmentation +3

A Unified Perspective on Value Backup and Exploration in Monte-Carlo Tree Search

no code implementations • 11 Feb 2022 Tuan Dam, Carlo D'Eramo, Jan Peters, Joni Pajarinen

In this work, we propose two methods for improving the convergence rate and exploration based on a newly introduced backup operator and entropy regularization.

Atari Games Decision Making +2

Distilled Domain Randomization

no code implementations • 6 Dec 2021 Julien Brosseit, Benedikt Hahner, Fabio Muratore, Michael Gienger, Jan Peters

However, these methods are notorious for the enormous amount of required training data which is prohibitively expensive to collect on real robots.

Deep Reinforcement Learning reinforcement-learning +1

Robot Learning from Randomized Simulations: A Review

no code implementations • 1 Nov 2021 Fabio Muratore, Fabio Ramos, Greg Turk, Wenhao Yu, Michael Gienger, Jan Peters

The rise of deep learning has caused a paradigm shift in robotics research, favoring methods that require large amounts of data.

Learning Stable Vector Fields on Lie Groups

no code implementations • 22 Oct 2021 Julen Urain, Davide Tateo, Jan Peters

Learning robot motions from demonstration requires models able to specify vector fields for the full robot pose when the task is defined in operational space.

Motion Generation

Continuous-Time Fitted Value Iteration for Robust Policies

1 code implementation • 5 Oct 2021 Michael Lutter, Boris Belousov, Shie Mannor, Dieter Fox, Animesh Garg, Jan Peters

Especially for continuous control, solving this differential equation, and its extension the Hamilton-Jacobi-Isaacs equation, is important, as it yields the optimal policy that achieves the maximum reward on a given task.

continuous-control Continuous Control +1
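For context, the differential equation this entry refers to is the Hamilton-Jacobi-Bellman (HJB) equation; in one standard discounted, infinite-horizon form (generic notation, not necessarily the paper's):

```latex
% HJB equation for dynamics \dot{x} = f(x, u), reward r, discount rate \rho:
\rho V(x) = \max_{u} \left[ r(x, u) + \nabla_x V(x)^{\top} f(x, u) \right]
% Its robust extension, the Hamilton-Jacobi-Isaacs equation, adds an
% adversarial disturbance \xi acting through the dynamics f(x, u, \xi):
\rho V(x) = \max_{u} \min_{\xi} \left[ r(x, u) + \nabla_x V(x)^{\top} f(x, u, \xi) \right]
```

Fitted value iteration approximates V with a function approximator and solves the inner maximization (and, for the Isaacs case, the minimization) numerically.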

Combining Physics and Deep Learning to learn Continuous-Time Dynamics Models

1 code implementation • 5 Oct 2021 Michael Lutter, Jan Peters

Especially for learning dynamics models, these black-box models are not desirable as the underlying principles are well understood and the standard deep networks can learn dynamics that violate these principles.

Metrics Matter: A Closer Look on Self-Paced Reinforcement Learning

no code implementations • 29 Sep 2021 Pascal Klink, Haoyi Yang, Jan Peters, Joni Pajarinen

Experiments demonstrate that the resulting introduction of metric structure into the curriculum allows for a well-behaving non-parametric version of SPRL that leads to stable learning performance across tasks.

reinforcement-learning Reinforcement Learning +1

Function-Space Variational Inference for Deep Bayesian Classification

no code implementations • 29 Sep 2021 Jihao Andreas Lin, Joe Watson, Pascal Klink, Jan Peters

Bayesian deep learning approaches assume model parameters to be latent random variables and infer posterior predictive distributions to quantify uncertainty, increase safety and trust, and prevent overconfident and unpredictable behavior.

Adversarial Robustness Classification +3

Boosted Curriculum Reinforcement Learning

no code implementations ICLR 2022 Pascal Klink, Carlo D'Eramo, Jan Peters, Joni Pajarinen

This approach, which we refer to as boosted curriculum reinforcement learning (BCRL), has the benefit of naturally increasing the representativeness of the functional space by adding a new residual each time a new task is presented.

reinforcement-learning Reinforcement Learning +1

An Empirical Analysis of Measure-Valued Derivatives for Policy Gradients

1 code implementation • 20 Jul 2021 João Carvalho, Davide Tateo, Fabio Muratore, Jan Peters

This estimator is unbiased, has low variance, and can be used with differentiable and non-differentiable function approximators.

Exploration via Empowerment Gain: Combining Novelty, Surprise and Learning Progress

no code implementations ICML Workshop URL 2021 Philip Becker-Ehmck, Maximilian Karl, Jan Peters, Patrick van der Smagt

We show that while such an agent is still novelty seeking, i.e., interested in exploring the whole state space, it focuses on exploration where its perceived influence is greater, avoiding areas of greater stochasticity or traps that limit its control.

Robust Value Iteration for Continuous Control Tasks

1 code implementation25 May 2021 Michael Lutter, Shie Mannor, Jan Peters, Dieter Fox, Animesh Garg

The adversarial perturbations encourage an optimal policy that is robust to changes in the dynamics.

continuous-control Continuous Control +3

Evolutionary Training and Abstraction Yields Algorithmic Generalization of Neural Computers

no code implementations17 May 2021 Daniel Tanneberg, Elmar Rueckert, Jan Peters

A key feature of intelligent behaviour is the ability to learn abstract strategies that scale and transfer to unfamiliar problems.

Value Iteration in Continuous Actions, States and Time

1 code implementation10 May 2021 Michael Lutter, Shie Mannor, Jan Peters, Dieter Fox, Animesh Garg

This algorithm enables dynamic programming for continuous states and actions with a known dynamics model.

Deep Reinforcement Learning
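The excerpt above describes dynamic programming with a known dynamics model. As a minimal, hypothetical illustration of the underlying Bellman backup (a tabular toy chain MDP, not the paper's continuous-state algorithm):

```python
import numpy as np

# Tabular value iteration on a toy 5-state chain with known dynamics.
# This only illustrates the dynamic-programming backup; the paper's
# contribution is extending it to continuous states, actions, and time.
n_states, n_actions, gamma = 5, 2, 0.9

# Deterministic transition model P[s, a] -> next state.
P = np.zeros((n_states, n_actions), dtype=int)
for s in range(n_states):
    P[s, 0] = max(s - 1, 0)              # action 0: step left
    P[s, 1] = min(s + 1, n_states - 1)   # action 1: step right

# Reward 1.0 whenever the transition lands in the rightmost state.
R = (P == n_states - 1).astype(float)

V = np.zeros(n_states)
for _ in range(100):
    V = np.max(R + gamma * V[P], axis=1)  # Bellman optimality backup

print(np.round(V, 2))  # values increase toward the rewarding state
```

With this reward structure the values converge to roughly (7.29, 8.1, 9.0, 10.0, 10.0): each step away from the goal discounts the achievable return by a factor of 0.9.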

Reinforcement Learning using Guided Observability

no code implementations22 Apr 2021 Stephan Weigand, Pascal Klink, Jan Peters, Joni Pajarinen

Due to recent breakthroughs, reinforcement learning (RL) has demonstrated impressive performance in challenging sequential decision-making problems.

Decision Making MuJoCo +6

Distributionally Robust Trajectory Optimization Under Uncertain Dynamics via Relative Entropy Trust-Regions

no code implementations29 Mar 2021 Hany Abdulsamad, Tim Dorau, Boris Belousov, Jia-Jie Zhu, Jan Peters

Trajectory optimization and model predictive control are essential techniques underpinning advanced robotic applications, ranging from autonomous driving to full-body humanoid control.

Autonomous Driving Humanoid Control +1

SKID RAW: Skill Discovery from Raw Trajectories

no code implementations26 Mar 2021 Daniel Tanneberg, Kai Ploeger, Elmar Rueckert, Jan Peters

Integrating robots in complex everyday environments requires a multitude of problems to be solved.

Variational Inference

Model Predictive Actor-Critic: Accelerating Robot Skill Acquisition with Deep Reinforcement Learning

1 code implementation25 Mar 2021 Andrew S. Morgan, Daljeet Nandha, Georgia Chalvatzaki, Carlo D'Eramo, Aaron M. Dollar, Jan Peters

Substantial advancements to model-based reinforcement learning algorithms have been impeded by the model-bias induced by the collected data, which generally hurts performance.

Deep Reinforcement Learning Model-based Reinforcement Learning +3

Advancing Trajectory Optimization with Approximate Inference: Exploration, Covariance Control and Adaptive Risk

1 code implementation10 Mar 2021 Joe Watson, Jan Peters

Discrete-time stochastic optimal control remains a challenging problem for general, nonlinear systems under significant uncertainty, with practical solvers typically relying on the certainty equivalence assumption, replanning and/or extensive regularization.

Extended Tree Search for Robot Task and Motion Planning

1 code implementation9 Mar 2021 Tianyu Ren, Georgia Chalvatzaki, Jan Peters

Moreover, we effectively combine this skeleton space with the resultant motion variable spaces into a single extended decision space.

Decision Making Motion Planning +2

A Probabilistic Interpretation of Self-Paced Learning with Applications to Reinforcement Learning

1 code implementation25 Feb 2021 Pascal Klink, Hany Abdulsamad, Boris Belousov, Carlo D'Eramo, Jan Peters, Joni Pajarinen

Across machine learning, the use of curricula has shown strong empirical potential to improve learning from data by avoiding local optima of training objectives.

reinforcement-learning Reinforcement Learning (RL)

Perspectives on Sim2Real Transfer for Robotics: A Summary of the R:SS 2020 Workshop

no code implementations7 Dec 2020 Sebastian Höfer, Kostas Bekris, Ankur Handa, Juan Camilo Gamboa, Florian Golemo, Melissa Mozifian, Chris Atkeson, Dieter Fox, Ken Goldberg, John Leonard, C. Karen Liu, Jan Peters, Shuran Song, Peter Welinder, Martha White

This report presents the debates, posters, and discussions of the Sim2Real workshop held in conjunction with the 2020 edition of the "Robotics: Science and System" conference.

Convex Optimization with an Interpolation-based Projection and its Application to Deep Learning

no code implementations13 Nov 2020 Riad Akrour, Asma Atamna, Jan Peters

We then propose an optimization algorithm that follows the gradient of the composition of the objective and the projection and prove its convergence for linear objectives and arbitrary convex and Lipschitz domain defining inequality constraints.

A Variational Infinite Mixture for Probabilistic Inverse Dynamics Learning

1 code implementation10 Nov 2020 Hany Abdulsamad, Peter Nickl, Pascal Klink, Jan Peters

Probabilistic regression techniques in control and robotics applications have to fulfill different criteria of data-driven adaptability, computational efficiency, scalability to high dimensions, and the capacity to deal with different modalities in the data.

Computational Efficiency

Batch Reinforcement Learning with a Nonparametric Off-Policy Policy Gradient

no code implementations27 Oct 2020 Samuele Tosatto, João Carvalho, Jan Peters

Off-policy Reinforcement Learning (RL) holds the promise of better data efficiency as it allows sample reuse and potentially enables safe interaction with the environment.

Policy Gradient Methods reinforcement-learning +2

High Acceleration Reinforcement Learning for Real-World Juggling with Binary Rewards

no code implementations26 Oct 2020 Kai Ploeger, Michael Lutter, Jan Peters

Robots that can learn in the physical world will be important to enable robots to escape their stiff and pre-programmed movements.

reinforcement-learning Reinforcement Learning (RL) +1

ImitationFlow: Learning Deep Stable Stochastic Dynamic Systems by Normalizing Flows

no code implementations25 Oct 2020 Julen Urain, Michelle Ginesi, Davide Tateo, Jan Peters

We introduce ImitationFlow, a novel deep generative model that allows learning complex globally stable, stochastic, nonlinear dynamics.

A Differentiable Newton Euler Algorithm for Multi-body Model Learning

no code implementations19 Oct 2020 Michael Lutter, Johannes Silberbauer, Joe Watson, Jan Peters

In this work, we examine a spectrum of hybrid models for the domain of multi-body robot dynamics.

Differentiable Implicit Layers

no code implementations14 Oct 2020 Andreas Look, Simona Doneva, Melih Kandemir, Rainer Gemulla, Jan Peters

In this paper, we introduce an efficient backpropagation scheme for non-constrained implicit functions.

Model Predictive Control

Active Inference or Control as Inference? A Unifying View

no code implementations1 Oct 2020 Joe Watson, Abraham Imohiosen, Jan Peters

Active inference (AI) is a persuasive theoretical framework from computational neuroscience that seeks to describe action and perception as inference-based computation.

Uncertainty Quantification

Model-Based Quality-Diversity Search for Efficient Robot Learning

no code implementations11 Aug 2020 Leon Keller, Daniel Tanneberg, Svenja Stark, Jan Peters

One approach that was recently used to autonomously generate a repertoire of diverse skills is a novelty-based Quality-Diversity (QD) algorithm.

Diversity Evolutionary Algorithms

Multi-Sensor Next-Best-View Planning as Matroid-Constrained Submodular Maximization

no code implementations4 Jul 2020 Mikko Lauri, Joni Pajarinen, Jan Peters, Simone Frintrop

We consider the problem of creating a 3D model using depth images captured by a team of multiple robots.

Convex Regularization in Monte-Carlo Tree Search

no code implementations1 Jul 2020 Tuan Dam, Carlo D'Eramo, Jan Peters, Joni Pajarinen

Monte-Carlo planning and Reinforcement Learning (RL) are essential to sequential decision making.

Atari Games Decision Making +2

A Deterministic Approximation to Neural SDEs

no code implementations16 Jun 2020 Andreas Look, Melih Kandemir, Barbara Rakitsch, Jan Peters

Our deterministic approximation of the transition kernel is applicable to both training and prediction.

Time Series Analysis Uncertainty Quantification +1

Learning to Play Table Tennis From Scratch using Muscular Robots

no code implementations10 Jun 2020 Dieter Büchler, Simon Guist, Roberto Calandra, Vincent Berenz, Bernhard Schölkopf, Jan Peters

This work is the first to (a) achieve fail-safe learning of a safety-critical dynamic task using anthropomorphic robot arms, (b) learn a precision-demanding problem with a PAM-driven system despite the control challenges, and (c) train robots to play table tennis without real balls.

reinforcement-learning Reinforcement Learning (RL)

Continuous Action Reinforcement Learning from a Mixture of Interpretable Experts

1 code implementation10 Jun 2020 Riad Akrour, Davide Tateo, Jan Peters

Reinforcement learning (RL) has demonstrated its ability to solve high dimensional tasks by leveraging non-linear function approximators.

reinforcement-learning Reinforcement Learning +1

Orientation Attentive Robotic Grasp Synthesis with Augmented Grasp Map Representation

1 code implementation9 Jun 2020 Georgia Chalvatzaki, Nikolaos Gkanatsios, Petros Maragos, Jan Peters

Inherent morphological characteristics in objects may offer a wide range of plausible grasping orientations that obfuscates the visual learning of robotic grasping.

Grasp Generation Robotic Grasping

Self-Paced Deep Reinforcement Learning

1 code implementation NeurIPS 2020 Pascal Klink, Carlo D'Eramo, Jan Peters, Joni Pajarinen

Curriculum reinforcement learning (CRL) improves the learning speed and stability of an agent by exposing it to a tailored series of tasks throughout learning.

Deep Reinforcement Learning Open-Ended Question Answering +2

Deep Reinforcement Learning with Weighted Q-Learning

no code implementations20 Mar 2020 Andrea Cini, Carlo D'Eramo, Jan Peters, Cesare Alippi

In this regard, Weighted Q-Learning (WQL) effectively reduces bias and shows remarkable results in stochastic environments.

Deep Reinforcement Learning Gaussian Processes +4
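The weighted estimator behind WQL can be sketched outside of Q-learning: each sample mean is weighted by the probability that it is the true maximum. The weights are estimated here by plain Monte Carlo for simplicity (the paper uses a Gaussian analytic approximation), so this is an illustrative sketch, not the full algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def weighted_max_estimate(means, stds, n_samples=10_000):
    """Weighted estimator of max_i E[X_i]: sum_i w_i * means[i], where
    w_i approximates the probability that arm i has the largest mean
    under a Gaussian model of the estimation noise."""
    samples = rng.normal(means, stds, size=(n_samples, len(means)))
    w = np.bincount(samples.argmax(axis=1), minlength=len(means)) / n_samples
    return float(w @ np.asarray(means))

# Ten arms whose true means are all 0; the sample means are noisy.
noisy_means = rng.normal(0.0, 1.0, size=10)
print(max(noisy_means))                                 # max operator: positively biased
print(weighted_max_estimate(noisy_means, np.ones(10)))  # convex combination: never above the max
```

Because the weights are non-negative and sum to one, the weighted estimate is a convex combination of the sample means, which is what tempers the overestimation bias of the plain max operator.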

Learning to Fly via Deep Model-Based Reinforcement Learning

1 code implementation19 Mar 2020 Philip Becker-Ehmck, Maximilian Karl, Jan Peters, Patrick van der Smagt

Learning to control robots without requiring engineered models has been a long-term goal, promising diverse and novel applications.

Model-based Reinforcement Learning reinforcement-learning +2

Deep Adversarial Reinforcement Learning for Object Disentangling

no code implementations8 Mar 2020 Melvin Laux, Oleg Arenz, Jan Peters, Joni Pajarinen

The ARL framework utilizes an adversary, which is trained to steer the original agent, the protagonist, to challenging states.

Object reinforcement-learning +2

Data-efficient Domain Randomization with Bayesian Optimization

no code implementations5 Mar 2020 Fabio Muratore, Christian Eilers, Michael Gienger, Jan Peters

Domain randomization methods tackle this problem by randomizing the physics simulator (source domain) during training according to a distribution over domain parameters in order to obtain more robust policies that are able to overcome the reality gap.

Bayesian Optimization

Dimensionality Reduction of Movement Primitives in Parameter Space

no code implementations26 Feb 2020 Samuele Tosatto, Jonas Stadtmueller, Jan Peters

The empirical analysis shows that the dimensionality reduction in parameter space is more effective than in configuration space, as it enables the representation of the movements with a significant reduction of parameters.

Dimensionality Reduction

Differential Equations as a Model Prior for Deep Learning and its Applications in Robotics

no code implementations ICLR Workshop DeepDiffEq 2019 Michael Lutter, Jan Peters

Therefore, differential equations are a promising approach to incorporate prior knowledge in machine learning models to obtain robust and interpretable models.

Metric-Based Imitation Learning Between Two Dissimilar Anthropomorphic Robotic Arms

no code implementations25 Feb 2020 Marcus Ebner von Eschenbach, Binyamin Manela, Jan Peters, Armin Biess

The development of autonomous robotic systems that can learn from human demonstrations to imitate a desired behavior - rather than being manually programmed - has huge technological potential.

Deep Reinforcement Learning Imitation Learning +1

An Upper Bound of the Bias of Nadaraya-Watson Kernel Regression under Lipschitz Assumptions

no code implementations29 Jan 2020 Samuele Tosatto, Riad Akrour, Jan Peters

The Nadaraya-Watson kernel estimator is among the most popular nonparametric regression techniques thanks to its simplicity.

regression valid
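For context, the estimator itself is a kernel-weighted average; the sketch below is a generic textbook implementation with a Gaussian kernel (the names and bandwidth are illustrative, and the paper's contribution is the bias bound, not the estimator):

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_query, bandwidth):
    """Nadaraya-Watson regression: predict each query point as a
    kernel-weighted average of the training targets."""
    d = x_query[:, None] - x_train[None, :]
    w = np.exp(-0.5 * (d / bandwidth) ** 2)   # Gaussian kernel weights
    return (w @ y_train) / w.sum(axis=1)

# Smoothing introduces exactly the kind of bias the paper bounds:
# near the peak of sin(x) the estimate is pulled slightly below 1.
x = np.linspace(0.0, np.pi, 50)
y_hat = nadaraya_watson(x, np.sin(x), np.array([np.pi / 2]), bandwidth=0.2)
print(y_hat)
```

Shrinking the bandwidth reduces this smoothing bias at the cost of higher variance, which is the trade-off such bias bounds help quantify.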

A Probabilistic Framework for Imitating Human Race Driver Behavior

no code implementations22 Jan 2020 Stefan Löckel, Jan Peters, Peter van Vliet

To approach this problem, we propose Probabilistic Modeling of Driver behavior (ProMoD), a modular framework which splits the task of driver behavior modeling into multiple modules.

Car Racing Imitation Learning

A Nonparametric Off-Policy Policy Gradient

1 code implementation8 Jan 2020 Samuele Tosatto, Joao Carvalho, Hany Abdulsamad, Jan Peters

Reinforcement learning (RL) algorithms still suffer from high sample complexity despite outstanding recent successes.

Density Estimation Policy Gradient Methods +2

MushroomRL: Simplifying Reinforcement Learning Research

2 code implementations4 Jan 2020 Carlo D'Eramo, Davide Tateo, Andrea Bonarini, Marcello Restelli, Jan Peters

MushroomRL is an open-source Python library developed to simplify the process of implementing and running Reinforcement Learning (RL) experiments.

reinforcement-learning Reinforcement Learning +1

Learning Human Postural Control with Hierarchical Acquisition Functions

no code implementations ICLR 2020 Nils Rottmann, Tjasa Kunavar, Jan Babic, Jan Peters, Elmar Rueckert

In order to reach similar performance, we developed a hierarchical Bayesian optimization algorithm that replicates the cognitive inference and memorization process for avoiding failures in motor control tasks.

Bayesian Optimization Memorization

Long-Term Visitation Value for Deep Exploration in Sparse Reward Reinforcement Learning

1 code implementation1 Jan 2020 Simone Parisi, Davide Tateo, Maximilian Hensel, Carlo D'Eramo, Jan Peters, Joni Pajarinen

Empirical results on classic and novel benchmarks show that the proposed approach outperforms existing methods in environments with sparse rewards, especially in the presence of rewards that create suboptimal modes of the objective function.

Benchmarking reinforcement-learning +2

Generalized Mean Estimation in Monte-Carlo Tree Search

no code implementations1 Nov 2019 Tuan Dam, Pascal Klink, Carlo D'Eramo, Jan Peters, Joni Pajarinen

Finally, we empirically demonstrate the effectiveness of our method in well-known MDP and POMDP benchmarks, showing significant improvement in performance and convergence speed w.r.t.
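The generalized (power) mean that gives the paper its title interpolates between the average and the maximum of a set of value estimates. A hypothetical standalone sketch of the estimator, without the tree-search integration:

```python
import numpy as np

def power_mean(values, p):
    """Power mean: equals the arithmetic mean at p = 1 and approaches
    max(values) as p grows (for non-negative values)."""
    v = np.asarray(values, dtype=float)
    return float(np.mean(v ** p) ** (1.0 / p))

q_values = [0.2, 0.5, 1.0]
print(power_mean(q_values, 1))    # plain average, ~0.567
print(power_mean(q_values, 50))   # ~0.98, close to max(q_values)
```

In an MCTS backup, the exponent p acts as a knob between the averaging backup of standard UCT and a hard maximum over child values.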

Receding Horizon Curiosity

1 code implementation8 Oct 2019 Matthias Schultheis, Boris Belousov, Hany Abdulsamad, Jan Peters

Sample-efficient exploration is crucial not only for discovering rewarding experiences but also for adapting to environment changes in a task-agnostic fashion.

Efficient Exploration Experimental Design +1