no code implementations • 27 Nov 2023 • Vignesh Prasad, Lea Heitlinger, Dorothea Koert, Ruth Stock-Homburg, Jan Peters, Georgia Chalvatzaki
The generated robot motions are further adapted with Inverse Kinematics to ensure the desired physical proximity with a human, combining the ease of joint space learning and accurate task space reachability.
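A minimal sketch of the kind of Jacobian-based correction described above, assuming a generic damped-least-squares IK step; the `jacobian` callable, robot model, and target are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def dls_ik_step(q, jacobian, ee_pos, target_pos, damping=0.1, step=0.5):
    """One damped-least-squares IK step pulling the end effector
    toward a task-space target (e.g., the desired proximity to a human)."""
    J = jacobian(q)                        # 3 x n positional Jacobian at q
    err = target_pos - ee_pos              # task-space position error
    # Damped pseudo-inverse: J^T (J J^T + lambda^2 I)^-1 err
    JJt = J @ J.T + damping**2 * np.eye(3)
    dq = J.T @ np.linalg.solve(JJt, err)
    return q + step * dq
```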
no code implementations • 19 Nov 2023 • Ahmed Hendawy, Jan Peters, Carlo D'Eramo
Multi-Task Reinforcement Learning (MTRL) tackles the long-standing problem of endowing agents with skills that generalize across a variety of problems.
no code implementations • 13 Nov 2023 • Luca Lach, Robert Haschke, Davide Tateo, Jan Peters, Helge Ritter, Júlia Borràs, Carme Torras
The advent of tactile sensors in robotics has sparked many ideas on how robots can leverage direct contact measurements of their environment interactions to improve manipulation tasks.
no code implementations • 7 Nov 2023 • Firas Al-Hafez, Guoping Zhao, Jan Peters, Davide Tateo
Stateful policies play an important role in reinforcement learning, such as handling partially observable environments, enhancing robustness, or imposing an inductive bias directly into the policy structure.
2 code implementations • 4 Nov 2023 • Firas Al-Hafez, Guoping Zhao, Jan Peters, Davide Tateo
Imitation Learning (IL) holds great promise for enabling agile locomotion in embodied agents.
no code implementations • 3 Nov 2023 • Gabriele Tiboni, Pascal Klink, Jan Peters, Tatiana Tommasi, Carlo D'Eramo, Georgia Chalvatzaki
Varying dynamics parameters in simulation is a popular Domain Randomization (DR) approach for overcoming the reality gap in Reinforcement Learning (RL).
no code implementations • 3 Nov 2023 • Aryaman Reddi, Maximilian Tölle, Jan Peters, Georgia Chalvatzaki, Carlo D'Eramo
To this end, Robust Adversarial Reinforcement Learning (RARL) trains a protagonist against destabilizing forces exercised by an adversary in a competitive zero-sum Markov game, whose optimal solution, i.e., rational strategy, corresponds to a Nash equilibrium.
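In this zero-sum formulation, the protagonist policy $\pi$ and adversary policy $\nu$ optimize opposite signs of the same return; one standard way to write the saddle-point objective (our notation, not quoted from the paper) is

```latex
\max_{\pi} \min_{\nu} \; \mathbb{E}_{\pi,\nu}\!\left[ \sum_{t=0}^{\infty} \gamma^{t}\, r\big(s_t, a_t^{p}, a_t^{a}\big) \right],
```

where $a_t^{p} \sim \pi$, $a_t^{a} \sim \nu$, and a Nash equilibrium is a pair of policies from which neither player benefits by deviating unilaterally.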
no code implementations • 25 Sep 2023 • Pascal Klink, Carlo D'Eramo, Jan Peters, Joni Pajarinen
In this work, we focus on framing curricula as interpolations between task distributions, which has previously been shown to be a viable approach to CRL.
no code implementations • 25 Sep 2023 • Pascal Klink, Florian Wolf, Kai Ploeger, Jan Peters, Joni Pajarinen
Reinforcement Learning (RL) allows learning non-trivial robot control laws purely from data.
no code implementations • 15 Sep 2023 • Andreas Look, Melih Kandemir, Barbara Rakitsch, Jan Peters
Many real-world dynamical systems can be described as State-Space Models (SSMs).
no code implementations • 12 Aug 2023 • Carlos E. Luis, Alessandro G. Bottero, Julia Vinogradska, Felix Berkenkamp, Jan Peters
We study the problem from a model-based Bayesian reinforcement learning perspective, where the goal is to learn the posterior distribution over value functions induced by parameter (epistemic) uncertainty of the Markov decision process.
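In this model-based Bayesian view, the object of interest can be written (our notation) as the pushforward of the MDP posterior through the value function:

```latex
p\big(V^{\pi} \mid \mathcal{D}\big) \;=\; \int p\big(V^{\pi} \mid M\big)\, p\big(M \mid \mathcal{D}\big)\, dM,
```

where $M$ denotes the unknown transition and reward parameters of the Markov decision process and $\mathcal{D}$ the observed transitions; epistemic uncertainty about $M$ induces the distribution over value functions.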
no code implementations • 3 Aug 2023 • Joao Carvalho, An T. Le, Mark Baierl, Dorothea Koert, Jan Peters
Learning priors on trajectory distributions can help accelerate robot motion planning optimization.
no code implementations • 12 Jul 2023 • Jihao Andreas Lin, Joe Watson, Pascal Klink, Jan Peters
Bayesian deep learning approaches assume model parameters to be latent random variables and infer posterior distributions to quantify uncertainty, increase safety and trust, and prevent overconfident and unpredictable behavior.
1 code implementation • 2 May 2023 • Andreas Look, Melih Kandemir, Barbara Rakitsch, Jan Peters
Furthermore, we propose structured approximations to the covariance matrices of the Gaussian components in order to scale up to systems with many agents.
no code implementations • 8 Mar 2023 • Johanna Bethge, Maik Pfefferkorn, Alexander Rose, Jan Peters, Rolf Findeisen
We propose a model predictive control approach for autonomous vehicles that exploits learned Gaussian processes for predicting human driving behavior.
1 code implementation • 7 Mar 2023 • Daniel Palenicek, Michael Lutter, Joao Carvalho, Jan Peters
Therefore, we conclude that the limitation of model-based value expansion methods is not the model accuracy of the learned models.
1 code implementation • 1 Mar 2023 • Firas Al-Hafez, Davide Tateo, Oleg Arenz, Guoping Zhao, Jan Peters
Recent methods for imitation learning directly learn a $Q$-function using an implicit reward formulation rather than an explicit reward function.
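In this line of work, the reward is defined implicitly through the inverse Bellman operator rather than learned as a separate function; in standard notation (a sketch of the general construction, not a quote from the paper):

```latex
r(s, a) \;=\; Q(s, a) \;-\; \gamma \, \mathbb{E}_{s' \sim P(\cdot \mid s, a)}\big[ V^{\pi}(s') \big],
\qquad V^{\pi}(s) = \mathbb{E}_{a \sim \pi(\cdot \mid s)}\big[ Q(s, a) \big],
```

so that optimizing over $Q$ simultaneously determines the reward consistent with it.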
no code implementations • 25 Feb 2023 • Shangding Gu, Alap Kshirsagar, Yali Du, Guang Chen, Jan Peters, Alois Knoll
Deployment of Reinforcement Learning (RL) algorithms for robotics applications in the real world requires ensuring the safety of the robot and its environment.
1 code implementation • 24 Feb 2023 • Carlos E. Luis, Alessandro G. Bottero, Julia Vinogradska, Felix Berkenkamp, Jan Peters
We consider the problem of quantifying uncertainty over expected cumulative rewards in model-based reinforcement learning.
1 code implementation • 11 Jan 2023 • Piotr Kicki, Puze Liu, Davide Tateo, Haitham Bou-Ammar, Krzysztof Walas, Piotr Skrzypczyński, Jan Peters
Motion planning is a mature area of research in robotics with many well-established methods based on optimization or sampling the state space, suitable for solving kinematic motion planning.
1 code implementation • 9 Dec 2022 • Alessandro G. Bottero, Carlos E. Luis, Julia Vinogradska, Felix Berkenkamp, Jan Peters
We consider a sequential decision making task where we are not allowed to evaluate parameters that violate an a priori unknown (safety) constraint.
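A minimal numpy sketch of the safe-evaluation idea, assuming a Gaussian-process surrogate whose pessimistic lower confidence bound certifies safety; the kernel, threshold, and candidate grid are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def rbf(a, b, ls=0.3):
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(x_train, y_train, x_query, noise=1e-3):
    """GP posterior mean/std on a 1-D grid (squared-exponential kernel)."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    Ks = rbf(x_query, x_train)
    mu = Ks @ np.linalg.solve(K, y_train)
    var = 1.0 - np.sum(Ks * np.linalg.solve(K, Ks.T).T, axis=1)
    return mu, np.sqrt(np.maximum(var, 0.0))

def safe_candidates(x_train, y_train, x_query, threshold=0.0, beta=2.0):
    """Only parameters whose pessimistic (lower-bound) constraint value
    stays above the threshold are eligible for evaluation."""
    mu, std = gp_posterior(x_train, y_train, x_query)
    return x_query[mu - beta * std >= threshold]
```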
no code implementations • 4 Dec 2022 • An T. Le, Kay Hansel, Jan Peters, Georgia Chalvatzaki
We present hierarchical policy blending as optimal transport (HiPBOT).
no code implementations • 29 Nov 2022 • Hamish Flynn, David Reeb, Melih Kandemir, Jan Peters
On the one hand, we found that PAC-Bayes bounds are a useful tool for designing offline bandit algorithms with performance guarantees.
1 code implementation • 26 Nov 2022 • Max Siebenborn, Boris Belousov, Junning Huang, Jan Peters
On the other hand, the proposed Decision LSTM is able to achieve expert-level performance on these tasks, in addition to learning a swing-up controller on the real system.
no code implementations • 2 Nov 2022 • Hany Abdulsamad, Peter Nickl, Pascal Klink, Jan Peters
We derive two efficient variational inference techniques to learn these representations and highlight the advantages of hierarchical infinite local regression models, such as dealing with non-smooth functions, mitigating catastrophic forgetting, and enabling parameter sharing and fast predictions.
no code implementations • 23 Oct 2022 • Tim Schneider, Boris Belousov, Georgia Chalvatzaki, Diego Romeres, Devesh K. Jha, Jan Peters
Robotic manipulation stands as a largely unsolved problem despite significant advances in robotics and machine learning in recent years.
no code implementations • 22 Oct 2022 • Vignesh Prasad, Dorothea Koert, Ruth Stock-Homburg, Jan Peters, Georgia Chalvatzaki
Modeling interaction dynamics to generate robot trajectories that enable a robot to adapt and react to a human's actions and intentions is critical for efficient and effective collaborative Human-Robot Interactions (HRI).
no code implementations • 14 Oct 2022 • Kay Hansel, Julen Urain, Jan Peters, Georgia Chalvatzaki
To combine the benefits of reactive policies and planning, we propose a hierarchical motion generation method.
1 code implementation • 7 Oct 2022 • Joe Watson, Jan Peters
Monte Carlo methods have become increasingly relevant for control of non-differentiable systems, approximate dynamics models and learning from data.
no code implementations • 27 Sep 2022 • Puze Liu, Kuo Zhang, Davide Tateo, Snehal Jauhri, Zhiyuan Hu, Jan Peters, Georgia Chalvatzaki
Our proposed approach achieves state-of-the-art performance in simulated high-dimensional and dynamic tasks while avoiding collisions with the environment.
no code implementations • 12 Sep 2022 • Bang You, Jingming Xie, Youping Chen, Jan Peters, Oleg Arenz
Recent works based on state-visitation counts, curiosity and entropy-maximization generate intrinsic reward signals to motivate the agent to visit novel states for exploration.
no code implementations • 10 Sep 2022 • Alexander I. Cowen-Rivers, Philip John Gorinski, Aivar Sootla, Asif Khan, Liu Furui, Jun Wang, Jan Peters, Haitham Bou Ammar
Optimizing combinatorial structures is core to many real-world problems, such as those encountered in life sciences.
1 code implementation • 8 Sep 2022 • Julen Urain, Niklas Funk, Jan Peters, Georgia Chalvatzaki
In this work, we focus on learning SE(3) diffusion models for 6DoF grasping, giving rise to a novel framework for joint grasp and motion optimization without needing to decouple grasp selection from trajectory generation.
no code implementations • 1 Jun 2022 • Tim Schneider, Boris Belousov, Hany Abdulsamad, Jan Peters
Robotic manipulation stands as a largely unsolved problem despite significant advances in robotics and machine learning in the last decades.
no code implementations • 11 Apr 2022 • Julen Urain, An T. Le, Alexander Lambert, Georgia Chalvatzaki, Byron Boots, Jan Peters
In this paper, we focus on the problem of integrating Energy-based Models (EBM) as guiding priors for motion optimization.
no code implementations • 28 Mar 2022 • Daniel Palenicek, Michael Lutter, Jan Peters
Model-based value expansion methods promise to improve the quality of value function targets and, thereby, the effectiveness of value function learning.
no code implementations • 20 Mar 2022 • Lei Xu, Tianyu Ren, Georgia Chalvatzaki, Jan Peters
Task and Motion Planning (TAMP) provides a hierarchical framework for handling the sequential nature of manipulation tasks: it interleaves a symbolic task planner, which generates a possible action sequence, with a motion planner, which checks kinematic feasibility in the geometric world and generates robot trajectories when the relevant constraints are satisfied, e.g., the existence of a collision-free trajectory from one state to another.
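A schematic of this interleaving, with hypothetical `task_planner` and `motion_planner` interfaces; this sketches only the generic TAMP control flow, not the paper's implementation:

```python
def plan_task_and_motion(start, goal, task_planner, motion_planner, max_tries=10):
    """Interleave symbolic task planning with geometric feasibility checks:
    reject action sequences whose steps admit no collision-free trajectory."""
    for _ in range(max_tries):
        action_sequence = task_planner.propose(start, goal)
        if action_sequence is None:
            return None  # symbolic level exhausted
        trajectories, state = [], start
        for action in action_sequence:
            traj = motion_planner.solve(state, action)  # None if infeasible
            if traj is None:
                task_planner.add_failure(action, state)  # prune this branch
                break
            trajectories.append(traj)
            state = traj.end_state
        else:
            return trajectories  # every step was kinematically feasible
    return None
```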
no code implementations • 9 Mar 2022 • Puze Liu, Kuo Zhang, Davide Tateo, Snehal Jauhri, Jan Peters, Georgia Chalvatzaki
Autonomous robots should operate in real-world dynamic environments and collaborate with humans in tight spaces.
no code implementations • 9 Mar 2022 • Marius Memmel, Puze Liu, Davide Tateo, Jan Peters
Black-box policy optimization is a class of reinforcement learning algorithms that explores and updates the policies at the parameter level.
no code implementations • 8 Mar 2022 • Joao Carvalho, Jan Peters
This estimator is unbiased, has low variance, and can be used with differentiable and non-differentiable function approximators.
no code implementations • 8 Mar 2022 • Niklas Funk, Svenja Menzenbach, Georgia Chalvatzaki, Jan Peters
Robot assembly discovery is a challenging problem that lives at the intersection of resource allocation and motion planning.
no code implementations • 8 Mar 2022 • Snehal Jauhri, Jan Peters, Georgia Chalvatzaki
Finally, we zero-transfer our learned 6D fetching policy with BHyRL to our MM robot TIAGo++.
no code implementations • 7 Mar 2022 • Hamish Flynn, David Reeb, Melih Kandemir, Jan Peters
We present a PAC-Bayesian analysis of lifelong learning.
no code implementations • 3 Mar 2022 • Stefan Löckel, Siwei Ju, Maximilian Schaller, Peter van Vliet, Jan Peters
This work contributes to a better understanding and modeling of the human driver, aiming to expedite simulation methods in the modern vehicle development process and potentially supporting automated driving and racing technologies.
1 code implementation • 2 Mar 2022 • Bang You, Oleg Arenz, Youping Chen, Jan Peters
Recent methods for reinforcement learning from images use auxiliary tasks to learn image features that are used by the agent's policy or Q-function.
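The generic form of such auxiliary-task training, in our notation, couples the RL objective with a representation loss on a shared image encoder:

```latex
\mathcal{L}(\theta, \phi) \;=\; \mathcal{L}_{\mathrm{RL}}(\theta, \phi) \;+\; \lambda \, \mathcal{L}_{\mathrm{aux}}(\phi),
```

where $\phi$ are the encoder parameters shared between the policy or Q-function and the auxiliary objective (e.g., reconstruction or contrastive prediction), and $\lambda$ trades off the two losses.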
no code implementations • 11 Feb 2022 • Tuan Dam, Carlo D'Eramo, Jan Peters, Joni Pajarinen
In this work, we propose two methods for improving the convergence rate and exploration based on a newly introduced backup operator and entropy regularization.
no code implementations • 6 Dec 2021 • Julien Brosseit, Benedikt Hahner, Fabio Muratore, Michael Gienger, Jan Peters
However, these methods are notorious for the enormous amount of required training data which is prohibitively expensive to collect on real robots.
no code implementations • 11 Nov 2021 • Hany Abdulsamad, Jan Peters
Optimal control of general nonlinear systems is a central challenge in automation.
no code implementations • 1 Nov 2021 • Fabio Muratore, Fabio Ramos, Greg Turk, Wenhao Yu, Michael Gienger, Jan Peters
The rise of deep learning has caused a paradigm shift in robotics research, favoring methods that require large amounts of data.
no code implementations • 22 Oct 2021 • Julen Urain, Davide Tateo, Jan Peters
Learning robot motions from demonstration requires models able to specify vector fields for the full robot pose when the task is defined in operational space.
1 code implementation • 5 Oct 2021 • Michael Lutter, Boris Belousov, Shie Mannor, Dieter Fox, Animesh Garg, Jan Peters
Especially for continuous control, solving this differential equation and its extension, the Hamilton-Jacobi-Isaacs equation, is important as it yields the optimal policy that achieves the maximum reward on a given task.
1 code implementation • 5 Oct 2021 • Michael Lutter, Jan Peters
Especially for learning dynamics models, such black-box models are not desirable as the underlying principles are well understood, and standard deep networks can learn dynamics that violate these principles.
no code implementations • 29 Sep 2021 • Jihao Andreas Lin, Joe Watson, Pascal Klink, Jan Peters
Bayesian deep learning approaches assume model parameters to be latent random variables and infer posterior predictive distributions to quantify uncertainty, increase safety and trust, and prevent overconfident and unpredictable behavior.
no code implementations • ICLR 2022 • Pascal Klink, Carlo D'Eramo, Jan Peters, Joni Pajarinen
This approach, which we refer to as boosted curriculum reinforcement learning (BCRL), has the benefit of naturally increasing the representativeness of the functional space by adding a new residual each time a new task is presented.
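The residual construction described here can be written as a growing sum of approximators, one per task in the curriculum (our notation, not the paper's):

```latex
Q_{K}(s, a) \;=\; \sum_{k=1}^{K} f_{k}(s, a),
```

where, upon presentation of task $K{+}1$, a new residual $f_{K+1}$ is fit while $f_{1}, \dots, f_{K}$ remain fixed, so the representable function class grows with the curriculum.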
no code implementations • 29 Sep 2021 • Pascal Klink, Haoyi Yang, Jan Peters, Joni Pajarinen
Experiments demonstrate that the resulting introduction of metric structure into the curriculum allows for a well-behaving non-parametric version of SPRL that leads to stable learning performance across tasks.
1 code implementation • 20 Jul 2021 • João Carvalho, Davide Tateo, Fabio Muratore, Jan Peters
This estimator is unbiased, has low variance, and can be used with differentiable and non-differentiable function approximators.
no code implementations • ICML Workshop URL 2021 • Philip Becker-Ehmck, Maximilian Karl, Jan Peters, Patrick van der Smagt
We show that while such an agent is still novelty seeking, i.e., interested in exploring the whole state space, it focuses on exploration where its perceived influence is greater, avoiding areas of greater stochasticity or traps that limit its control.
2 code implementations • 7 Jun 2021 • Antoine Grosnit, Rasul Tutunov, Alexandre Max Maraval, Ryan-Rhys Griffiths, Alexander I. Cowen-Rivers, Lin Yang, Lin Zhu, Wenlong Lyu, Zhitang Chen, Jun Wang, Jan Peters, Haitham Bou-Ammar
We introduce a method combining variational autoencoders (VAEs) and deep metric learning to perform Bayesian optimisation (BO) over high-dimensional and structured input spaces.
Ranked #1 on Molecular Graph Generation on ZINC
1 code implementation • 25 May 2021 • Michael Lutter, Shie Mannor, Jan Peters, Dieter Fox, Animesh Garg
The adversarial perturbations encourage an optimal policy that is robust to changes in the dynamics.
1 code implementation • 17 May 2021 • Joe Watson, Hany Abdulsamad, Rolf Findeisen, Jan Peters
Optimal control under uncertainty is a prevailing challenge for many reasons.
no code implementations • 17 May 2021 • Daniel Tanneberg, Elmar Rueckert, Jan Peters
A key feature of intelligent behaviour is the ability to learn abstract strategies that scale and transfer to unfamiliar problems.
no code implementations • 11 May 2021 • Julen Urain, Anqi Li, Puze Liu, Carlo D'Eramo, Jan Peters
Reactive motion generation problems are usually solved by computing actions as a sum of policies.
1 code implementation • 10 May 2021 • Michael Lutter, Shie Mannor, Jan Peters, Dieter Fox, Animesh Garg
This algorithm enables dynamic programming for continuous states and actions with a known dynamics model.
no code implementations • 22 Apr 2021 • Stephan Weigand, Pascal Klink, Jan Peters, Joni Pajarinen
Due to recent breakthroughs, reinforcement learning (RL) has demonstrated impressive performance in challenging sequential decision-making problems.
no code implementations • 29 Mar 2021 • Hany Abdulsamad, Tim Dorau, Boris Belousov, Jia-Jie Zhu, Jan Peters
Trajectory optimization and model predictive control are essential techniques underpinning advanced robotic applications, ranging from autonomous driving to full-body humanoid control.
no code implementations • 26 Mar 2021 • Daniel Tanneberg, Kai Ploeger, Elmar Rueckert, Jan Peters
Integrating robots in complex everyday environments requires a multitude of problems to be solved.
1 code implementation • 25 Mar 2021 • Andrew S. Morgan, Daljeet Nandha, Georgia Chalvatzaki, Carlo D'Eramo, Aaron M. Dollar, Jan Peters
Substantial advancements to model-based reinforcement learning algorithms have been impeded by the model-bias induced by the collected data, which generally hurts performance.
Tasks: Model-based Reinforcement Learning, reinforcement-learning, +1
1 code implementation • 10 Mar 2021 • Joe Watson, Jan Peters
Discrete-time stochastic optimal control remains a challenging problem for general, nonlinear systems under significant uncertainty, with practical solvers typically relying on the certainty equivalence assumption, replanning and/or extensive regularization.
1 code implementation • 9 Mar 2021 • Tianyu Ren, Georgia Chalvatzaki, Jan Peters
Moreover, we effectively combine this skeleton space with the resultant motion variable spaces into a single extended decision space.
1 code implementation • 25 Feb 2021 • Pascal Klink, Hany Abdulsamad, Boris Belousov, Carlo D'Eramo, Jan Peters, Joni Pajarinen
Across machine learning, the use of curricula has shown strong empirical potential to improve learning from data by avoiding local optima of training objectives.
no code implementations • 11 Dec 2020 • Julen Urain, Davide Tateo, Tianyu Ren, Jan Peters
We present a new family of deep neural network-based dynamic systems.
no code implementations • 7 Dec 2020 • Sebastian Höfer, Kostas Bekris, Ankur Handa, Juan Camilo Gamboa, Florian Golemo, Melissa Mozifian, Chris Atkeson, Dieter Fox, Ken Goldberg, John Leonard, C. Karen Liu, Jan Peters, Shuran Song, Peter Welinder, Martha White
This report presents the debates, posters, and discussions of the Sim2Real workshop held in conjunction with the 2020 edition of the "Robotics: Science and System" conference.
3 code implementations • 7 Dec 2020 • Alexander I. Cowen-Rivers, Wenlong Lyu, Rasul Tutunov, Zhi Wang, Antoine Grosnit, Ryan Rhys Griffiths, Alexandre Max Maraval, Hao Jianye, Jun Wang, Jan Peters, Haitham Bou Ammar
Our results on the Bayesmark benchmark indicate that heteroscedasticity and non-stationarity pose significant challenges for black-box optimisers.
Ranked #1 on Hyperparameter Optimization on Bayesmark
no code implementations • AABI Symposium 2021 • Joe Watson, Jihao Andreas Lin, Pascal Klink, Jan Peters
Neural linear models (NLM) and Gaussian processes (GP) are both examples of Bayesian linear regression on rich feature spaces.
no code implementations • 13 Nov 2020 • Riad Akrour, Asma Atamna, Jan Peters
We then propose an optimization algorithm that follows the gradient of the composition of the objective and the projection and prove its convergence for linear objectives and arbitrary convex and Lipschitz domain defining inequality constraints.
1 code implementation • 10 Nov 2020 • Hany Abdulsamad, Peter Nickl, Pascal Klink, Jan Peters
Probabilistic regression techniques in control and robotics applications have to fulfill different criteria of data-driven adaptability, computational efficiency, scalability to high dimensions, and the capacity to deal with different modalities in the data.
no code implementations • 3 Nov 2020 • Michael Lutter, Johannes Silberbauer, Joe Watson, Jan Peters
A limitation of model-based reinforcement learning (MBRL) is the exploitation of errors in the learned models.
Tasks: Model-based Reinforcement Learning, reinforcement-learning, +1
no code implementations • 27 Oct 2020 • Samuele Tosatto, João Carvalho, Jan Peters
Off-policy Reinforcement Learning (RL) holds the promise of better data efficiency as it allows sample reuse and potentially enables safe interaction with the environment.
no code implementations • 26 Oct 2020 • Kai Ploeger, Michael Lutter, Jan Peters
Robots that can learn in the physical world will be important for enabling robots to escape their stiff and pre-programmed movements.
no code implementations • 26 Oct 2020 • Samuele Tosatto, Georgia Chalvatzaki, Jan Peters
Parameterized movement primitives have been extensively used for imitation learning of robotic tasks.
no code implementations • 25 Oct 2020 • Julen Urain, Michelle Ginesi, Davide Tateo, Jan Peters
We introduce ImitationFlow, a novel Deep generative model that allows learning complex globally stable, stochastic, nonlinear dynamics.
no code implementations • 19 Oct 2020 • Michael Lutter, Johannes Silberbauer, Joe Watson, Jan Peters
In this work, we examine a spectrum of hybrid models for the domain of multi-body robot dynamics.
no code implementations • 14 Oct 2020 • Andreas Look, Simona Doneva, Melih Kandemir, Rainer Gemulla, Jan Peters
In this paper, we introduce an efficient backpropagation scheme for non-constrained implicit functions.
no code implementations • 1 Oct 2020 • Joe Watson, Abraham Imohiosen, Jan Peters
Active inference (AI) is a persuasive theoretical framework from computational neuroscience that seeks to describe action and perception as inference-based computation.
no code implementations • 11 Aug 2020 • Leon Keller, Daniel Tanneberg, Svenja Stark, Jan Peters
One approach that was recently used to autonomously generate a repertoire of diverse skills is a novelty based Quality-Diversity~(QD) algorithm.
no code implementations • 4 Jul 2020 • Mikko Lauri, Joni Pajarinen, Jan Peters, Simone Frintrop
We consider the problem of creating a 3D model using depth images captured by a team of multiple robots.
no code implementations • 1 Jul 2020 • Tuan Dam, Carlo D'Eramo, Jan Peters, Joni Pajarinen
Monte-Carlo planning and Reinforcement Learning (RL) are essential to sequential decision making.
no code implementations • 16 Jun 2020 • Andreas Look, Melih Kandemir, Barbara Rakitsch, Jan Peters
Our deterministic approximation of the transition kernel is applicable to both training and prediction.
no code implementations • 10 Jun 2020 • Dieter Büchler, Simon Guist, Roberto Calandra, Vincent Berenz, Bernhard Schölkopf, Jan Peters
This work is the first to (a) learn a safety-critical dynamic task fail-safely using anthropomorphic robot arms, (b) learn a precision-demanding problem with a PAM-driven system despite the control challenges, and (c) train robots to play table tennis without real balls.
1 code implementation • 10 Jun 2020 • Riad Akrour, Davide Tateo, Jan Peters
Reinforcement learning (RL) has demonstrated its ability to solve high dimensional tasks by leveraging non-linear function approximators.
1 code implementation • 9 Jun 2020 • Georgia Chalvatzaki, Nikolaos Gkanatsios, Petros Maragos, Jan Peters
Inherent morphological characteristics in objects may offer a wide range of plausible grasping orientations that obfuscates the visual learning of robotic grasping.
no code implementations • L4DC 2020 • Hany Abdulsamad, Jan Peters
The control of nonlinear dynamical systems remains a major challenge for autonomous agents.
1 code implementation • ICLR 2020 • Carlo D'Eramo, Davide Tateo, Andrea Bonarini, Marcello Restelli, Jan Peters
We study the benefit of sharing representations among tasks to enable the effective use of deep neural networks in Multi-Task Reinforcement Learning.
1 code implementation • NeurIPS 2020 • Pascal Klink, Carlo D'Eramo, Jan Peters, Joni Pajarinen
Curriculum reinforcement learning (CRL) improves the learning speed and stability of an agent by exposing it to a tailored series of tasks throughout learning.
no code implementations • 20 Mar 2020 • Andrea Cini, Carlo D'Eramo, Jan Peters, Cesare Alippi
In this regard, Weighted Q-Learning (WQL) effectively reduces bias and shows remarkable results in stochastic environments.
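WQL's core idea is to replace the hard maximum in the Q-learning target with a weighted average, where each action is weighted by its estimated probability of being the maximizer; schematically (our notation):

```latex
\widehat{\mathbb{E}}\Big[\max_{a} Q(s', a)\Big] \;\approx\; \sum_{a} w_{a}\, \hat{Q}(s', a),
\qquad w_{a} = \Pr\big(a = \arg\max_{a'} \hat{Q}(s', a')\big),
```

which avoids the systematic overestimation incurred by taking the maximum of noisy estimates.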
1 code implementation • 19 Mar 2020 • Philip Becker-Ehmck, Maximilian Karl, Jan Peters, Patrick van der Smagt
Learning to control robots without requiring engineered models has been a long-term goal, promising diverse and novel applications.
Tasks: Model-based Reinforcement Learning, reinforcement-learning, +1
no code implementations • 8 Mar 2020 • Melvin Laux, Oleg Arenz, Jan Peters, Joni Pajarinen
The ARL framework utilizes an adversary, which is trained to steer the original agent, the protagonist, to challenging states.
no code implementations • 5 Mar 2020 • Fabio Muratore, Christian Eilers, Michael Gienger, Jan Peters
Domain randomization methods tackle this problem by randomizing the physics simulator (source domain) during training according to a distribution over domain parameters in order to obtain more robust policies that are able to overcome the reality gap.
no code implementations • ICLR Workshop DeepDiffEq 2019 • Michael Lutter, Jan Peters
Therefore, differential equations are a promising approach to incorporate prior knowledge in machine learning models to obtain robust and interpretable models.
no code implementations • 26 Feb 2020 • Samuele Tosatto, Jonas Stadtmueller, Jan Peters
The empirical analysis shows that the dimensionality reduction in parameter space is more effective than in configuration space, as it enables the representation of the movements with a significant reduction of parameters.
no code implementations • 25 Feb 2020 • Marcus Ebner von Eschenbach, Binyamin Manela, Jan Peters, Armin Biess
The development of autonomous robotic systems that can learn from human demonstrations to imitate a desired behavior - rather than being manually programmed - has huge technological potential.
no code implementations • 29 Jan 2020 • Samuele Tosatto, Riad Akrour, Jan Peters
The Nadaraya-Watson kernel estimator is among the most popular nonparametric regression techniques thanks to its simplicity.
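For reference, the Nadaraya-Watson estimator is simply a kernel-weighted average of the training targets; a minimal numpy version with a Gaussian kernel (bandwidth chosen for illustration):

```python
import numpy as np

def nadaraya_watson(x_query, x_train, y_train, bandwidth=0.5):
    """Kernel regression: each prediction is a normalized
    kernel-weighted average of the observed targets."""
    w = np.exp(-0.5 * ((x_query[:, None] - x_train[None, :]) / bandwidth) ** 2)
    return (w @ y_train) / w.sum(axis=1)
```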
no code implementations • 22 Jan 2020 • Stefan Löckel, Jan Peters, Peter van Vliet
To approach this problem, we propose Probabilistic Modeling of Driver behavior (ProMoD), a modular framework which splits the task of driver behavior modeling into multiple modules.
1 code implementation • 8 Jan 2020 • Samuele Tosatto, Joao Carvalho, Hany Abdulsamad, Jan Peters
Reinforcement learning (RL) algorithms still suffer from high sample complexity despite outstanding recent successes.
2 code implementations • 4 Jan 2020 • Carlo D'Eramo, Davide Tateo, Andrea Bonarini, Marcello Restelli, Jan Peters
MushroomRL is an open-source Python library developed to simplify the process of implementing and running Reinforcement Learning (RL) experiments.
1 code implementation • 1 Jan 2020 • Simone Parisi, Davide Tateo, Maximilian Hensel, Carlo D'Eramo, Jan Peters, Joni Pajarinen
Empirical results on classic and novel benchmarks show that the proposed approach outperforms existing methods in environments with sparse rewards, especially in the presence of rewards that create suboptimal modes of the objective function.
no code implementations • ICLR 2020 • Nils Rottmann, Tjasa Kunavar, Jan Babic, Jan Peters, Elmar Rueckert
In order to reach similar performance, we developed a hierarchical Bayesian optimization algorithm that replicates the cognitive inference and memorization process for avoiding failures in motor control tasks.
no code implementations • 1 Nov 2019 • Tuan Dam, Pascal Klink, Carlo D'Eramo, Jan Peters, Joni Pajarinen
Finally, we empirically demonstrate the effectiveness of our method in well-known MDP and POMDP benchmarks, showing significant improvement in performance and convergence speed w.r.t.
no code implementations • 30 Oct 2019 • Daniel Tanneberg, Elmar Rueckert, Jan Peters
A key feature of intelligent behavior is the ability to learn abstract strategies that transfer to unfamiliar problems.
1 code implementation • 8 Oct 2019 • Matthias Schultheis, Boris Belousov, Hany Abdulsamad, Jan Peters
Sample-efficient exploration is crucial not only for discovering rewarding experiences but also for adapting to environment changes in a task-agnostic fashion.
1 code implementation • Conference on Robot Learning (CoRL) 2019 • Joe Watson, Hany Abdulsamad, Jan Peters
Optimal control of stochastic nonlinear dynamical systems is a major challenge in the domain of robot learning.
1 code implementation • 7 Oct 2019 • Pascal Klink, Hany Abdulsamad, Boris Belousov, Jan Peters
Generalization and adaptation of learned skills to novel situations is a core requirement for intelligent autonomous robots.
no code implementations • 25 Sep 2019 • Daniel Tanneberg, Elmar Rueckert, Jan Peters
A key feature of intelligent behavior is the ability to learn abstract strategies that transfer to unfamiliar problems.
no code implementations • 13 Sep 2019 • Michael Lutter, Boris Belousov, Kim Listmann, Debora Clever, Jan Peters
The corresponding optimal value function is learned end-to-end by embedding a deep differential network in the Hamilton-Jacobi-Bellman differential equation and minimizing the error of this equality while simultaneously decreasing the discounting from short- to far-sighted to enable the learning.
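The equality being minimized is the Hamilton-Jacobi-Bellman equation; for a continuous-time, discounted problem it reads, in standard form (our notation),

```latex
\rho\, V^{*}(s) \;=\; \max_{a} \Big[ r(s, a) + \nabla_{s} V^{*}(s)^{\top} f(s, a) \Big],
```

where $f$ denotes the system dynamics and $\rho$ the discount rate; the network is trained so that the squared residual of this equality vanishes.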
1 code implementation • 9 Sep 2019 • Sebastian Gomez-Gonzalez, Sergey Prokudin, Bernhard Scholkopf, Jan Peters
Our method uses encoder and decoder deep networks that map complete or partial trajectories to a Gaussian-distributed latent space and back, allowing for fast inference of the future values of a trajectory given previous observations.
no code implementations • 15 Aug 2019 • Zhang-Wei Hong, Joni Pajarinen, Jan Peters
Model-based Reinforcement Learning (MBRL) allows data-efficient learning which is required in real world applications such as robotics.
no code implementations • 11 Aug 2019 • Svenja Stark, Jan Peters, Elmar Rueckert
Accordingly, for learning a new task, time could be saved by restricting the parameter search space by initializing it with the solution of a similar task.
3 code implementations • ICLR 2019 • Michael Lutter, Christian Ritter, Jan Peters
DeLaN can learn the equations of motion of a mechanical system (i.e., system dynamics) with a deep network efficiently while ensuring physical plausibility.
no code implementations • 10 Jul 2019 • Fabio Muratore, Michael Gienger, Jan Peters
Optimizing a policy on a slightly faulty simulator can easily lead to the maximization of the 'Simulation Optimization Bias' (SOB).
1 code implementation • 10 Jul 2019 • Michael Lutter, Kim Listmann, Jan Peters
Applying Deep Learning to control has a lot of potential for enabling the intelligent design of robot control laws.
no code implementations • 6 Jul 2019 • Boris Belousov, Jan Peters
An optimal feedback controller for a given Markov decision process (MDP) can in principle be synthesized by value or policy iteration.
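Tabular value iteration, the baseline synthesis procedure mentioned above, takes only a few lines of numpy; the transition tensor and reward matrix are assumed given (a sketch for the finite-MDP case):

```python
import numpy as np

def value_iteration(P, R, gamma=0.99, tol=1e-8):
    """P: (A, S, S) transition probabilities, R: (S, A) rewards.
    Iterates the Bellman optimality backup to a fixed point and
    returns the optimal values plus a greedy policy."""
    V = np.zeros(P.shape[1])
    while True:
        Q = R.T + gamma * (P @ V)          # (A, S) action values
        V_new = Q.max(axis=0)              # Bellman optimality backup
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=0)
        V = V_new
```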
no code implementations • 4 Jul 2019 • Susanne Trick, Dorothea Koert, Jan Peters, Constantin Rothkopf
The approach is evaluated in a collaborative human-robot interaction task with a 7-DoF robot arm.
no code implementations • 21 Jun 2019 • David Nass, Boris Belousov, Jan Peters
With the increasing pace of automation, modern robotic systems need to act in stochastic, non-stationary, partially observable environments.
no code implementations • 29 May 2019 • Philip Becker-Ehmck, Jan Peters, Patrick van der Smagt
System identification of complex and nonlinear systems is a central problem for model predictive control and model-based reinforcement learning.
no code implementations • 28 Apr 2019 • Zinan Liu, Kai Ploeger, Svenja Stark, Elmar Rueckert, Jan Peters
In quadruped gait learning, policy search methods that scale high dimensional continuous action spaces are commonly used.
no code implementations • 7 Apr 2019 • Dieter Büchler, Roberto Calandra, Jan Peters
High-speed and high-acceleration movements are inherently hard to control.
1 code implementation • 26 Feb 2019 • Mikko Lauri, Joni Pajarinen, Jan Peters
Decentralized policies for information gathering are required when multiple autonomous agents are deployed to collect data about a phenomenon of interest without the ability to communicate.
no code implementations • 17 Feb 2019 • Kristian Kersting, Jan Peters, Constantin Rothkopf
The Federal Government of Germany aims to boost the research in the field of Artificial Intelligence (AI).
1 code implementation • 14 Feb 2019 • Aditya Bhatt, Daniel Palenicek, Boris Belousov, Max Argus, Artemij Amiranashvili, Thomas Brox, Jan Peters
Sample efficiency is a crucial problem in deep reinforcement learning.
1 code implementation • 12 Feb 2019 • Diego Agudelo-España, Sebastian Gomez-Gonzalez, Stefan Bauer, Bernhard Schölkopf, Jan Peters
Online detection of instantaneous changes in the generative process of a data sequence generally focuses on retrospective inference of such change points without considering their future occurrences.
no code implementations • 7 Feb 2019 • Joni Pajarinen, Hong Linh Thai, Riad Akrour, Jan Peters, Gerhard Neumann
Trust-region methods have yielded state-of-the-art results in policy search.
3 code implementations • ICML 2018 • Paavo Parmas, Carl Edward Rasmussen, Jan Peters, Kenji Doya
Previously, the exploding gradient problem has been explained to be central in deep learning and model-based reinforcement learning, because it causes numerical issues and instability in optimization.
Tasks: Model-based Reinforcement Learning, reinforcement-learning, +1
1 code implementation • 19 Dec 2018 • Simone Parisi, Voot Tangkaratt, Jan Peters, Mohammad Emtiyaz Khan
Actor-critic methods can achieve incredible performance on difficult reinforcement learning problems, but they are also prone to instability.
no code implementations • 16 Nov 2018 • Takayuki Osa, Joni Pajarinen, Gerhard Neumann, J. Andrew Bagnell, Pieter Abbeel, Jan Peters
This process of learning from demonstrations, and the study of algorithms to do so, is called imitation learning.
1 code implementation • 31 Aug 2018 • Sebastian Gomez-Gonzalez, Gerhard Neumann, Bernhard Schölkopf, Jan Peters
However, to be able to capture variability and correlations between different joints, probabilistic movement primitives require the estimation of a larger number of parameters than their deterministic counterparts, which focus on modeling only the mean behavior.
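The parameter growth comes from modeling a full distribution over basis-function weights rather than a single weight vector; in the usual ProMP notation,

```latex
y_t \;=\; \Phi_t^{\top} w + \epsilon_y, \qquad w \sim \mathcal{N}(\mu_w, \Sigma_w),
```

a deterministic primitive needs only $\mu_w$, while the probabilistic one must also estimate the full covariance $\Sigma_w$, which couples the joints and grows quadratically with the number of basis functions.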
1 code implementation • 29 May 2018 • Maximilian Sieb, Matthias Schultheis, Sebastian Szelag, Rudolf Lioutikov, Jan Peters
Using movement primitive libraries is an effective means to enable robots to solve more complex tasks.
no code implementations • 1 Mar 2018 • Adrian Šošić, Elmar Rueckert, Jan Peters, Abdelhak M. Zoubir, Heinz Koeppl
Advances in the field of inverse reinforcement learning (IRL) have led to sophisticated inference frameworks that relax the original modeling assumption of observing an agent behavior that reflects only a single intention.
no code implementations • 22 Feb 2018 • Daniel Tanneberg, Jan Peters, Elmar Rueckert
By using learning signals that mimic the intrinsic motivation signal of cognitive dissonance, together with a mental replay strategy to intensify experiences, the stochastic recurrent network can learn from few physical interactions and adapt to novel environments within seconds.
1 code implementation • 29 Dec 2017 • Boris Belousov, Jan Peters
We carry out asymptotic analysis of the solutions for different values of $\alpha$ and demonstrate the effects of using different divergence functions on a multi-armed bandit problem and on common standard reinforcement learning problems.
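The family of divergences analyzed here is the $\alpha$-divergence, which contains both KL directions as limiting cases; one standard parameterization is

```latex
D_{\alpha}(p \,\|\, q) \;=\; \frac{1}{\alpha(\alpha - 1)} \left( \int p(x)^{\alpha}\, q(x)^{1-\alpha}\, dx \;-\; 1 \right),
```

which recovers $\mathrm{KL}(p \,\|\, q)$ as $\alpha \to 1$ and $\mathrm{KL}(q \,\|\, p)$ as $\alpha \to 0$.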
no code implementations • ICML 2017 • Riad Akrour, Dmitry Sorokin, Jan Peters, Gerhard Neumann
Bayesian optimization is renowned for its sample efficiency but its application to higher dimensional tasks is impeded by its focus on global optimization.
no code implementations • 10 Nov 2016 • Voot Tangkaratt, Herke van Hoof, Simone Parisi, Gerhard Neumann, Jan Peters, Masashi Sugiyama
A naive application of unsupervised dimensionality reduction methods to the context variables, such as principal component analysis, is insufficient as task-relevant input may be ignored.
no code implementations • 29 Jun 2016 • Riad Akrour, Abbas Abdolmaleki, Hany Abdulsamad, Jan Peters, Gerhard Neumann
In order to show the monotonic improvement of our algorithm, we additionally conduct a theoretical analysis of our policy update scheme to derive a lower bound of the change in policy return between successive iterations.
1 code implementation • 24 Feb 2014 • Roberto Calandra, Jan Peters, Carl Edward Rasmussen, Marc Peter Deisenroth
This feature space is often learned in an unsupervised way, which might lead to data representations that are not useful for the overall regression task.
no code implementations • 2 Jul 2013 • Marc Peter Deisenroth, Peter Englert, Jan Peters, Dieter Fox
Learning policies that generalize across multiple tasks is an important and challenging research topic in reinforcement learning and robotics.