Search Results for author: Jan Peters

Found 110 papers, 36 papers with code

Active Inference for Robotic Manipulation

no code implementations 1 Jun 2022 Tim Schneider, Boris Belousov, Hany Abdulsamad, Jan Peters

Robotic manipulation stands as a largely unsolved problem despite significant advances in robotics and machine learning over recent decades.

Learning Implicit Priors for Motion Optimization

no code implementations 11 Apr 2022 Alexander Lambert, An T. Le, Julen Urain, Georgia Chalvatzaki, Byron Boots, Jan Peters

In this paper, we focus on the problem of integrating Energy-based Models (EBM) as guiding priors for motion optimization.

Revisiting Model-based Value Expansion

no code implementations 28 Mar 2022 Daniel Palenicek, Michael Lutter, Jan Peters

Model-based value expansion methods promise to improve the quality of value function targets and, thereby, the effectiveness of value function learning.

Model-based Reinforcement Learning

Accelerating Integrated Task and Motion Planning with Neural Feasibility Checking

no code implementations 20 Mar 2022 Lei Xu, Tianyu Ren, Georgia Chalvatzaki, Jan Peters

Task and Motion Planning (TAMP) provides a hierarchical framework for handling the sequential nature of manipulation tasks: a symbolic task planner that generates candidate action sequences is interleaved with a motion planner that checks kinematic feasibility in the geometric world and generates robot trajectories when the relevant constraints are satisfied, e.g., a collision-free trajectory from one state to another.

Motion Planning
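The interleaving described in the abstract above can be sketched in a few lines. This is an illustrative outline only, not the authors' implementation; the planner interfaces (`propose`, `plan`, `add_constraint`) are hypothetical names:

```python
# Minimal sketch of a TAMP loop: the symbolic planner proposes action
# sequences ("skeletons"), and a motion planner checks each action's
# kinematic feasibility in the geometric world.

def task_and_motion_plan(task_planner, motion_planner, goal, max_skeletons=10):
    """Interleave symbolic task planning with geometric feasibility checks."""
    for _ in range(max_skeletons):
        skeleton = task_planner.propose(goal)    # symbolic action sequence
        if skeleton is None:
            return None                          # no more candidate plans
        trajectories = []
        for action in skeleton:
            traj = motion_planner.plan(action)   # None if infeasible (e.g. collision)
            if traj is None:
                task_planner.add_constraint(action)  # prune this branch and replan
                break
            trajectories.append(traj)
        else:
            return skeleton, trajectories        # every action was feasible
    return None
```

The paper's contribution is to replace the expensive motion-planner feasibility check with a learned neural predictor, so infeasible skeletons can be rejected before geometric planning is invoked.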

Regularized Deep Signed Distance Fields for Reactive Motion Generation

no code implementations 9 Mar 2022 Puze Liu, Kuo Zhang, Davide Tateo, Snehal Jauhri, Jan Peters, Georgia Chalvatzaki

Autonomous robots should operate in real-world dynamic environments and collaborate with humans in tight spaces.

Inductive Bias

Dimensionality Reduction and Prioritized Exploration for Policy Search

no code implementations 9 Mar 2022 Marius Memmel, Puze Liu, Davide Tateo, Jan Peters

Black-box policy optimization is a class of reinforcement learning algorithms that explores and updates the policies at the parameter level.

Dimensionality Reduction
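A minimal illustration of the parameter-level exploration mentioned above (not the method proposed in this paper) is a simple (1+1)-style stochastic search that perturbs the policy's parameter vector directly and keeps improvements:

```python
import numpy as np

def black_box_search(objective, dim, iterations=200, sigma=0.1, seed=0):
    """(1+1)-style search: explore by perturbing parameters, keep improvements."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(dim)            # policy parameters
    best = objective(theta)
    for _ in range(iterations):
        candidate = theta + sigma * rng.standard_normal(dim)  # parameter-level exploration
        value = objective(candidate)
        if value > best:             # greedy update at the parameter level
            theta, best = candidate, value
    return theta, best

# Toy "return" with optimum at theta = [1, -1]; purely illustrative
ret = lambda th: -float(np.sum((th - np.array([1.0, -1.0])) ** 2))
theta, best = black_box_search(ret, dim=2)
```

Because such methods treat the policy as a black box, their sample complexity grows with the parameter dimension, which is what motivates the dimensionality-reduction and prioritized-exploration techniques studied in the paper.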

An Analysis of Measure-Valued Derivatives for Policy Gradients

no code implementations 8 Mar 2022 Joao Carvalho, Jan Peters

This estimator is unbiased, has low variance, and can be used with differentiable and non-differentiable function approximators.

Robot Learning of Mobile Manipulation with Reachability Behavior Priors

no code implementations 8 Mar 2022 Snehal Jauhri, Jan Peters, Georgia Chalvatzaki

Finally, we zero-transfer our learned 6D fetching policy with BHyRL to our MM robot TIAGo++.

An Adaptive Human Driver Model for Realistic Race Car Simulations

no code implementations 3 Mar 2022 Stefan Löckel, Siwei Ju, Maximilian Schaller, Peter van Vliet, Jan Peters

This work contributes to a better understanding and modeling of the human driver, aiming to expedite simulation methods in the modern vehicle development process and potentially supporting automated driving and racing technologies.

Imitation Learning

Integrating Contrastive Learning with Dynamic Models for Reinforcement Learning from Images

1 code implementation 2 Mar 2022 Bang You, Oleg Arenz, Youping Chen, Jan Peters

Recent methods for reinforcement learning from images use auxiliary tasks to learn image features that are used by the agent's policy or Q-function.

Contrastive Learning · Data Augmentation +2

A Unified Perspective on Value Backup and Exploration in Monte-Carlo Tree Search

no code implementations 11 Feb 2022 Tuan Dam, Carlo D'Eramo, Jan Peters, Joni Pajarinen

In this work, we propose two methods for improving the convergence rate and exploration based on a newly introduced backup operator and entropy regularization.

Atari Games · Decision Making +1

Distilled Domain Randomization

no code implementations 6 Dec 2021 Julien Brosseit, Benedikt Hahner, Fabio Muratore, Michael Gienger, Jan Peters

However, these methods are notorious for the enormous amount of required training data which is prohibitively expensive to collect on real robots.


Robot Learning from Randomized Simulations: A Review

no code implementations 1 Nov 2021 Fabio Muratore, Fabio Ramos, Greg Turk, Wenhao Yu, Michael Gienger, Jan Peters

The rise of deep learning has caused a paradigm shift in robotics research, favoring methods that require large amounts of data.

Learning Stable Vector Fields on Lie Groups

no code implementations 22 Oct 2021 Julen Urain, Davide Tateo, Jan Peters

Learning robot motions from demonstration requires having models that are able to represent vector fields for the full robot pose when the task is defined in operational space.

Continuous-Time Fitted Value Iteration for Robust Policies

1 code implementation 5 Oct 2021 Michael Lutter, Boris Belousov, Shie Mannor, Dieter Fox, Animesh Garg, Jan Peters

Especially for continuous control, solving this differential equation and its extension, the Hamilton-Jacobi-Isaacs equation, is important as it yields the optimal policy that achieves the maximum reward on a given task.

Continuous Control

Combining Physics and Deep Learning to learn Continuous-Time Dynamics Models

1 code implementation 5 Oct 2021 Michael Lutter, Jan Peters

Especially for learning dynamics models, these black-box models are not desirable as the underlying principles are well understood and the standard deep networks can learn dynamics that violate these principles.

Boosted Curriculum Reinforcement Learning

no code implementations ICLR 2022 Pascal Klink, Carlo D'Eramo, Jan Peters, Joni Pajarinen

This approach, which we refer to as boosted curriculum reinforcement learning (BCRL), has the benefit of naturally increasing the representativeness of the functional space by adding a new residual each time a new task is presented.


Metrics Matter: A Closer Look on Self-Paced Reinforcement Learning

no code implementations 29 Sep 2021 Pascal Klink, Haoyi Yang, Jan Peters, Joni Pajarinen

Experiments demonstrate that the resulting introduction of metric structure into the curriculum allows for a well-behaving non-parametric version of SPRL that leads to stable learning performance across tasks.


Function-Space Variational Inference for Deep Bayesian Classification

no code implementations 29 Sep 2021 Jihao Andreas Lin, Joe Watson, Pascal Klink, Jan Peters

Bayesian deep learning approaches assume model parameters to be latent random variables and infer posterior predictive distributions to quantify uncertainty, increase safety and trust, and prevent overconfident and unpredictable behavior.

Adversarial Robustness · Classification +2

An Empirical Analysis of Measure-Valued Derivatives for Policy Gradients

1 code implementation 20 Jul 2021 João Carvalho, Davide Tateo, Fabio Muratore, Jan Peters

This estimator is unbiased, has low variance, and can be used with differentiable and non-differentiable function approximators.

Exploration via Empowerment Gain: Combining Novelty, Surprise and Learning Progress

no code implementations ICML Workshop URL 2021 Philip Becker-Ehmck, Maximilian Karl, Jan Peters, Patrick van der Smagt

We show that while such an agent is still novelty-seeking, i.e., interested in exploring the whole state space, it focuses on exploration where its perceived influence is greater, avoiding areas of greater stochasticity or traps that limit its control.

Robust Value Iteration for Continuous Control Tasks

1 code implementation 25 May 2021 Michael Lutter, Shie Mannor, Jan Peters, Dieter Fox, Animesh Garg

The adversarial perturbations encourage an optimal policy that is robust to changes in the dynamics.

Continuous Control · reinforcement-learning

Evolutionary Training and Abstraction Yields Algorithmic Generalization of Neural Computers

no code implementations 17 May 2021 Daniel Tanneberg, Elmar Rueckert, Jan Peters

A key feature of intelligent behaviour is the ability to learn abstract strategies that scale and transfer to unfamiliar problems.

Value Iteration in Continuous Actions, States and Time

1 code implementation 10 May 2021 Michael Lutter, Shie Mannor, Jan Peters, Dieter Fox, Animesh Garg

This algorithm enables dynamic programming for continuous states and actions with a known dynamics model.

Reinforcement Learning using Guided Observability

no code implementations 22 Apr 2021 Stephan Weigand, Pascal Klink, Jan Peters, Joni Pajarinen

Due to recent breakthroughs, reinforcement learning (RL) has demonstrated impressive performance in challenging sequential decision-making problems.

Decision Making · OpenAI Gym +1

Distributionally Robust Trajectory Optimization Under Uncertain Dynamics via Relative Entropy Trust-Regions

no code implementations 29 Mar 2021 Hany Abdulsamad, Tim Dorau, Boris Belousov, Jia-Jie Zhu, Jan Peters

Trajectory optimization and model predictive control are essential techniques underpinning advanced robotic applications, ranging from autonomous driving to full-body humanoid control.

Autonomous Driving

SKID RAW: Skill Discovery from Raw Trajectories

no code implementations 26 Mar 2021 Daniel Tanneberg, Kai Ploeger, Elmar Rueckert, Jan Peters

Integrating robots in complex everyday environments requires a multitude of problems to be solved.

Variational Inference

Model Predictive Actor-Critic: Accelerating Robot Skill Acquisition with Deep Reinforcement Learning

1 code implementation 25 Mar 2021 Andrew S. Morgan, Daljeet Nandha, Georgia Chalvatzaki, Carlo D'Eramo, Aaron M. Dollar, Jan Peters

Substantial advancements to model-based reinforcement learning algorithms have been impeded by the model-bias induced by the collected data, which generally hurts performance.

Model-based Reinforcement Learning · reinforcement-learning

Advancing Trajectory Optimization with Approximate Inference: Exploration, Covariance Control and Adaptive Risk

1 code implementation 10 Mar 2021 Joe Watson, Jan Peters

Discrete-time stochastic optimal control remains a challenging problem for general, nonlinear systems under significant uncertainty, with practical solvers typically relying on the certainty equivalence assumption, replanning and/or extensive regularization.

Extended Tree Search for Robot Task and Motion Planning

1 code implementation 9 Mar 2021 Tianyu Ren, Georgia Chalvatzaki, Jan Peters

Moreover, we effectively combine this skeleton space with the resultant motion variable spaces into a single extended decision space.

Decision Making · Motion Planning

A Probabilistic Interpretation of Self-Paced Learning with Applications to Reinforcement Learning

1 code implementation 25 Feb 2021 Pascal Klink, Hany Abdulsamad, Boris Belousov, Carlo D'Eramo, Jan Peters, Joni Pajarinen

Across machine learning, the use of curricula has shown strong empirical potential to improve learning from data by avoiding local optima of training objectives.


Perspectives on Sim2Real Transfer for Robotics: A Summary of the R:SS 2020 Workshop

no code implementations 7 Dec 2020 Sebastian Höfer, Kostas Bekris, Ankur Handa, Juan Camilo Gamboa, Florian Golemo, Melissa Mozifian, Chris Atkeson, Dieter Fox, Ken Goldberg, John Leonard, C. Karen Liu, Jan Peters, Shuran Song, Peter Welinder, Martha White

This report presents the debates, posters, and discussions of the Sim2Real workshop held in conjunction with the 2020 edition of the "Robotics: Science and System" conference.

Convex Optimization with an Interpolation-based Projection and its Application to Deep Learning

no code implementations 13 Nov 2020 Riad Akrour, Asma Atamna, Jan Peters

We then propose an optimization algorithm that follows the gradient of the composition of the objective and the projection, and prove its convergence for linear objectives and arbitrary convex, Lipschitz domain-defining inequality constraints.

A Variational Infinite Mixture for Probabilistic Inverse Dynamics Learning

1 code implementation 10 Nov 2020 Hany Abdulsamad, Peter Nickl, Pascal Klink, Jan Peters

Probabilistic regression techniques in control and robotics applications have to fulfill different criteria of data-driven adaptability, computational efficiency, scalability to high dimensions, and the capacity to deal with different modalities in the data.

Batch Reinforcement Learning with a Nonparametric Off-Policy Policy Gradient

no code implementations 27 Oct 2020 Samuele Tosatto, João Carvalho, Jan Peters

Off-policy Reinforcement Learning (RL) holds the promise of better data efficiency as it allows sample reuse and potentially enables safe interaction with the environment.

Policy Gradient Methods · reinforcement-learning

High Acceleration Reinforcement Learning for Real-World Juggling with Binary Rewards

no code implementations 26 Oct 2020 Kai Ploeger, Michael Lutter, Jan Peters

Robots that can learn in the physical world will be important for enabling robots to escape their stiff and pre-programmed movements.


Contextual Latent-Movements Off-Policy Optimization for Robotic Manipulation Skills

no code implementations 26 Oct 2020 Samuele Tosatto, Georgia Chalvatzaki, Jan Peters

Parameterized movement primitives have been extensively used for imitation learning of robotic tasks.

Imitation Learning

ImitationFlow: Learning Deep Stable Stochastic Dynamic Systems by Normalizing Flows

no code implementations 25 Oct 2020 Julen Urain, Michelle Ginesi, Davide Tateo, Jan Peters

We introduce ImitationFlow, a novel Deep generative model that allows learning complex globally stable, stochastic, nonlinear dynamics.

A Differentiable Newton Euler Algorithm for Multi-body Model Learning

no code implementations 19 Oct 2020 Michael Lutter, Johannes Silberbauer, Joe Watson, Jan Peters

In this work, we examine a spectrum of hybrid models for the domain of multi-body robot dynamics.

Differentiable Implicit Layers

no code implementations 14 Oct 2020 Andreas Look, Simona Doneva, Melih Kandemir, Rainer Gemulla, Jan Peters

In this paper, we introduce an efficient backpropagation scheme for non-constrained implicit functions.

Active Inference or Control as Inference? A Unifying View

no code implementations 1 Oct 2020 Joe Watson, Abraham Imohiosen, Jan Peters

Active inference (AI) is a persuasive theoretical framework from computational neuroscience that seeks to describe action and perception as inference-based computation.

Model-Based Quality-Diversity Search for Efficient Robot Learning

no code implementations 11 Aug 2020 Leon Keller, Daniel Tanneberg, Svenja Stark, Jan Peters

One approach that was recently used to autonomously generate a repertoire of diverse skills is a novelty based Quality-Diversity~(QD) algorithm.

Multi-Sensor Next-Best-View Planning as Matroid-Constrained Submodular Maximization

no code implementations 4 Jul 2020 Mikko Lauri, Joni Pajarinen, Jan Peters, Simone Frintrop

We consider the problem of creating a 3D model using depth images captured by a team of multiple robots.

Convex Regularization in Monte-Carlo Tree Search

no code implementations 1 Jul 2020 Tuan Dam, Carlo D'Eramo, Jan Peters, Joni Pajarinen

Monte-Carlo planning and Reinforcement Learning (RL) are essential to sequential decision making.

Atari Games · Decision Making

Deterministic Variational Inference for Neural SDEs

no code implementations 16 Jun 2020 Andreas Look, Melih Kandemir, Jan Peters

We approximate the intractable data fit term of the evidence lower bound by a novel bidimensional moment matching algorithm: vertical along the neural net layers and horizontal along the time direction.

Time Series · Variational Inference

Learning to Play Table Tennis From Scratch using Muscular Robots

no code implementations 10 Jun 2020 Dieter Büchler, Simon Guist, Roberto Calandra, Vincent Berenz, Bernhard Schölkopf, Jan Peters

This work is the first to (a) fail-safe learn a safety-critical dynamic task using anthropomorphic robot arms, (b) learn a precision-demanding problem with a PAM-driven system despite the control challenges, and (c) train robots to play table tennis without real balls.


Continuous Action Reinforcement Learning from a Mixture of Interpretable Experts

1 code implementation 10 Jun 2020 Riad Akrour, Davide Tateo, Jan Peters

Reinforcement learning (RL) has demonstrated its ability to solve high dimensional tasks by leveraging non-linear function approximators.


Orientation Attentive Robotic Grasp Synthesis with Augmented Grasp Map Representation

1 code implementation 9 Jun 2020 Georgia Chalvatzaki, Nikolaos Gkanatsios, Petros Maragos, Jan Peters

Inherent morphological characteristics in objects may offer a wide range of plausible grasping orientations that obfuscates the visual learning of robotic grasping.

Grasp Generation · Robotic Grasping

Sharing Knowledge in Multi-Task Deep Reinforcement Learning

1 code implementation ICLR 2020 Carlo D'Eramo, Davide Tateo, Andrea Bonarini, Marcello Restelli, Jan Peters

We study the benefit of sharing representations among tasks to enable the effective use of deep neural networks in Multi-Task Reinforcement Learning.


Self-Paced Deep Reinforcement Learning

1 code implementation NeurIPS 2020 Pascal Klink, Carlo D'Eramo, Jan Peters, Joni Pajarinen

Curriculum reinforcement learning (CRL) improves the learning speed and stability of an agent by exposing it to a tailored series of tasks throughout learning.


Deep Reinforcement Learning with Weighted Q-Learning

no code implementations 20 Mar 2020 Andrea Cini, Carlo D'Eramo, Jan Peters, Cesare Alippi

In this regard, Weighted Q-Learning (WQL) effectively reduces bias and shows remarkable results in stochastic environments.

Gaussian Processes · Q-Learning +2

Learning to Fly via Deep Model-Based Reinforcement Learning

1 code implementation 19 Mar 2020 Philip Becker-Ehmck, Maximilian Karl, Jan Peters, Patrick van der Smagt

Learning to control robots without requiring engineered models has been a long-term goal, promising diverse and novel applications.

Model-based Reinforcement Learning · reinforcement-learning

Deep Adversarial Reinforcement Learning for Object Disentangling

no code implementations 8 Mar 2020 Melvin Laux, Oleg Arenz, Jan Peters, Joni Pajarinen

The ARL framework utilizes an adversary, which is trained to steer the original agent, the protagonist, to challenging states.


Data-efficient Domain Randomization with Bayesian Optimization

no code implementations 5 Mar 2020 Fabio Muratore, Christian Eilers, Michael Gienger, Jan Peters

Domain randomization methods tackle this problem by randomizing the physics simulator (source domain) during training according to a distribution over domain parameters in order to obtain more robust policies that are able to overcome the reality gap.

Dimensionality Reduction of Movement Primitives in Parameter Space

no code implementations 26 Feb 2020 Samuele Tosatto, Jonas Stadtmueller, Jan Peters

The empirical analysis shows that the dimensionality reduction in parameter space is more effective than in configuration space, as it enables the representation of the movements with a significant reduction of parameters.

Dimensionality Reduction

Differential Equations as a Model Prior for Deep Learning and its Applications in Robotics

no code implementations ICLR Workshop DeepDiffEq 2019 Michael Lutter, Jan Peters

Therefore, differential equations are a promising approach to incorporate prior knowledge in machine learning models to obtain robust and interpretable models.

Metric-Based Imitation Learning Between Two Dissimilar Anthropomorphic Robotic Arms

no code implementations 25 Feb 2020 Marcus Ebner von Eschenbach, Binyamin Manela, Jan Peters, Armin Biess

The development of autonomous robotic systems that can learn from human demonstrations to imitate a desired behavior - rather than being manually programmed - has huge technological potential.

Imitation Learning

An Upper Bound of the Bias of Nadaraya-Watson Kernel Regression under Lipschitz Assumptions

no code implementations 29 Jan 2020 Samuele Tosatto, Riad Akrour, Jan Peters

The Nadaraya-Watson kernel estimator is among the most popular nonparametric regression techniques thanks to its simplicity.
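For reference, the Nadaraya-Watson estimator predicts with a kernel-weighted average of the observed targets, y-hat(x) = sum_i K((x - x_i)/h) y_i / sum_i K((x - x_i)/h). A minimal sketch with a Gaussian kernel (the bandwidth value here is arbitrary, chosen only for illustration):

```python
import numpy as np

def nadaraya_watson(x_query, x_train, y_train, bandwidth=0.5):
    """Nadaraya-Watson kernel regression: kernel-weighted average of targets."""
    # Gaussian kernel weight for each training point
    w = np.exp(-0.5 * ((x_query - x_train) / bandwidth) ** 2)
    return np.sum(w * y_train) / np.sum(w)

x = np.array([0.0, 1.0, 2.0])
y = np.array([0.0, 1.0, 2.0])
```

On this symmetric linear toy data, the prediction at the middle point recovers the target exactly; the paper's contribution is an upper bound on the estimator's bias under Lipschitz assumptions.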

A Probabilistic Framework for Imitating Human Race Driver Behavior

no code implementations 22 Jan 2020 Stefan Löckel, Jan Peters, Peter van Vliet

To approach this problem, we propose Probabilistic Modeling of Driver behavior (ProMoD), a modular framework which splits the task of driver behavior modeling into multiple modules.

Car Racing · Imitation Learning

A Nonparametric Off-Policy Policy Gradient

1 code implementation 8 Jan 2020 Samuele Tosatto, Joao Carvalho, Hany Abdulsamad, Jan Peters

Reinforcement learning (RL) algorithms still suffer from high sample complexity despite outstanding recent successes.

Density Estimation · Policy Gradient Methods

MushroomRL: Simplifying Reinforcement Learning Research

2 code implementations 4 Jan 2020 Carlo D'Eramo, Davide Tateo, Andrea Bonarini, Marcello Restelli, Jan Peters

MushroomRL is an open-source Python library developed to simplify the process of implementing and running Reinforcement Learning (RL) experiments.


Long-Term Visitation Value for Deep Exploration in Sparse Reward Reinforcement Learning

1 code implementation 1 Jan 2020 Simone Parisi, Davide Tateo, Maximilian Hensel, Carlo D'Eramo, Jan Peters, Joni Pajarinen

Empirical results on classic and novel benchmarks show that the proposed approach outperforms existing methods in environments with sparse rewards, especially in the presence of rewards that create suboptimal modes of the objective function.


Learning Human Postural Control with Hierarchical Acquisition Functions

no code implementations ICLR 2020 Nils Rottmann, Tjasa Kunavar, Jan Babic, Jan Peters, Elmar Rueckert

In order to reach similar performance, we developed a hierarchical Bayesian optimization algorithm that replicates the cognitive inference and memorization process for avoiding failures in motor control tasks.

Generalized Mean Estimation in Monte-Carlo Tree Search

no code implementations 1 Nov 2019 Tuan Dam, Pascal Klink, Carlo D'Eramo, Jan Peters, Joni Pajarinen

Finally, we empirically demonstrate the effectiveness of our method in well-known MDP and POMDP benchmarks, showing significant improvement in performance and convergence speed w.r.t.

Learning Algorithmic Solutions to Symbolic Planning Tasks with a Neural Computer Architecture

no code implementations 30 Oct 2019 Daniel Tanneberg, Elmar Rueckert, Jan Peters

A key feature of intelligent behavior is the ability to learn abstract strategies that transfer to unfamiliar problems.


Receding Horizon Curiosity

1 code implementation 8 Oct 2019 Matthias Schultheis, Boris Belousov, Hany Abdulsamad, Jan Peters

Sample-efficient exploration is crucial not only for discovering rewarding experiences but also for adapting to environment changes in a task-agnostic fashion.

Efficient Exploration · Experimental Design +1

Stochastic Optimal Control as Approximate Input Inference

1 code implementation Conference on Robot Learning (CoRL) 2019 Joe Watson, Hany Abdulsamad, Jan Peters

Optimal control of stochastic nonlinear dynamical systems is a major challenge in the domain of robot learning.

Self-Paced Contextual Reinforcement Learning

1 code implementation 7 Oct 2019 Pascal Klink, Hany Abdulsamad, Boris Belousov, Jan Peters

Generalization and adaptation of learned skills to novel situations is a core requirement for intelligent autonomous robots.


Learning Algorithmic Solutions to Symbolic Planning Tasks with a Neural Computer

no code implementations 25 Sep 2019 Daniel Tanneberg, Elmar Rueckert, Jan Peters

A key feature of intelligent behavior is the ability to learn abstract strategies that transfer to unfamiliar problems.


HJB Optimal Feedback Control with Deep Differential Value Functions and Action Constraints

no code implementations 13 Sep 2019 Michael Lutter, Boris Belousov, Kim Listmann, Debora Clever, Jan Peters

The corresponding optimal value function is learned end-to-end by embedding a deep differential network in the Hamilton-Jacobi-Bellman differential equation and minimizing the error of this equality while simultaneously decreasing the discounting from short- to far-sighted to enable learning.


Real Time Trajectory Prediction Using Deep Conditional Generative Models

1 code implementation 9 Sep 2019 Sebastian Gomez-Gonzalez, Sergey Prokudin, Bernhard Scholkopf, Jan Peters

Our method uses encoder and decoder deep networks that map complete or partial trajectories to a Gaussian-distributed latent space and back, allowing for fast inference of the future values of a trajectory given previous observations.

Time Series · Time Series Forecasting +1

Model-based Lookahead Reinforcement Learning

no code implementations 15 Aug 2019 Zhang-Wei Hong, Joni Pajarinen, Jan Peters

Model-based Reinforcement Learning (MBRL) allows data-efficient learning which is required in real world applications such as robotics.

Continuous Control · Model-based Reinforcement Learning +1

Experience Reuse with Probabilistic Movement Primitives

no code implementations 11 Aug 2019 Svenja Stark, Jan Peters, Elmar Rueckert

Accordingly, for learning a new task, time could be saved by restricting the parameter search space by initializing it with the solution of a similar task.

Transfer Learning

Deep Lagrangian Networks: Using Physics as Model Prior for Deep Learning

3 code implementations ICLR 2019 Michael Lutter, Christian Ritter, Jan Peters

DeLaN can learn the equations of motion of a mechanical system (i.e., system dynamics) with a deep network efficiently while ensuring physical plausibility.

Assessing Transferability from Simulation to Reality for Reinforcement Learning

no code implementations 10 Jul 2019 Fabio Muratore, Michael Gienger, Jan Peters

Optimizing a policy on a slightly faulty simulator can easily lead to the maximization of the `Simulation Optimization Bias` (SOB).


Deep Lagrangian Networks for end-to-end learning of energy-based control for under-actuated systems

1 code implementation 10 Jul 2019 Michael Lutter, Kim Listmann, Jan Peters

Applying Deep Learning to control has a lot of potential for enabling the intelligent design of robot control laws.

Entropic Regularization of Markov Decision Processes

no code implementations 6 Jul 2019 Boris Belousov, Jan Peters

An optimal feedback controller for a given Markov decision process (MDP) can in principle be synthesized by value or policy iteration.
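As a reminder of the baseline the abstract refers to, value iteration synthesizes the optimal controller by repeatedly applying the Bellman optimality backup V(s) <- max_a [R(s,a) + gamma * sum_s' P(s'|s,a) V(s')]. A minimal tabular sketch (the two-state example MDP below is made up purely for illustration):

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Tabular value iteration for a known MDP.

    P: transition probabilities with shape (A, S, S); R: rewards with shape (S, A).
    Returns the optimal value function and a greedy policy.
    """
    n_actions, n_states, _ = P.shape
    V = np.zeros(n_states)
    while True:
        # Q[s, a] = R[s, a] + gamma * E[V(s') | s, a]
        Q = R + gamma * np.einsum("ast,t->sa", P, V)
        V_new = Q.max(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmax(axis=1)
        V = V_new

# Toy MDP: action 0 stays put (reward 0); action 1 moves state 0 -> state 1
# with reward 1; state 1 is absorbing with zero reward.
P = np.array([[[1, 0], [0, 1]],
              [[0, 1], [0, 1]]], dtype=float)
R = np.array([[0, 1], [0, 0]], dtype=float)
V, pi = value_iteration(P, R, gamma=0.9)
```

The paper's point of departure is that such exact dynamic-programming schemes quickly become intractable, motivating the entropic-regularization view of MDPs.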

Entropic Risk Measure in Policy Search

no code implementations 21 Jun 2019 David Nass, Boris Belousov, Jan Peters

With the increasing pace of automation, modern robotic systems need to act in stochastic, non-stationary, partially observable environments.

Policy Gradient Methods

Switching Linear Dynamics for Variational Bayes Filtering

no code implementations 29 May 2019 Philip Becker-Ehmck, Jan Peters, Patrick van der Smagt

System identification of complex and nonlinear systems is a central problem for model predictive control and model-based reinforcement learning.

Bayesian Inference · Model-based Reinforcement Learning +1

Learning walk and trot from the same objective using different types of exploration

no code implementations 28 Apr 2019 Zinan Liu, Kai Ploeger, Svenja Stark, Elmar Rueckert, Jan Peters

In quadruped gait learning, policy search methods that scale to high-dimensional continuous action spaces are commonly used.

Information Gathering in Decentralized POMDPs by Policy Graph Improvement

1 code implementation 26 Feb 2019 Mikko Lauri, Joni Pajarinen, Jan Peters

Decentralized policies for information gathering are required when multiple autonomous agents are deployed to collect data about a phenomenon of interest without the ability to communicate.

Decision Making

Was ist eine Professur fuer Kuenstliche Intelligenz? (What Is a Professorship for Artificial Intelligence?)

no code implementations 17 Feb 2019 Kristian Kersting, Jan Peters, Constantin Rothkopf

The Federal Government of Germany aims to boost the research in the field of Artificial Intelligence (AI).

Bayesian Online Prediction of Change Points

1 code implementation 12 Feb 2019 Diego Agudelo-España, Sebastian Gomez-Gonzalez, Stefan Bauer, Bernhard Schölkopf, Jan Peters

Online detection of instantaneous changes in the generative process of a data sequence generally focuses on retrospective inference of such change points without considering their future occurrences.

Bayesian Inference · Change Point Detection

PIPPS: Flexible Model-Based Policy Search Robust to the Curse of Chaos

2 code implementations ICML 2018 Paavo Parmas, Carl Edward Rasmussen, Jan Peters, Kenji Doya

Previously, the exploding gradient problem has been explained to be central in deep learning and model-based reinforcement learning, because it causes numerical issues and instability in optimization.

Model-based Reinforcement Learning · reinforcement-learning

TD-Regularized Actor-Critic Methods

1 code implementation 19 Dec 2018 Simone Parisi, Voot Tangkaratt, Jan Peters, Mohammad Emtiyaz Khan

Actor-critic methods can achieve incredible performance on difficult reinforcement learning problems, but they are also prone to instability.


An Algorithmic Perspective on Imitation Learning

no code implementations 16 Nov 2018 Takayuki Osa, Joni Pajarinen, Gerhard Neumann, J. Andrew Bagnell, Pieter Abbeel, Jan Peters

This process of learning from demonstrations, and the study of algorithms to do so, is called imitation learning.

Imitation Learning · Learning Theory

Adaptation and Robust Learning of Probabilistic Movement Primitives

1 code implementation 31 Aug 2018 Sebastian Gomez-Gonzalez, Gerhard Neumann, Bernhard Schölkopf, Jan Peters

However, to capture variability and correlations between different joints, a probabilistic movement primitive requires estimating a larger number of parameters than its deterministic counterparts, which focus on modeling only the mean behavior.

Inverse Reinforcement Learning via Nonparametric Spatio-Temporal Subgoal Modeling

no code implementations 1 Mar 2018 Adrian Šošić, Elmar Rueckert, Jan Peters, Abdelhak M. Zoubir, Heinz Koeppl

Advances in the field of inverse reinforcement learning (IRL) have led to sophisticated inference frameworks that relax the original modeling assumption of observing an agent behavior that reflects only a single intention.

Active Learning · reinforcement-learning

Intrinsic Motivation and Mental Replay enable Efficient Online Adaptation in Stochastic Recurrent Networks

no code implementations 22 Feb 2018 Daniel Tanneberg, Jan Peters, Elmar Rueckert

By using learning signals that mimic the intrinsic motivation signal of cognitive dissonance, in addition to a mental replay strategy to intensify experiences, the stochastic recurrent network can learn from few physical interactions and adapt to novel environments in seconds.

Motion Planning

f-Divergence constrained policy improvement

1 code implementation 29 Dec 2017 Boris Belousov, Jan Peters

We carry out asymptotic analysis of the solutions for different values of $\alpha$ and demonstrate the effects of using different divergence functions on a multi-armed bandit problem and on common standard reinforcement learning problems.

Local Bayesian Optimization of Motor Skills

no code implementations ICML 2017 Riad Akrour, Dmitry Sorokin, Jan Peters, Gerhard Neumann

Bayesian optimization is renowned for its sample efficiency but its application to higher dimensional tasks is impeded by its focus on global optimization.

Imitation Learning

Policy Search with High-Dimensional Context Variables

no code implementations 10 Nov 2016 Voot Tangkaratt, Herke van Hoof, Simone Parisi, Gerhard Neumann, Jan Peters, Masashi Sugiyama

A naive application of unsupervised dimensionality reduction methods to the context variables, such as principal component analysis, is insufficient as task-relevant input may be ignored.

Dimensionality Reduction

Model-Free Trajectory-based Policy Optimization with Monotonic Improvement

no code implementations 29 Jun 2016 Riad Akrour, Abbas Abdolmaleki, Hany Abdulsamad, Jan Peters, Gerhard Neumann

In order to show the monotonic improvement of our algorithm, we additionally conduct a theoretical analysis of our policy update scheme to derive a lower bound of the change in policy return between successive iterations.

Manifold Gaussian Processes for Regression

1 code implementation 24 Feb 2014 Roberto Calandra, Jan Peters, Carl Edward Rasmussen, Marc Peter Deisenroth

This feature space is often learned in an unsupervised way, which might lead to data representations that are not useful for the overall regression task.

Gaussian Processes

Multi-Task Policy Search

no code implementations 2 Jul 2013 Marc Peter Deisenroth, Peter Englert, Jan Peters, Dieter Fox

Learning policies that generalize across multiple tasks is an important and challenging research topic in reinforcement learning and robotics.

Imitation Learning · reinforcement-learning
