Search Results for author: Jan R. Peters

Found 11 papers, 0 papers with code

Catching heuristics are optimal control policies

no code implementations NeurIPS 2016 Boris Belousov, Gerhard Neumann, Constantin A. Rothkopf, Jan R. Peters

In this paper, we show that interception strategies appearing to be heuristics can be understood as computational solutions to the optimal control problem faced by a ball-catching agent acting under uncertainty.
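The optimal-control framing of interception can be sketched with a toy example: a 1-D agent with double-integrator dynamics is steered by finite-horizon LQR toward the landing point of a ballistic ball. This is purely illustrative of the framing (the paper's model additionally accounts for perceptual uncertainty); all constants and names below are made up.

```python
import numpy as np

# Agent dynamics: state = [position error, velocity], control = acceleration.
dt, T = 0.05, 40                       # 2 s horizon at 20 Hz
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
Qf = np.diag([100.0, 1.0])             # terminal cost: arrive and stop
R = np.array([[0.01]])                 # cheap control

# Ballistic ball launched from x = 0 with vx = 2 m/s, vy = 5 m/s.
g, vx, vy = 9.81, 2.0, 5.0
t_land = 2 * vy / g
x_land = vx * t_land                   # where the agent must be

# Backward Riccati recursion for the time-varying LQR gains (running
# state cost is zero; only the terminal state is penalized).
P, gains = Qf.copy(), []
for _ in range(T):
    K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    gains.append(K)
    P = A.T @ P @ A - A.T @ P @ B @ K
gains.reverse()

# Roll the policy out; the agent starts at x = 0, i.e. error = -x_land.
s = np.array([-x_land, 0.0])
for K in gains:
    s = A @ s - B @ (K @ s)

final_error = abs(s[0])                # agent ends near the landing point
```

The resulting controller reproduces the qualitative behavior of a catcher running to the predicted landing point and decelerating to arrive on time.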

Probabilistic Movement Primitives

no code implementations NeurIPS 2013 Alexandros Paraschos, Christian Daniel, Jan R. Peters, Gerhard Neumann

In order to use such a trajectory distribution for robot movement control, we analytically derive a stochastic feedback controller which reproduces the given trajectory distribution.
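The core representational idea can be sketched as follows: a trajectory is a weighted sum of basis functions, a Gaussian over the weights induces a distribution over trajectories, and the per-timestep mean and variance follow analytically. This is a minimal illustration assuming an RBF parameterization; the names and constants are not from the authors' code.

```python
import numpy as np

def rbf_features(t, centers, width):
    """Normalized Gaussian basis features for a scalar phase t in [0, 1]."""
    phi = np.exp(-0.5 * ((t - centers) / width) ** 2)
    return phi / phi.sum()

n_basis = 10
centers = np.linspace(0, 1, n_basis)
width = 0.5 / n_basis

# Gaussian distribution over basis weights (learned from demonstrations
# in the paper; fixed here purely for illustration).
mu_w = np.sin(np.pi * centers)       # mean weight vector
Sigma_w = 0.01 * np.eye(n_basis)     # weight covariance

# Trajectory distribution: y(t) = phi(t)^T w, so the mean and variance
# at every phase follow analytically from the weight distribution.
phases = np.linspace(0, 1, 50)
Phi = np.stack([rbf_features(t, centers, width) for t in phases])
mean_traj = Phi @ mu_w                                   # E[y(t)]
var_traj = np.einsum('ij,jk,ik->i', Phi, Sigma_w, Phi)   # Var[y(t)]
```

The analytically derived feedback controller in the paper is what turns such a distribution (mean plus covariance at each step) into motor commands that reproduce it on a robot.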

Algorithms for Learning Markov Field Policies

no code implementations NeurIPS 2012 Abdeslam Boularias, Jan R. Peters, Oliver B. Kroemer

We present a new graph-based approach for incorporating domain knowledge in reinforcement learning applications.

Reinforcement Learning

A Non-Parametric Approach to Dynamic Programming

no code implementations NeurIPS 2011 Oliver B. Kroemer, Jan R. Peters

In this paper, we consider the problem of policy evaluation for continuous-state systems.

Density Estimation
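The continuous-state policy-evaluation setting can be sketched with a generic kernelized TD solve: transition samples (s_i, r_i, s'_i) from a fixed policy, with a Nadaraya-Watson kernel smoother standing in for the unknown transition model. This illustrates the setting, not the paper's exact estimator; all constants are made up.

```python
import numpy as np

rng = np.random.default_rng(3)
gamma, bandwidth, n = 0.9, 0.1, 300

# Toy linear system under the policy: s' = 0.8 s + noise, reward r = s.
s = rng.uniform(-1, 1, n)
s_next = 0.8 * s + 0.02 * rng.standard_normal(n)
r = s.copy()

# Kernel-smoothed "transition matrix" between sampled states:
# P[i, j] ~ k(s'_i, s_j), so that E[V(s'_i)] is approximated by sum_j P[i, j] V(s_j).
K = np.exp(-0.5 * ((s_next[:, None] - s[None, :]) / bandwidth) ** 2)
P = K / K.sum(axis=1, keepdims=True)

# Bellman equation V = r + gamma * P V, solved directly at the samples.
V = np.linalg.solve(np.eye(n) - gamma * P, r)

# Analytically, V(s) = s / (1 - 0.8 * gamma), i.e. slope about 3.57.
slope = np.polyfit(s, V, 1)[0]
```

The recovered values track the analytic solution up to kernel-smoothing bias, which shrinks with the bandwidth and sample size.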

Movement extraction by detecting dynamics switches and repetitions

no code implementations NeurIPS 2010 Silvia Chiappa, Jan R. Peters

Many time series, such as human movement data, consist of a sequence of basic actions, e.g., forehands and backhands in tennis.

Time Series

Switched Latent Force Models for Movement Segmentation

no code implementations NeurIPS 2010 Mauricio Alvarez, Jan R. Peters, Neil D. Lawrence, Bernhard Schölkopf

Latent force models encode the interaction between multiple related dynamical systems in the form of a kernel or covariance function.

Fitted Q-iteration by Advantage Weighted Regression

no code implementations NeurIPS 2008 Gerhard Neumann, Jan R. Peters

Recently, fitted Q-iteration (FQI) based methods have become more popular due to their increased sample efficiency, more stable learning, and the higher quality of the resulting policies.
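The FQI loop itself is simple to sketch on a toy 5-state chain MDP. The "regressor" here is exact averaging over one-hot state-action features, so it reduces to tabular value iteration on the batch; this illustrates plain FQI, not the paper's advantage-weighted variant.

```python
import numpy as np

n_states, n_actions, gamma = 5, 2, 0.9
rng = np.random.default_rng(0)

# Deterministic chain: action 1 moves right, action 0 moves left;
# reaching the rightmost state pays reward 1 and ends the episode.
def step(s, a):
    s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    return (1.0 if s_next == n_states - 1 else 0.0), s_next

# Collect a batch of transitions (s, a, r, s') from a random policy.
batch, s = [], 0
for _ in range(2000):
    a = int(rng.integers(n_actions))
    r, s_next = step(s, a)
    batch.append((s, a, r, s_next))
    s = 0 if s_next == n_states - 1 else s_next

# FQI: repeatedly regress Q(s, a) onto bootstrapped targets
# r + gamma * max_a' Q(s', a') computed from the fixed batch.
Q = np.zeros((n_states, n_actions))
for _ in range(50):
    targets, counts = np.zeros_like(Q), np.zeros_like(Q)
    for s_i, a_i, r_i, sn_i in batch:
        targets[s_i, a_i] += r_i + gamma * Q[sn_i].max()
        counts[s_i, a_i] += 1
    Q = np.where(counts > 0, targets / np.maximum(counts, 1), Q)

greedy = Q.argmax(axis=1)   # greedy policy from the fitted Q-function
```

Because the whole batch is reused at every iteration, each transition contributes to every regression pass, which is the source of FQI's sample efficiency relative to online TD updates.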

Policy Search for Motor Primitives in Robotics

no code implementations NeurIPS 2008 Jens Kober, Jan R. Peters

We compare this algorithm to alternative parametrized policy search methods and show that it outperforms previous methods.

Imitation Learning, Policy Gradient Methods +1
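The flavor of EM-based policy search used for motor primitives can be sketched with a toy reward-weighted parameter update: sample exploratory perturbations, evaluate the reward of each, and move the parameters toward perturbations in proportion to their reward. The paper's PoWER algorithm is considerably more elaborate; everything below is an illustrative stand-in.

```python
import numpy as np

rng = np.random.default_rng(1)
theta = np.zeros(3)                        # policy parameters (e.g. primitive weights)
theta_star = np.array([0.5, -0.3, 0.8])    # unknown optimum of the toy reward

def reward(p):
    """Toy episodic return: peaked at theta_star."""
    return np.exp(-np.sum((p - theta_star) ** 2))

sigma = 0.1
for _ in range(100):
    # Sample exploratory perturbations of the parameters.
    eps = sigma * rng.standard_normal((20, 3))
    R = np.array([reward(theta + e) for e in eps])
    # EM-style update: reward-weighted average of the perturbations.
    w = R / R.sum()
    theta = theta + w @ eps

dist = float(np.linalg.norm(theta - theta_star))
```

Unlike vanilla policy gradients, this update needs no learning rate: the normalized reward weights determine the step, which is part of why such methods are attractive on physical robots.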

Local Gaussian Process Regression for Real Time Online Model Learning

no code implementations NeurIPS 2008 Duy Nguyen-Tuong, Jan R. Peters, Matthias Seeger

Inspired by local learning, we propose a method to speed up standard Gaussian process regression (GPR) with local GP models (LGP).

GPR, online learning
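The LGP idea can be sketched as follows: training points are assigned to the nearest of a few local model centers, each local model keeps its own small GP, and test predictions blend the local means with distance-based gating weights. A simplified illustration of the scheme, not the paper's implementation; all constants are made up.

```python
import numpy as np

def rbf_kernel(A, B, length=0.3):
    """Squared-exponential kernel between two 1-D point sets."""
    d2 = (A[:, None] - B[None, :]) ** 2
    return np.exp(-0.5 * d2 / length ** 2)

rng = np.random.default_rng(2)
X = rng.uniform(0, 1, 200)
y = np.sin(2 * np.pi * X) + 0.05 * rng.standard_normal(200)

# Assign each training point to its nearest local model center.
centers = np.linspace(0.1, 0.9, 5)
assign = np.argmin(np.abs(X[:, None] - centers[None, :]), axis=1)

# Precompute each local GP's weights alpha = (K + sigma^2 I)^-1 y.
# The point of LGP: several small inversions instead of one 200x200 one.
local = []
for k in range(len(centers)):
    Xk, yk = X[assign == k], y[assign == k]
    Kk = rbf_kernel(Xk, Xk) + 0.01 * np.eye(len(Xk))
    local.append((Xk, np.linalg.solve(Kk, yk)))

def predict(x):
    """Gate the local GP means by distance of x to each model's center."""
    means = np.array([rbf_kernel(np.array([x]), Xk) @ alpha
                      for Xk, alpha in local]).ravel()
    gates = np.exp(-0.5 * (x - centers) ** 2 / 0.05 ** 2)
    return float(gates @ means / gates.sum())

pred = predict(0.25)   # true function value here is sin(pi / 2) = 1
```

Keeping the local models small is what makes updates and predictions fast enough for the real-time online model-learning setting the paper targets.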

Using Bayesian Dynamical Systems for Motion Template Libraries

no code implementations NeurIPS 2008 Silvia Chiappa, Jens Kober, Jan R. Peters

Motor primitives, or motion templates, have become an important concept both for modeling human motor control and for generating robot behaviors using imitation learning.

Imitation Learning, Time Series
