Search Results for author: Michael Volpp

Found 7 papers, 5 papers with code

Latent Task-Specific Graph Network Simulators

1 code implementation · 9 Nov 2023 · Philipp Dahlinger, Niklas Freymuth, Michael Volpp, Tai Hoang, Gerhard Neumann

Movement primitives further allow us to accommodate various types of context data, as demonstrated by the use of point clouds during inference.

Meta-Learning · Trajectory Prediction

ProDMPs: A Unified Perspective on Dynamic and Probabilistic Movement Primitives

no code implementations · 4 Oct 2022 · Ge Li, Zeqi Jin, Michael Volpp, Fabian Otto, Rudolf Lioutikov, Gerhard Neumann

MPs can be broadly categorized into two types: (a) dynamics-based approaches that generate smooth trajectories from any initial state, e.g., Dynamic Movement Primitives (DMPs), and (b) probabilistic approaches that capture higher-order statistics of the motion, e.g., Probabilistic Movement Primitives (ProMPs).

Numerical Integration
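For readers skimming the two families the snippet contrasts, here is a minimal sketch in generic notation; the symbols (α_z, β_z, g, Φ_t, w) follow common DMP/ProMP conventions and are not necessarily the paper's exact formulation:

```latex
% DMP (dynamics-based): a forced spring-damper system pulled toward goal g;
% smooth trajectories are guaranteed from any initial state (y_0, z_0).
\tau \dot{z} = \alpha_z \bigl( \beta_z (g - y) - z \bigr) + f(x), \qquad
\tau \dot{y} = z

% ProMP (probabilistic): a linear basis-function model with a Gaussian
% weight distribution, capturing the motion's higher-order statistics.
y_t = \Phi_t^{\top} w + \epsilon_y, \quad w \sim \mathcal{N}(\mu_w, \Sigma_w)
\;\Rightarrow\;
p(y_t) = \mathcal{N}\bigl( \Phi_t^{\top} \mu_w,\; \Phi_t^{\top} \Sigma_w \Phi_t + \sigma_y^2 \bigr)
```

The "Numerical Integration" tag fits this picture: a unified view can replace step-wise numerical integration of the DMP's linear ODE with a closed-form solution, which then behaves like the basis-function model a ProMP-style weight distribution acts on.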

A Unified Perspective on Natural Gradient Variational Inference with Gaussian Mixture Models

1 code implementation · 23 Sep 2022 · Oleg Arenz, Philipp Dahlinger, Zihan Ye, Michael Volpp, Gerhard Neumann

The two currently most effective methods for GMM-based variational inference, VIPS and iBayes-GMM, both employ independent natural gradient updates for the individual components and their weights.

Variational Inference
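As a hedged sketch of the shared structure behind such updates, in generic notation (p̃ denotes the unnormalized target density; the step-size and trust-region choices are precisely where VIPS and iBayes-GMM differ):

```latex
% GMM variational posterior with mixture components indexed by o:
q(x) = \sum_{o} q(o)\, \mathcal{N}\bigl(x \mid \mu_o, \Sigma_o\bigr)

% Per-component expected reward; each (\mu_o, \Sigma_o) follows its natural
% gradient on this objective independently of the other components:
\tilde{R}(o) = \mathbb{E}_{q(x \mid o)} \bigl[ \log \tilde{p}(x) - \log q(x) \bigr]

% Weight update: a softmax over component rewards
% (damped by a step size in practice):
q(o) \propto \exp \tilde{R}(o)
```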

What Matters For Meta-Learning Vision Regression Tasks?

2 code implementations · CVPR 2022 · Ning Gao, Hanna Ziesche, Ngo Anh Vien, Michael Volpp, Gerhard Neumann

To this end, we (i) exhaustively evaluate common meta-learning techniques on these tasks, and (ii) quantitatively analyze the effect of various deep learning techniques commonly used in recent meta-learning algorithms in order to strengthen the generalization capability: data augmentation, domain randomization, task augmentation and meta-regularization.

Contrastive Learning · Data Augmentation +4
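Of the techniques listed in the snippet, task augmentation is perhaps the least standard. A minimal illustrative sketch of one common variant (per-task label shifts); the function name and offset scale are assumptions for illustration, not the paper's code:

```python
import numpy as np

def augment_task(x_ctx, y_ctx, x_tgt, y_tgt, rng):
    """Task augmentation (illustrative sketch): shift every label in a task
    by one random offset. The input-output mapping then varies across
    episodes, so the meta-learner must infer it from the context set
    rather than memorizing it."""
    offset = rng.normal(scale=1.0)  # assumed per-task offset scale
    return x_ctx, y_ctx + offset, x_tgt, y_tgt + offset

# Usage: rng = np.random.default_rng(0); apply once per sampled task.
```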

Bayesian Context Aggregation for Neural Processes

no code implementations · ICLR 2021 · Michael Volpp, Fabian Flürenbrock, Lukas Grossberger, Christian Daniel, Gerhard Neumann

Recently, casting probabilistic regression as a multi-task learning problem in terms of conditional latent variable (CLV) models such as the Neural Process (NP) has shown promising results.

Bayesian Inference · Multi-Task Learning +1
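In this CLV view, predictions marginalize a task-level latent variable inferred from the context set. A sketch in standard NP notation; the factorized-Gaussian mean-aggregation encoder shown here is the common baseline, which the paper's Bayesian aggregation replaces with Gaussian conditioning on per-point latent observations:

```latex
% NP predictive distribution for target input x^*, given context set D_c:
p(y^* \mid x^*, \mathcal{D}_c) =
\int p_\theta(y^* \mid x^*, z)\, q_\phi(z \mid \mathcal{D}_c)\, \mathrm{d}z,
\qquad \mathcal{D}_c = \{(x_n, y_n)\}_{n=1}^{N}

% Standard mean aggregation: permutation-invariant, but heuristic:
r = \frac{1}{N} \sum_{n=1}^{N} \mathrm{enc}_\phi(x_n, y_n), \qquad
q_\phi(z \mid \mathcal{D}_c) =
\mathcal{N}\bigl(z \mid \mu_z(r),\, \operatorname{diag}\sigma_z^2(r)\bigr)
```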

Trajectory-Based Off-Policy Deep Reinforcement Learning

2 code implementations · 14 May 2019 · Andreas Doerr, Michael Volpp, Marc Toussaint, Sebastian Trimpe, Christian Daniel

Policy gradient methods are powerful reinforcement learning algorithms and have been demonstrated to solve many complex tasks.

Continuous Control · Policy Gradient Methods +3
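For reference, the standard likelihood-ratio estimator these methods build on, in generic notation with Â_t an advantage estimate; this is the on-policy form, whereas the paper's contribution is a trajectory-based off-policy variant:

```latex
% Policy gradient theorem (likelihood-ratio / REINFORCE form):
\nabla_\theta J(\theta) =
\mathbb{E}_{\tau \sim \pi_\theta}
\Bigl[ \sum_{t=0}^{T} \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, \hat{A}_t \Bigr]
```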
