Search Results for author: Elmar Rueckert

Found 15 papers, 2 papers with code

CR-VAE: Contrastive Regularization on Variational Autoencoders for Preventing Posterior Collapse

1 code implementation • 6 Sep 2023 • Fotios Lygerakis, Elmar Rueckert

The Variational Autoencoder (VAE) is known to suffer from the phenomenon of posterior collapse, where the latent representations generated by the model become independent of the inputs.
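Posterior collapse can be illustrated numerically: the KL term of the VAE objective, KL(q(z|x) || p(z)), drops to zero when the encoder ignores its input and always outputs the prior. A minimal sketch of this diagnostic (not from the paper; a standard-normal prior and diagonal-Gaussian posterior are assumed):

```python
import numpy as np

def gaussian_kl(mu, log_var):
    """KL( N(mu, sigma^2) || N(0, 1) ) per latent dimension."""
    return 0.5 * (np.exp(log_var) + mu**2 - 1.0 - log_var)

# A healthy encoder maps different inputs to input-dependent posteriors,
# so the KL term stays positive and the code carries information.
healthy = gaussian_kl(mu=np.array([1.5, -0.8]), log_var=np.array([-1.0, -0.5]))

# A collapsed encoder maps *every* input to the prior itself: KL = 0,
# and the latent code is independent of the input.
collapsed = gaussian_kl(mu=np.zeros(2), log_var=np.zeros(2))
```

Watching this per-dimension KL during training is a common way to detect which latent dimensions have collapsed.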

Using Probabilistic Movement Primitives in Analyzing Human Motion Difference under Transcranial Current Stimulation

no code implementations • 5 Jul 2021 • Honghu Xue, Rebecca Herzog, Till M Berger, Tobias Bäumer, Anne Weissbach, Elmar Rueckert

The benefit of ProMPs is that their features are learned directly from the data; they capture the important characteristics of the trajectory shape and can easily be extended to other tasks.
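As a rough sketch of the idea (not the paper's implementation): a ProMP represents each trajectory as a weighted sum of basis functions, fits one weight vector per demonstration, and models the weights with a Gaussian, so the mean and covariance of the learned weights summarize the trajectory shape and its variability. The basis count, kernel width, and toy demonstrations below are illustrative choices of my own:

```python
import numpy as np

np.random.seed(0)

def rbf_features(t, n_basis=10, width=0.02):
    """Normalized Gaussian basis functions over phase t in [0, 1]."""
    centers = np.linspace(0, 1, n_basis)
    phi = np.exp(-(t[:, None] - centers[None, :])**2 / (2 * width))
    return phi / phi.sum(axis=1, keepdims=True)

t = np.linspace(0, 1, 100)
Phi = rbf_features(t)

# Toy demonstrations: noisy sine trajectories standing in for human motion.
demos = np.stack([np.sin(2 * np.pi * t) + 0.05 * np.random.randn(100)
                  for _ in range(20)])

# One weight vector per demonstration via least squares, then a Gaussian
# over the weights -- this distribution is the learned movement primitive.
W = np.linalg.lstsq(Phi, demos.T, rcond=None)[0].T
w_mean, w_cov = W.mean(axis=0), np.cov(W.T)

mean_traj = Phi @ w_mean  # mean trajectory shape under the ProMP
```

Conditioning the Gaussian over weights on via-points is what makes ProMPs reusable across tasks.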

Evolutionary Training and Abstraction Yields Algorithmic Generalization of Neural Computers

no code implementations • 17 May 2021 • Daniel Tanneberg, Elmar Rueckert, Jan Peters

A key feature of intelligent behaviour is the ability to learn abstract strategies that scale and transfer to unfamiliar problems.

SKID RAW: Skill Discovery from Raw Trajectories

no code implementations • 26 Mar 2021 • Daniel Tanneberg, Kai Ploeger, Elmar Rueckert, Jan Peters

Integrating robots in complex everyday environments requires a multitude of problems to be solved.

Variational Inference

Learning Human Postural Control with Hierarchical Acquisition Functions

no code implementations • ICLR 2020 • Nils Rottmann, Tjasa Kunavar, Jan Babic, Jan Peters, Elmar Rueckert

In order to reach similar performance, we developed a hierarchical Bayesian optimization algorithm that replicates the cognitive inference and memorization process for avoiding failures in motor control tasks.

Bayesian Optimization • Memorization
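The paper's hierarchical algorithm is more involved, but as background, a single step of plain Bayesian optimization — a Gaussian-process surrogate plus an upper-confidence-bound acquisition — looks roughly like this (toy objective, kernel, and parameters are my own):

```python
import numpy as np

def rbf_kernel(a, b, length=0.2):
    """Squared-exponential kernel between two 1-D point sets."""
    return np.exp(-(a[:, None] - b[None, :])**2 / (2 * length**2))

def objective(x):
    """Toy stand-in for an expensive motor-control reward."""
    return -(x - 0.6)**2

X_grid = np.linspace(0, 1, 200)          # candidate parameters
X_obs = np.array([0.1, 0.5, 0.9])        # parameters evaluated so far
y_obs = objective(X_obs)

# Gaussian-process posterior mean and variance on the grid.
K = rbf_kernel(X_obs, X_obs) + 1e-6 * np.eye(len(X_obs))
K_s = rbf_kernel(X_grid, X_obs)
mu = K_s @ np.linalg.solve(K, y_obs)
var = 1.0 - np.sum(K_s * np.linalg.solve(K, K_s.T).T, axis=1)

# Upper confidence bound: trade off predicted reward against uncertainty.
ucb = mu + 2.0 * np.sqrt(np.clip(var, 0.0, None))
next_x = X_grid[np.argmax(ucb)]          # next parameter to try on the system
```

The hierarchical variant in the paper layers such searches to avoid catastrophic failures during exploration; this sketch shows only the basic acquisition step.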

Learning Algorithmic Solutions to Symbolic Planning Tasks with a Neural Computer

no code implementations • 25 Sep 2019 • Daniel Tanneberg, Elmar Rueckert, Jan Peters

A key feature of intelligent behavior is the ability to learn abstract strategies that transfer to unfamiliar problems.

Reinforcement Learning (RL)

Experience Reuse with Probabilistic Movement Primitives

no code implementations • 11 Aug 2019 • Svenja Stark, Jan Peters, Elmar Rueckert

Accordingly, for learning a new task, time could be saved by restricting the parameter search space by initializing it with the solution of a similar task.

Transfer Learning
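The warm-starting idea can be sketched with a toy quadratic task and plain gradient descent (the example is entirely my own; the paper works with ProMP parameters and a learned notion of task similarity):

```python
import numpy as np

def loss(theta, target):
    """Toy stand-in for task cost: distance to the task's optimal parameters."""
    return np.sum((theta - target)**2)

def optimize(theta, target, lr=0.1, tol=1e-3):
    """Gradient descent on the toy loss; returns steps until tolerance."""
    steps = 0
    while loss(theta, target) > tol:
        theta = theta - lr * 2.0 * (theta - target)  # gradient of the loss
        steps += 1
    return steps

old_solution = np.array([1.0, 1.0])   # parameters learned on a similar task
new_target = np.array([1.1, 0.9])     # the new task's optimum lies nearby

cold_steps = optimize(np.zeros(2), new_target)   # search from scratch
warm_steps = optimize(old_solution, new_target)  # initialize from the old task
# With these settings: warm_steps = 7 vs cold_steps = 18.
```

Starting from a similar task's solution shrinks the effective search, which is exactly the time saving the abstract describes.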

Learning walk and trot from the same objective using different types of exploration

no code implementations • 28 Apr 2019 • Zinan Liu, Kai Ploeger, Svenja Stark, Elmar Rueckert, Jan Peters

In quadruped gait learning, policy search methods that scale to high-dimensional continuous action spaces are commonly used.

Inverse Reinforcement Learning via Nonparametric Spatio-Temporal Subgoal Modeling

no code implementations • 1 Mar 2018 • Adrian Šošić, Elmar Rueckert, Jan Peters, Abdelhak M. Zoubir, Heinz Koeppl

Advances in the field of inverse reinforcement learning (IRL) have led to sophisticated inference frameworks that relax the original modeling assumption of observing an agent behavior that reflects only a single intention.

Active Learning • Reinforcement Learning (RL) +1

Intrinsic Motivation and Mental Replay enable Efficient Online Adaptation in Stochastic Recurrent Networks

no code implementations • 22 Feb 2018 • Daniel Tanneberg, Jan Peters, Elmar Rueckert

By using learning signals that mimic the intrinsic motivation signal of cognitive dissonance, combined with a mental replay strategy to intensify experiences, the stochastic recurrent network can learn from few physical interactions and adapt to novel environments in seconds.

Motion Planning
