Search Results for author: Maximilian Karl

Found 14 papers, 3 papers with code

On the Role of the Action Space in Robot Manipulation Learning and Sim-to-Real Transfer

no code implementations • 6 Dec 2023 • Elie Aljalbout, Felix Frank, Maximilian Karl, Patrick van der Smagt

Our findings have important implications for the design of RL algorithms for robot manipulation tasks, and highlight the need for careful consideration of action spaces when training and transferring RL agents for real-world robotics.

Reinforcement Learning (RL) • Robot Manipulation

Action Inference by Maximising Evidence: Zero-Shot Imitation from Observation with World Models

1 code implementation • NeurIPS 2023 • Xingyuan Zhang, Philip Becker-Ehmck, Patrick van der Smagt, Maximilian Karl

Our method is "zero-shot" in the sense that it does not require further training for the world model or online interactions with the environment after given the demonstration.

CLAS: Coordinating Multi-Robot Manipulation with Central Latent Action Spaces

no code implementations • 28 Nov 2022 • Elie Aljalbout, Maximilian Karl, Patrick van der Smagt

Multi-robot manipulation tasks involve various control entities that can be separated into dynamically independent parts.

Robot Manipulation

Exploration via Empowerment Gain: Combining Novelty, Surprise and Learning Progress

no code implementations • ICML Workshop URL 2021 • Philip Becker-Ehmck, Maximilian Karl, Jan Peters, Patrick van der Smagt

We show that while such an agent is still novelty-seeking, i.e. interested in exploring the whole state space, it focuses its exploration where its perceived influence is greater, avoiding areas of high stochasticity or traps that limit its control.

Learning to Fly via Deep Model-Based Reinforcement Learning

1 code implementation • 19 Mar 2020 • Philip Becker-Ehmck, Maximilian Karl, Jan Peters, Patrick van der Smagt

Learning to control robots without requiring engineered models has been a long-term goal, promising diverse and novel applications.

Model-based Reinforcement Learning • reinforcement-learning +1

Beta DVBF: Learning State-Space Models for Control from High Dimensional Observations

no code implementations • 2 Nov 2019 • Neha Das, Maximilian Karl, Philip Becker-Ehmck, Patrick van der Smagt

Learning a model of dynamics from high-dimensional images can be a core ingredient for success in many applications across different domains, especially in sequential decision making.

Decision Making

Unsupervised Real-Time Control through Variational Empowerment

no code implementations • 13 Oct 2017 • Maximilian Karl, Maximilian Soelch, Philip Becker-Ehmck, Djalel Benbouzid, Patrick van der Smagt, Justin Bayer

We introduce a methodology for efficiently computing a lower bound to empowerment, allowing it to be used as an unsupervised cost function for policy learning in real-time control.
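
For context, empowerment at a state s is the channel capacity between actions and future states, max over the action source of I(A; S' | s). A standard way to lower-bound this mutual information is the Barber-Agakov bound with a variational "planner" q(a | s, s'); the sketch below uses that generic bound with assumed network interfaces, not necessarily the paper's exact estimator.

```python
# Minimal sketch of a variational lower bound on empowerment, assuming a
# source policy w(a|s) and a variational planner q(a|s,s') that both
# return torch distributions, plus a (learned) one-step model `env_step`.
# Generic Barber-Agakov bound:  I(A; S'|s) >= H(w(.|s)) + E[log q(a|s,s')]
import torch

def empowerment_lower_bound(source, planner, env_step, s):
    a_dist = source(s)                 # source distribution w(a|s)
    a = a_dist.rsample()               # reparameterised action sample
    s_next = env_step(s, a)            # predicted next state
    bound = a_dist.entropy().sum() + planner(s, s_next).log_prob(a).sum()
    return bound                       # maximise w.r.t. source and planner
```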

Unsupervised preprocessing for Tactile Data

no code implementations • 23 Jun 2016 • Maximilian Karl, Justin Bayer, Patrick van der Smagt

Tactile information is important for gripping, stable grasp, and in-hand manipulation, yet the complexity of tactile data prevents widespread use of such sensors.

Reinforcement Learning (RL)

ML-based tactile sensor calibration: A universal approach

no code implementations • 21 Jun 2016 • Maximilian Karl, Artur Lohrer, Dhananjay Shah, Frederik Diehl, Max Fiedler, Saahil Ognawala, Justin Bayer, Patrick van der Smagt

We study the responses of two tactile sensors, the fingertip sensor from the iCub and the BioTac under different external stimuli.

Deep Variational Bayes Filters: Unsupervised Learning of State Space Models from Raw Data

4 code implementations • 20 May 2016 • Maximilian Karl, Maximilian Soelch, Justin Bayer, Patrick van der Smagt

We introduce Deep Variational Bayes Filters (DVBF), a new method for unsupervised learning and identification of latent Markovian state space models.

Variational Inference
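
A single filter step of a DVBF-style model might look like the sketch below. The locally linear transition and the network interfaces are assumptions for illustration, not the paper's exact parameterisation; the key idea is that the recognition network only outputs stochastic transition variables, so all information must flow through the latent Markovian dynamics.

```python
# Minimal sketch of one DVBF-style filter step (shapes and the locally
# linear transition are assumptions, not the paper's exact design).
import torch

def dvbf_step(z, u, x_next, recognition, decoder, A, B, C):
    """Advance latent z under control u, scored against observation x_next."""
    w_dist = recognition(z, u, x_next)            # q(w | z, u, x')
    w = w_dist.rsample()                          # stochastic transition variable
    z_next = A @ z + B @ u + C @ w                # latent dynamics stay Markovian
    recon = decoder(z_next).log_prob(x_next).sum()
    kl = torch.distributions.kl_divergence(
        w_dist, torch.distributions.Normal(0.0, 1.0)).sum()
    return z_next, recon - kl                     # per-step ELBO contribution
```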

Efficient Empowerment

no code implementations • 28 Sep 2015 • Maximilian Karl, Justin Bayer, Patrick van der Smagt

This is a natural candidate for an intrinsic reward signal in the context of reinforcement learning: the agent will place itself in situations where its actions have maximum stability and maximum influence on the future.

Fast Adaptive Weight Noise

no code implementations • 19 Jul 2015 • Justin Bayer, Maximilian Karl, Daniela Korhammer, Patrick van der Smagt

Marginalising out uncertain quantities within the internal representations or parameters of neural networks is of central importance for a wide range of learning techniques, such as empirical, variational or full Bayesian methods.

Gaussian Processes
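
As a generic illustration of marginalising weight noise (not the paper's fast adaptive scheme), a factorised Gaussian posterior over the weights of a linear layer can be sampled in the pre-activations rather than in the weights, which is exact for this case and much cheaper:

```python
# Minimal sketch: for y = W x with independent W_ij ~ N(mu_ij, sigma_ij^2),
# each output is Gaussian with mean (x @ mu^T) and variance (x^2 @ sigma^2^T),
# so the outputs can be sampled directly ("local reparameterisation").
# This is a generic construction, not the paper's fast adaptive method.
import torch

def noisy_linear(x, w_mu, w_logvar):
    mean = x @ w_mu.t()                      # E[Wx]
    var = (x ** 2) @ w_logvar.exp().t()      # Var[Wx] under independent weights
    return mean + var.sqrt() * torch.randn_like(mean)
```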

Improving approximate RPCA with a k-sparsity prior

no code implementations • 29 Dec 2014 • Maximilian Karl, Christian Osendorfer

A process-centric view of robust PCA (RPCA) allows a fast approximate implementation based on a special form of a deep neural network with weights shared across all layers.

General Classification
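
The process-centric view can be pictured as an unrolled iteration: each "layer" refines a sparse/low-rank split of the input, and the same weights are reused in every layer. The operators below (a learned linear map plus a hard top-k pruning step standing in for the k-sparsity prior) are illustrative assumptions, not the paper's architecture.

```python
# Minimal sketch of an unrolled RPCA-style network with weights shared
# across layers; the learned map W and the hard top-k step stand in for
# the paper's actual operators.
import torch

def k_sparse(s, k):
    thresh = s.abs().flatten().topk(k).values.min()   # k-th largest magnitude
    return s * (s.abs() >= thresh)                    # hard k-sparsity prior

def unrolled_rpca(x, W, n_layers=10, k=50):
    s = torch.zeros_like(x)                 # sparse component estimate
    low_rank = x
    for _ in range(n_layers):               # the same W is shared by all layers
        low_rank = (x - s) @ W              # learned step towards the low-rank part
        s = k_sparse(x - low_rank, k)       # prune the residual to k entries
    return low_rank, s
```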
