Search Results for author: Jürgen Leitner

Found 18 papers, 6 papers with code

EGAD! an Evolved Grasping Analysis Dataset for diversity and reproducibility in robotic manipulation

1 code implementation • 3 Mar 2020 • Douglas Morrison, Peter Corke, Jürgen Leitner

We present the Evolved Grasping Analysis Dataset (EGAD), comprising over 2000 generated objects aimed at training and evaluating robotic visual grasp detection algorithms.

Robotic Grasping • Robotics

Evaluating task-agnostic exploration for fixed-batch learning of arbitrary future tasks

1 code implementation • 20 Nov 2019 • Vibhavari Dasagi, Robert Lee, Jake Bruce, Jürgen Leitner

Deep reinforcement learning has been shown to solve challenging tasks when large amounts of training experience are available, usually obtained online while learning the task.

continuous-control • Continuous Control • +1

A Perceived Environment Design using a Multi-Modal Variational Autoencoder for learning Active-Sensing

no code implementations • 1 Nov 2019 • Timo Korthals, Malte Schilling, Jürgen Leitner

This contribution combines a multi-modal variational autoencoder with an environment to form a perceived environment on which an agent can act.

Ctrl-Z: Recovering from Instability in Reinforcement Learning

no code implementations • 9 Oct 2019 • Vibhavari Dasagi, Jake Bruce, Thierry Peynot, Jürgen Leitner

When learning behavior, training data is often generated by the learner itself; this can result in unstable training dynamics, a problem that is particularly important in safety-sensitive real-world control tasks such as robotics.

continuous-control • Continuous Control • +4

Deep Generative Models for learning Coherent Latent Representations from Multi-Modal Data

no code implementations • ICLR 2019 • Timo Korthals, Marc Hesse, Jürgen Leitner

The application of multi-modal generative models by means of a Variational Autoencoder (VAE) is an emerging research topic for sensor fusion and bi-directional modality exchange.

Sensor Fusion
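
For reference, multi-modal VAE models of this kind build on the standard single-modality variational autoencoder objective, in which each observation x is reconstructed from a latent code z while the approximate posterior is pulled towards a prior (the paper's specific multi-modal combination of these terms is not restated here):

$$ \mathcal{L}(\theta, \phi; x) \;=\; \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big] \;-\; D_{\mathrm{KL}}\big(q_\phi(z \mid x) \,\|\, p(z)\big) $$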

Quantifying the Reality Gap in Robotic Manipulation Tasks

no code implementations • 5 Nov 2018 • Jack Collins, David Howard, Jürgen Leitner

We quantify the accuracy of various simulators compared to a real world robotic reaching and interaction task.

Robotics

Multi-View Picking: Next-best-view Reaching for Improved Grasping in Clutter

3 code implementations • International Conference on Robotics and Automation (ICRA) 2019 • Douglas Morrison, Peter Corke, Jürgen Leitner

Camera viewpoint selection is an important aspect of visual grasp detection, especially in clutter where many occlusions are present.

Robotics

Sim-to-Real Transfer of Robot Learning with Variable Length Inputs

no code implementations • 20 Sep 2018 • Vibhavari Dasagi, Robert Lee, Serena Mou, Jake Bruce, Niko Sünderhauf, Jürgen Leitner

Current end-to-end deep Reinforcement Learning (RL) approaches require jointly learning perception, decision-making and low-level control from very sparse reward signals and high-dimensional inputs, with little capability of incorporating prior knowledge.

Decision Making • Deep Reinforcement Learning • +5

Coordinated Heterogeneous Distributed Perception based on Latent Space Representation

no code implementations • 12 Sep 2018 • Timo Korthals, Jürgen Leitner, Ulrich Rückert

We investigate a reinforcement learning approach for distributed sensing based on the latent space derived from multi-modal deep generative models.

Sensor Fusion

The Limits and Potentials of Deep Learning for Robotics

no code implementations • 18 Apr 2018 • Niko Sünderhauf, Oliver Brock, Walter Scheirer, Raia Hadsell, Dieter Fox, Jürgen Leitner, Ben Upcroft, Pieter Abbeel, Wolfram Burgard, Michael Milford, Peter Corke

In this paper we discuss a number of robotics-specific learning, reasoning, and embodiment challenges for deep learning.

Robotics

Adversarial Discriminative Sim-to-real Transfer of Visuo-motor Policies

1 code implementation • 18 Sep 2017 • Fangyi Zhang, Jürgen Leitner, ZongYuan Ge, Michael Milford, Peter Corke

Policies can be transferred to real environments with only 93 labelled and 186 unlabelled real images.

Visual Servoing from Deep Neural Networks

no code implementations • 24 May 2017 • Quentin Bateux, Eric Marchand, Jürgen Leitner, Francois Chaumette, Peter Corke

We present a deep neural network-based method to perform high-precision, robust and real-time 6 DOF visual servoing.
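
For context, classical image-based visual servoing computes a camera velocity from a feature error e = s - s* with the control law below, where λ is a gain and the hat denotes an estimate of the interaction matrix; how the deep network in this work replaces or complements parts of that classical pipeline is detailed in the paper itself:

$$ v_c \;=\; -\lambda \, \widehat{L_e}^{+} \,(s - s^{*}) $$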

Tuning Modular Networks with Weighted Losses for Hand-Eye Coordination

no code implementations • 15 May 2017 • Fangyi Zhang, Jürgen Leitner, Michael Milford, Peter I. Corke

This paper introduces an end-to-end fine-tuning method to improve hand-eye coordination in modular deep visuo-motor policies (modular networks) where each module is trained independently.

Modular Deep Q Networks for Sim-to-real Transfer of Visuo-motor Policies

no code implementations • 21 Oct 2016 • Fangyi Zhang, Jürgen Leitner, Michael Milford, Peter Corke

While deep learning has had significant successes in computer vision thanks to the abundance of visual data, collecting sufficiently large real-world datasets for robot learning can be costly.

Deep Reinforcement Learning
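
As a point of reference only (the paper's modular decomposition and sim-to-real transfer procedure are not reproduced here), a minimal sketch of the standard deep Q-learning temporal-difference target that such approaches build on, assuming NumPy arrays for a batch of transitions:

```python
import numpy as np

def dqn_td_target(rewards, next_q_values, dones, gamma=0.99):
    """Standard DQN target: y = r + gamma * max_a' Q_target(s', a'),
    with the bootstrap term zeroed for terminal transitions."""
    # next_q_values: (batch, n_actions) Q-values from a frozen target network
    max_next_q = next_q_values.max(axis=1)
    return rewards + gamma * (1.0 - dones) * max_next_q

# Example with dummy data: y = [0.495, 1.0]
y = dqn_td_target(rewards=np.array([0.0, 1.0]),
                  next_q_values=np.array([[0.2, 0.5], [0.1, 0.3]]),
                  dones=np.array([0.0, 1.0]))
```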

Richardson-Lucy Deblurring for Moving Light Field Cameras

no code implementations • 14 Jun 2016 • Donald G. Dansereau, Anders Eriksson, Jürgen Leitner

The method deals correctly with blur caused by 6-degree-of-freedom camera motion in complex 3-D scenes, without performing depth estimation.

Deblurring • Depth Estimation
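
For orientation, the classical single-image Richardson-Lucy iteration that this work extends to moving light field cameras (the light-field and 6-DOF motion handling itself is not reproduced here) can be sketched as follows, assuming a known, spatially invariant PSF:

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(blurred, psf, n_iter=30, eps=1e-12):
    """Classical Richardson-Lucy deconvolution of a single 2-D image
    with a known, spatially invariant point spread function."""
    estimate = np.full(blurred.shape, 0.5, dtype=float)
    psf_mirror = psf[::-1, ::-1]  # correlation expressed as convolution with a flipped PSF
    for _ in range(n_iter):
        reblurred = fftconvolve(estimate, psf, mode="same")   # forward model
        ratio = blurred / (reblurred + eps)                    # measured / predicted
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")
    return estimate
```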
