1 code implementation • 3 Mar 2020 • Douglas Morrison, Peter Corke, Jürgen Leitner
We present the Evolved Grasping Analysis Dataset (EGAD), comprising over 2000 generated objects aimed at training and evaluating robotic visual grasp detection algorithms.
Robotic Grasping • Robotics
1 code implementation • 20 Nov 2019 • Vibhavari Dasagi, Robert Lee, Jake Bruce, Jürgen Leitner
Deep reinforcement learning has been shown to solve challenging tasks where large amounts of training experience are available, usually obtained online while learning the task.
no code implementations • 1 Nov 2019 • Timo Korthals, Malte Schilling, Jürgen Leitner
This contribution addresses the interplay between a multi-modal variational autoencoder and an environment, yielding a perceived environment on which an agent can act.
no code implementations • 9 Oct 2019 • Vibhavari Dasagi, Jake Bruce, Thierry Peynot, Jürgen Leitner
When learning behavior, training data is often generated by the learner itself; this can result in unstable training dynamics, a problem that is particularly important in safety-sensitive real-world control tasks such as robotics.
no code implementations • ICLR 2019 • Timo Korthals, Marc Hesse, Jürgen Leitner
The application of multi-modal generative models by means of a Variational Autoencoder (VAE) is an emerging research topic for sensor fusion and bi-directional modality exchange.
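Bi-directional modality exchange of the kind described above is often realised by giving each modality its own encoder and decoder around a shared latent space, fusing the per-modality Gaussian posteriors with a product of experts. Below is a minimal PyTorch sketch of that common formulation; the layer sizes, modality dimensions, and fusion rule are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class MultiModalVAE(nn.Module):
    """Two-modality VAE sketch: each modality gets its own encoder/decoder;
    Gaussian experts are fused in the shared latent space."""

    def __init__(self, dim_a=64, dim_b=32, latent=16):
        super().__init__()
        self.enc_a = nn.Sequential(nn.Linear(dim_a, 128), nn.ReLU(), nn.Linear(128, 2 * latent))
        self.enc_b = nn.Sequential(nn.Linear(dim_b, 128), nn.ReLU(), nn.Linear(128, 2 * latent))
        self.dec_a = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(), nn.Linear(128, dim_a))
        self.dec_b = nn.Sequential(nn.Linear(latent, 128), nn.ReLU(), nn.Linear(128, dim_b))

    def poe(self, mus, logvars):
        # Product of Gaussian experts, including a standard-normal prior expert.
        prec = [torch.ones_like(mus[0])] + [lv.exp().reciprocal() for lv in logvars]
        mu = sum(m * p for m, p in zip([torch.zeros_like(mus[0])] + mus, prec)) / sum(prec)
        var = 1.0 / sum(prec)
        return mu, var.log()

    def forward(self, x_a=None, x_b=None):
        mus, logvars = [], []
        if x_a is not None:
            mu, lv = self.enc_a(x_a).chunk(2, dim=-1)
            mus.append(mu); logvars.append(lv)
        if x_b is not None:
            mu, lv = self.enc_b(x_b).chunk(2, dim=-1)
            mus.append(mu); logvars.append(lv)
        mu, logvar = self.poe(mus, logvars)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        return self.dec_a(z), self.dec_b(z), mu, logvar

# Bi-directional exchange: encode modality A alone, decode both modalities.
model = MultiModalVAE()
recon_a, recon_b, mu, logvar = model(x_a=torch.randn(8, 64))
```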
no code implementations • 5 Nov 2018 • Jack Collins, David Howard, Jürgen Leitner
We quantify the accuracy of various simulators against a real-world robotic reaching and interaction task.
Robotics
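One simple way to quantify simulator fidelity on a reaching task is a per-timestep error between simulated and real end-effector trajectories. The NumPy sketch below uses RMSE; the metric choice is an illustrative assumption, not necessarily the paper's evaluation protocol.

```python
import numpy as np

def trajectory_rmse(sim_xyz, real_xyz):
    """RMSE between time-aligned end-effector positions (T x 3 arrays, metres)."""
    sim_xyz, real_xyz = np.asarray(sim_xyz), np.asarray(real_xyz)
    assert sim_xyz.shape == real_xyz.shape
    return float(np.sqrt(np.mean(np.sum((sim_xyz - real_xyz) ** 2, axis=1))))

# Example: compare one simulator's reach trajectory against the real robot's.
t = np.linspace(0, 1, 50)
real = np.stack([t, t ** 2, np.zeros_like(t)], axis=1)
sim = real + np.random.normal(scale=0.005, size=real.shape)  # ~5 mm sim error
print(f"RMSE: {trajectory_rmse(sim, real) * 1000:.1f} mm")
```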
3 code implementations • International Conference on Robotics and Automation (ICRA) 2019 • Douglas Morrison, Peter Corke, Jürgen Leitner
Camera viewpoint selection is an important aspect of visual grasp detection, especially in clutter where many occlusions are present.
Robotics
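Viewpoint selection of this kind is commonly cast as next-best-view planning: score candidate camera poses by how much they are expected to reduce uncertainty in a grasp-quality map. The sketch below is a toy version of that idea; the belief representation and the `toy_update` forward model are hypothetical placeholders, not the paper's formulation.

```python
import numpy as np

def entropy(p):
    """Binary entropy of a per-cell grasp-quality belief map, summed over cells."""
    p = np.clip(p, 1e-6, 1 - 1e-6)
    return float(-(p * np.log(p) + (1 - p) * np.log(1 - p)).sum())

def next_best_view(belief, candidate_views, predict_update):
    """Pick the view whose predicted observation most reduces map entropy.

    predict_update(belief, view) -> predicted posterior belief map
    (a hypothetical forward model supplied by the caller)."""
    h0 = entropy(belief)
    gains = [h0 - entropy(predict_update(belief, v)) for v in candidate_views]
    return candidate_views[int(np.argmax(gains))]

# Toy forward model: assume a view resolves uncertainty in the patch it sees.
def toy_update(belief, view):
    updated = belief.copy()
    r, c = view
    patch = updated[r:r + 8, c:c + 8]
    updated[r:r + 8, c:c + 8] = np.where(patch > 0.5, 0.9, 0.1)
    return updated

belief = np.full((32, 32), 0.5)          # maximally uncertain grasp map
views = [(0, 0), (8, 8), (16, 16)]
print(next_best_view(belief, views, toy_update))
```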
no code implementations • 20 Sep 2018 • Vibhavari Dasagi, Robert Lee, Serena Mou, Jake Bruce, Niko Sünderhauf, Jürgen Leitner
Current end-to-end deep Reinforcement Learning (RL) approaches require jointly learning perception, decision-making and low-level control from very sparse reward signals and high-dimensional inputs, with little ability to incorporate prior knowledge.
no code implementations • 12 Sep 2018 • Timo Korthals, Jürgen Leitner, Ulrich Rückert
We investigate a reinforcement learning approach for distributed sensing based on the latent space derived from multi-modal deep generative models.
no code implementations • 18 Apr 2018 • Niko Sünderhauf, Oliver Brock, Walter Scheirer, Raia Hadsell, Dieter Fox, Jürgen Leitner, Ben Upcroft, Pieter Abbeel, Wolfram Burgard, Michael Milford, Peter Corke
In this paper we discuss a number of robotics-specific learning, reasoning, and embodiment challenges for deep learning.
Robotics
8 code implementations • Robotics: Science and Systems (RSS) 2018 • Douglas Morrison, Peter Corke, Jürgen Leitner
This paper presents a real-time, object-independent grasp synthesis method which can be used for closed-loop grasping.
Ranked #6 on Robotic Grasping on Cornell Grasp Dataset
Robotics
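Per-pixel grasp synthesis of this kind can be expressed as a small fully-convolutional network that maps a depth image directly to grasp quality, angle, and gripper-width maps, which is what makes closed-loop, real-time use feasible. The PyTorch sketch below follows that structure; layer sizes are illustrative, not the published architecture.

```python
import torch
import torch.nn as nn

class GraspNet(nn.Module):
    """Fully-convolutional sketch: depth image in, per-pixel grasp maps out.
    Angle is predicted as (cos 2θ, sin 2θ) to handle its π-periodicity."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 5, padding=2), nn.ReLU(),
            nn.Conv2d(16, 16, 5, padding=2), nn.ReLU(),
        )
        self.quality = nn.Conv2d(16, 1, 1)  # grasp success likelihood per pixel
        self.cos2t = nn.Conv2d(16, 1, 1)
        self.sin2t = nn.Conv2d(16, 1, 1)
        self.width = nn.Conv2d(16, 1, 1)    # gripper width per pixel

    def forward(self, depth):
        f = self.features(depth)
        angle = 0.5 * torch.atan2(self.sin2t(f), self.cos2t(f))
        return torch.sigmoid(self.quality(f)), angle, self.width(f)

# Closed-loop use: re-run on every new depth frame and servo the gripper
# toward the current best-quality pixel.
net = GraspNet()
q, theta, w = net(torch.randn(1, 1, 300, 300))
best = torch.nonzero(q[0, 0] == q[0, 0].max())[0]  # (row, col) of best grasp
```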
1 code implementation • 18 Sep 2017 • Fangyi Zhang, Jürgen Leitner, ZongYuan Ge, Michael Milford, Peter Corke
Policies can be transferred to real environments with only 93 labelled and 186 unlabelled real images.
no code implementations • 24 May 2017 • Quentin Bateux, Eric Marchand, Jürgen Leitner, Francois Chaumette, Peter Corke
We present a deep neural network-based method to perform high-precision, robust, and real-time 6-DOF visual servoing.
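The control side of such a method is compact: given a network that regresses the 6-DOF pose error between the current and desired views, an exponential decrease of that error yields the velocity command (the classical law v = -λe). A sketch, with the pose-regression CNN left as a stub:

```python
import numpy as np

def servo_step(predict_relative_pose, image, gain=0.5):
    """One visual-servoing iteration (sketch).

    predict_relative_pose: callable mapping an image to a 6-vector
    (tx, ty, tz, rx, ry, rz) from the current to the desired camera pose;
    here it stands in for a trained CNN."""
    error = predict_relative_pose(image)  # 6-DOF pose error
    velocity = -gain * error              # classical control law v = -λe
    return velocity, float(np.linalg.norm(error))

# Toy stand-in for the CNN: pretend the camera is 5 cm off along x.
fake_cnn = lambda img: np.array([0.05, 0.0, 0.0, 0.0, 0.0, 0.0])
v, err = servo_step(fake_cnn, image=None)
print(v, err)
```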
no code implementations • 15 May 2017 • Fangyi Zhang, Jürgen Leitner, Michael Milford, Peter I. Corke
This paper introduces an end-to-end fine-tuning method to improve hand-eye coordination in modular deep visuo-motor policies (modular networks) where each module is trained independently.
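A minimal sketch of the modular setup: a perception module and a control module are pretrained independently, then joined into one visuo-motor policy and fine-tuned end to end with a small learning rate. The module definitions and sizes below are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Stand-in modules (illustrative sizes): perception maps images to object
# poses, control maps poses to joint velocities; each is pretrained alone.
perception = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128),
                           nn.ReLU(), nn.Linear(128, 3))
control = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 7))

policy = nn.Sequential(perception, control)  # joined visuo-motor policy

# End-to-end fine-tuning: one optimizer over both modules, low learning rate
# so the independently trained weights are adjusted rather than overwritten.
opt = torch.optim.Adam(policy.parameters(), lr=1e-5)
images = torch.randn(16, 1, 64, 64)
target_vel = torch.randn(16, 7)
loss = nn.functional.mse_loss(policy(images), target_vel)
opt.zero_grad(); loss.backward(); opt.step()
```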
no code implementations • 21 Oct 2016 • Fangyi Zhang, Jürgen Leitner, Michael Milford, Peter Corke
While deep learning has had significant successes in computer vision thanks to the abundance of visual data, collecting sufficiently large real-world datasets for robot learning can be costly.
1 code implementation • 17 Sep 2016 • Jürgen Leitner, Adam W. Tow, Jake E. Dean, Niko Suenderhauf, Joseph W. Durham, Matthew Cooper, Markus Eich, Christopher Lehnert, Ruben Mangels, Christopher McCool, Peter Kujala, Lachlan Nicholson, Trung Pham, James Sergeant, Liao Wu, Fangyi Zhang, Ben Upcroft, Peter Corke
We present a new physical benchmark challenge for robotic picking: the ACRV Picking Benchmark (APB).
no code implementations • 14 Jun 2016 • Donald G. Dansereau, Anders Eriksson, Jürgen Leitner
The method deals correctly with blur caused by 6-degree-of-freedom camera motion in complex 3-D scenes, without performing depth estimation.
no code implementations • 12 Nov 2015 • Fangyi Zhang, Jürgen Leitner, Michael Milford, Ben Upcroft, Peter Corke
This paper introduces a machine learning-based system for controlling a robotic manipulator using visual perception only.
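A hedged sketch of that vision-only setup: a small convolutional Q-network maps raw camera frames to action values over discretised joint commands, and the greedy action drives the arm. The architecture and action set are illustrative, not the paper's.

```python
import torch
import torch.nn as nn

class VisuomotorQNet(nn.Module):
    """Image in, one Q-value per discrete joint command out (sketch)."""

    def __init__(self, n_actions=9):  # hypothetical discretised action set
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, 4, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 9 * 9, 256), nn.ReLU(),
            nn.Linear(256, n_actions),
        )

    def forward(self, frame):
        return self.net(frame)

# Greedy action selection from a single 84x84 camera frame.
qnet = VisuomotorQNet()
q_values = qnet(torch.randn(1, 1, 84, 84))
action = int(q_values.argmax(dim=1))
```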