no code implementations • 29 Mar 2023 • Adam K. Taras, Niko Suenderhauf, Peter Corke, Donald G. Dansereau
Vision is a popular and effective sensor for robotics from which we can derive rich information about the environment: the geometry and semantics of the scene, as well as the age, gender, identity, activity and even emotional state of humans within that scene.
no code implementations • 5 Nov 2022 • Robert Lee, Jad Abou-Chakra, Fangyi Zhang, Peter Corke
A promising alternative is to learn fabric manipulation directly from watching humans perform the task.
no code implementations • 19 Aug 2021 • Quazi Marufur Rahman, Niko Sünderhauf, Peter Corke, Feras Dayoub
Semantic segmentation is an important task that helps autonomous vehicles understand their surroundings and navigate safely.
no code implementations • 29 Mar 2021 • Dorian Tsai, Peter Corke, Thierry Peynot, Donald G. Dansereau
Curved refractive objects are common in the human environment, and have a complex visual appearance that can cause robotic vision algorithms to fail.
no code implementations • 2 Jan 2021 • Sourav Garg, Niko Sünderhauf, Feras Dayoub, Douglas Morrison, Akansel Cosgun, Gustavo Carneiro, Qi Wu, Tat-Jun Chin, Ian Reid, Stephen Gould, Peter Corke, Michael Milford
In robotics and related research fields, the study of understanding is often referred to as semantics, which concerns what the world "means" to a robot, and is strongly tied to the question of how to represent that meaning.
1 code implementation • 17 Oct 2020 • Jesse Haviland, Peter Corke
The elementary transform sequence (ETS) provides a universal method of describing the kinematics of any serial-link manipulator.
Robotics
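The idea behind an ETS is that any serial-link manipulator's kinematics can be written as a product of elementary transforms, each a pure rotation or translation about a single axis, parameterised by either a constant or a joint variable. A minimal sketch of this composition for an illustrative two-link planar arm (link lengths are made-up values, and this is not the paper's implementation):

```python
import numpy as np

def Rz(theta):
    """Elementary transform: rotation by theta about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0, 0],
                     [s,  c, 0, 0],
                     [0,  0, 1, 0],
                     [0,  0, 0, 1]])

def Tx(d):
    """Elementary transform: translation by d along the x-axis."""
    T = np.eye(4)
    T[0, 3] = d
    return T

def fkine(q, a1=1.0, a2=1.0):
    """Forward kinematics of a 2-link planar arm as the ETS
    Rz(q0) Tx(a1) Rz(q1) Tx(a2)."""
    return Rz(q[0]) @ Tx(a1) @ Rz(q[1]) @ Tx(a2)

T = fkine([np.pi / 2, 0.0])
print(T[:3, 3])  # end-effector position
```

Longer chains follow the same pattern: append more elementary transforms to the product, with joint variables appearing wherever a joint actuates the chain.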
1 code implementation • 17 Oct 2020 • Jesse Haviland, Peter Corke
Additionally, our controller maximises the manipulability of the robot during the trajectory, while avoiding joint position and velocity limits.
Robotics
1 code implementation • 2 Jun 2020 • Patrick Rosenberger, Akansel Cosgun, Rhys Newbury, Jun Kwan, Valerio Ortenzi, Peter Corke, Manfred Grafinger
In experiments with 13 objects, the robot was able to successfully take the object from the human in 81.9% of the trials.
1 code implementation • 3 Mar 2020 • Douglas Morrison, Peter Corke, Jürgen Leitner
We present the Evolved Grasping Analysis Dataset (EGAD), comprising over 2000 generated objects aimed at training and evaluating robotic visual grasp detection algorithms.
Robotic Grasping • Robotics
2 code implementations • 27 Feb 2020 • Jesse Haviland, Peter Corke
Resolved-rate motion control of redundant serial-link manipulators is commonly achieved using the Moore-Penrose pseudoinverse in which the norm of the control input is minimized.
Robotics
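The pseudoinverse formulation mentioned above is the classical control law q̇ = J⁺(q) ν: among all joint-velocity vectors that achieve the desired end-effector velocity ν, the Moore-Penrose pseudoinverse selects the one of minimum norm. A minimal sketch using an illustrative random Jacobian for a redundant 7-joint arm (values are not from any particular robot):

```python
import numpy as np

def rrmc_step(J, nu):
    """One resolved-rate motion control step: map a desired end-effector
    spatial velocity nu (6-vector) to joint velocities via the
    Moore-Penrose pseudoinverse, giving the minimum-norm solution."""
    return np.linalg.pinv(J) @ nu

# Illustrative 6x7 manipulator Jacobian (full row rank almost surely).
rng = np.random.default_rng(0)
J = rng.standard_normal((6, 7))

nu = np.array([0.1, 0.0, 0.0, 0.0, 0.0, 0.0])  # 0.1 m/s along x
qd = rrmc_step(J, nu)
print(qd)      # joint velocities
print(J @ qd)  # achieved end-effector velocity
```

Because J has more columns than rows, infinitely many q̇ satisfy J q̇ = ν; the pseudoinverse's minimum-norm choice is exactly the behaviour whose side effects (e.g. ignoring manipulability and joint limits) motivate the alternative controllers discussed in work such as this.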
no code implementations • 31 Jan 2020 • Ben Talbot, Feras Dayoub, Peter Corke, Gordon Wyeth
Symbolic navigation performance of humans and a robot is evaluated in a real-world built environment.
no code implementations • 16 Jan 2020 • Jesse Haviland, Feras Dayoub, Peter Corke
IBVS robustly moves the camera to a goal pose defined implicitly in terms of an image-plane feature configuration.
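The IBVS loop described above can be sketched with the classical point-feature control law v = -λ L⁺ (s - s*), where L is the image interaction matrix; this is the textbook formulation from the visual-servoing literature, not the specific controller of this paper, and the feature values below are illustrative:

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix of a point feature at normalised image
    coordinates (x, y) with depth Z (classical point-feature form)."""
    return np.array([
        [-1.0/Z, 0.0, x/Z, x*y, -(1.0 + x**2), y],
        [0.0, -1.0/Z, y/Z, 1.0 + y**2, -x*y, -x],
    ])

def ibvs_velocity(features, goal, depths, lam=0.5):
    """Camera velocity command v = -lam * pinv(L) @ (s - s*)."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    e = (np.asarray(features) - np.asarray(goal)).ravel()
    return -lam * np.linalg.pinv(L) @ e

pts = [(0.1, 0.0), (0.0, 0.1), (-0.1, -0.1)]
v = ibvs_velocity(pts, pts, depths=[1.0, 1.0, 1.0])
print(v)  # zero velocity once features reach the goal configuration
```

Note the goal is defined purely in the image plane: the camera pose that achieves it is never computed explicitly, which is the property the abstract refers to as an implicitly defined goal pose.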
no code implementations • 8 Jan 2020 • Peter Corke, Feras Dayoub, David Hall, John Skinner, Niko Sünderhauf
The computer vision and robotics research communities are each strong.
1 code implementation • 27 Nov 2018 • David Hall, Feras Dayoub, John Skinner, Haoyang Zhang, Dimity Miller, Peter Corke, Gustavo Carneiro, Anelia Angelova, Niko Sünderhauf
We introduce Probabilistic Object Detection, the task of detecting objects in images and accurately quantifying the spatial and semantic uncertainties of the detections.
3 code implementations • International Conference on Robotics and Automation (ICRA) 2019 • Douglas Morrison, Peter Corke, Jürgen Leitner
Camera viewpoint selection is an important aspect of visual grasp detection, especially in clutter where many occlusions are present.
Robotics
no code implementations • 18 Apr 2018 • Niko Sünderhauf, Oliver Brock, Walter Scheirer, Raia Hadsell, Dieter Fox, Jürgen Leitner, Ben Upcroft, Pieter Abbeel, Wolfram Burgard, Michael Milford, Peter Corke
In this paper we discuss a number of robotics-specific learning, reasoning, and embodiment challenges for deep learning.
Robotics
8 code implementations • Robotics: Science and Systems (RSS) 2018 • Douglas Morrison, Peter Corke, Jürgen Leitner
This paper presents a real-time, object-independent grasp synthesis method which can be used for closed-loop grasping.
Ranked #6 on Robotic Grasping on Cornell Grasp Dataset
Robotics
1 code implementation • 18 Sep 2017 • Fangyi Zhang, Jürgen Leitner, ZongYuan Ge, Michael Milford, Peter Corke
Policies can be transferred to real environments with only 93 labelled and 186 unlabelled real images.
no code implementations • 24 May 2017 • Quentin Bateux, Eric Marchand, Jürgen Leitner, Francois Chaumette, Peter Corke
We present a deep neural network-based method to perform high-precision, robust and real-time 6 DOF visual servoing.
no code implementations • 21 Mar 2017 • Feras Dayoub, Niko Sünderhauf, Peter Corke
We investigate different strategies for active learning with Bayesian deep neural networks.
no code implementations • 16 Dec 2016 • Dorian Tsai, Donald G. Dansereau, Steve Martin, Peter Corke
This paper proposes the design of a custom mirror-based light field camera adapter that is cheap, simple in construction, and accessible.
no code implementations • 21 Oct 2016 • Fangyi Zhang, Jürgen Leitner, Michael Milford, Peter Corke
While deep learning has had significant successes in computer vision thanks to the abundance of visual data, collecting sufficiently large real-world datasets for robot learning can be costly.
1 code implementation • 17 Sep 2016 • Jürgen Leitner, Adam W. Tow, Jake E. Dean, Niko Suenderhauf, Joseph W. Durham, Matthew Cooper, Markus Eich, Christopher Lehnert, Ruben Mangels, Christopher McCool, Peter Kujala, Lachlan Nicholson, Trung Pham, James Sergeant, Liao Wu, Fangyi Zhang, Ben Upcroft, Peter Corke
We present a new physical benchmark challenge for robotic picking: the ACRV Picking Benchmark (APB).
no code implementations • 1 Aug 2016 • ZongYuan Ge, Chris McCool, Conrad Sanderson, Peng Wang, Lingqiao Liu, Ian Reid, Peter Corke
Fine-grained classification is a relatively new field that has concentrated on using information from a single image, while ignoring the enormous potential of using video data to improve classification.
no code implementations • 30 Nov 2015 • ZongYuan Ge, Alex Bewley, Christopher McCool, Ben Upcroft, Peter Corke, Conrad Sanderson
We present a novel deep convolutional neural network (DCNN) system for fine-grained image classification, called a mixture of DCNNs (MixDCNN).
no code implementations • 12 Nov 2015 • Fangyi Zhang, Jürgen Leitner, Michael Milford, Ben Upcroft, Peter Corke
This paper introduces a machine learning based system for controlling a robotic manipulator with visual perception only.
no code implementations • 9 May 2015 • ZongYuan Ge, Christopher McCool, Conrad Sanderson, Peter Corke
Fine-grained categorisation has been a challenging problem due to small inter-class variation, large intra-class variation, and the low number of training images.
no code implementations • 27 Feb 2015 • ZongYuan Ge, Chris McCool, Conrad Sanderson, Peter Corke
We propose a local modelling approach using deep convolutional neural networks (CNNs) for fine-grained image classification.