Search Results for author: Peter Corke

Found 28 papers, 10 papers with code

The Need for Inherently Privacy-Preserving Vision in Trustworthy Autonomous Systems

no code implementations 29 Mar 2023 Adam K. Taras, Niko Suenderhauf, Peter Corke, Donald G. Dansereau

Vision is a popular and effective sensor for robotics from which we can derive rich information about the environment: the geometry and semantics of the scene, as well as the age, gender, identity, activity and even emotional state of humans within that scene.

Privacy Preserving

Learning Fabric Manipulation in the Real World with Human Videos

no code implementations 5 Nov 2022 Robert Lee, Jad Abou-Chakra, Fangyi Zhang, Peter Corke

A promising alternative is to learn fabric manipulation directly from watching humans perform the task.

FSNet: A Failure Detection Framework for Semantic Segmentation

no code implementations 19 Aug 2021 Quazi Marufur Rahman, Niko Sünderhauf, Peter Corke, Feras Dayoub

Semantic segmentation is an important task that helps autonomous vehicles understand their surroundings and navigate safely.

Autonomous Vehicles, Navigate +2

Refractive Light-Field Features for Curved Transparent Objects in Structure from Motion

no code implementations 29 Mar 2021 Dorian Tsai, Peter Corke, Thierry Peynot, Donald G. Dansereau

Curved refractive objects are common in the human environment, and have a complex visual appearance that can cause robotic vision algorithms to fail.

Transparent objects

Semantics for Robotic Mapping, Perception and Interaction: A Survey

no code implementations 2 Jan 2021 Sourav Garg, Niko Sünderhauf, Feras Dayoub, Douglas Morrison, Akansel Cosgun, Gustavo Carneiro, Qi Wu, Tat-Jun Chin, Ian Reid, Stephen Gould, Peter Corke, Michael Milford

In robotics and related research fields, the study of understanding is often referred to as semantics, which concerns what the world "means" to a robot and is strongly tied to the question of how to represent that meaning.

Autonomous Driving, Navigate

A Systematic Approach to Computing the Manipulator Jacobian and Hessian using the Elementary Transform Sequence

1 code implementation 17 Oct 2020 Jesse Haviland, Peter Corke

The elementary transform sequence (ETS) provides a universal method of describing the kinematics of any serial-link manipulator.

Robotics
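
The ETS formulation lends itself to a compact implementation: a manipulator is an ordered product of elementary rotations and translations, some constant and some parameterised by joint variables. Below is a minimal NumPy sketch of this idea for a two-joint planar arm; it is illustrative only, and the names (`ets`, `fkine`) are placeholders rather than the authors' Robotics Toolbox API.

```python
# Minimal sketch of the elementary transform sequence (ETS) idea.
# Illustrative only; not the authors' implementation.
import numpy as np

def rotz(theta):
    """Elementary rotation about z as a 4x4 homogeneous transform."""
    c, s = np.cos(theta), np.sin(theta)
    T = np.eye(4)
    T[:2, :2] = [[c, -s], [s, c]]
    return T

def tx(d):
    """Elementary translation along x."""
    T = np.eye(4)
    T[0, 3] = d
    return T

# ETS of a 2R planar arm with link lengths a1, a2:
# Rz(q0) * tx(a1) * Rz(q1) * tx(a2). None marks a joint variable.
a1, a2 = 0.5, 0.4
ets = [(rotz, None), (tx, a1), (rotz, None), (tx, a2)]

def fkine(ets, q):
    """Forward kinematics: compose the elementary transforms in order."""
    joints = iter(q)
    T = np.eye(4)
    for fn, const in ets:
        T = T @ fn(next(joints) if const is None else const)
    return T

print(fkine(ets, [0.3, -0.2])[:3, 3])  # end-effector position
```

The paper's contribution is showing how the Jacobian and Hessian fall out of the same sequence by differentiating the elementary transforms, which this sketch does not reproduce.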

NEO: A Novel Expeditious Optimisation Algorithm for Reactive Motion Control of Manipulators

1 code implementation 17 Oct 2020 Jesse Haviland, Peter Corke

Additionally, our controller maximises the manipulability of the robot during the trajectory, while avoiding joint position and velocity limits.

Robotics
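
The quantity maximised here is Yoshikawa's manipulability, m(q) = sqrt(det(J(q) J(q)^T)). A hedged sketch of the measure and a numerical gradient follows; NEO itself embeds this objective in a quadratic programme with joint position and velocity limit constraints, which the sketch does not reproduce. `jacobian` is an assumed callable returning the 6xn manipulator Jacobian.

```python
# Yoshikawa manipulability m(q) = sqrt(det(J J^T)) and a numerical
# gradient; `jacobian(q)` is an assumed 6xn Jacobian function.
import numpy as np

def manipulability(jacobian, q):
    J = jacobian(q)
    return np.sqrt(np.linalg.det(J @ J.T))

def manipulability_gradient(jacobian, q, eps=1e-6):
    """Central-difference gradient of m(q), usable as a secondary objective."""
    q = np.asarray(q, dtype=float)
    g = np.zeros_like(q)
    for i in range(q.size):
        dq = np.zeros_like(q)
        dq[i] = eps
        g[i] = (manipulability(jacobian, q + dq)
                - manipulability(jacobian, q - dq)) / (2 * eps)
    return g
```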

Object-Independent Human-to-Robot Handovers using Real Time Robotic Vision

1 code implementation 2 Jun 2020 Patrick Rosenberger, Akansel Cosgun, Rhys Newbury, Jun Kwan, Valerio Ortenzi, Peter Corke, Manfred Grafinger

In experiments with 13 objects, the robot successfully took the object from the human in 81.9% of the trials.

Object Segmentation

EGAD! an Evolved Grasping Analysis Dataset for diversity and reproducibility in robotic manipulation

1 code implementation 3 Mar 2020 Douglas Morrison, Peter Corke, Jürgen Leitner

We present the Evolved Grasping Analysis Dataset (EGAD), comprising over 2000 generated objects aimed at training and evaluating robotic visual grasp detection algorithms.

Robotic Grasping, Robotics

Maximising Manipulability During Resolved-Rate Motion Control

2 code implementations 27 Feb 2020 Jesse Haviland, Peter Corke

Resolved-rate motion control of redundant serial-link manipulators is commonly achieved using the Moore-Penrose pseudoinverse in which the norm of the control input is minimized.

Robotics
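
The baseline described here is the textbook resolved-rate law: joint velocities are obtained from a desired end-effector spatial velocity ν via q̇ = J⁺(q)ν. A minimal sketch, again assuming a `jacobian` callable; the paper's contribution, reshaping this law to also maximise manipulability, is not shown.

```python
# One resolved-rate motion control step using the Moore-Penrose
# pseudoinverse; `jacobian(q)` is an assumed 6xn Jacobian function.
import numpy as np

def rrmc_step(jacobian, q, nu, dt=0.01):
    J = jacobian(q)               # 6 x n Jacobian at configuration q
    qd = np.linalg.pinv(J) @ nu   # minimum-norm joint velocities
    return q + qd * dt            # integrate one control period
```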

Robot Navigation in Unseen Spaces using an Abstract Map

no code implementations 31 Jan 2020 Ben Talbot, Feras Dayoub, Peter Corke, Gordon Wyeth

Symbolic navigation performance of humans and a robot is evaluated in a real-world built environment.

Navigate, Robot Navigation

Control of the Final-Phase of Closed-Loop Visual Grasping using Image-Based Visual Servoing

no code implementations 16 Jan 2020 Jesse Haviland, Feras Dayoub, Peter Corke

IBVS robustly moves the camera to a goal pose defined implicitly in terms of an image-plane feature configuration.

Object, Robotic Grasping +1
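
For context, the classical IBVS law drives the camera with velocity v = -λ L̂⁺e, where e stacks the image-feature errors and L̂ is the estimated interaction matrix. The sketch below uses the standard point-feature interaction matrix and assumes feature depths are known; it illustrates generic IBVS, not the paper's grasping pipeline.

```python
# Generic IBVS: camera velocity v = -gain * pinv(L) @ e for point features.
import numpy as np

def interaction_matrix(x, y, Z):
    """2x6 interaction matrix of a normalised image point (x, y) at depth Z."""
    return np.array([
        [-1 / Z, 0, x / Z, x * y, -(1 + x**2), y],
        [0, -1 / Z, y / Z, 1 + y**2, -x * y, -x],
    ])

def ibvs_velocity(features, goals, depths, gain=0.5):
    """features, goals: (N, 2) normalised image points; depths: length N."""
    e = (np.asarray(features) - np.asarray(goals)).reshape(-1)
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(np.asarray(features), depths)])
    return -gain * np.linalg.pinv(L) @ e   # 6-vector: (vx, vy, vz, wx, wy, wz)
```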

Probabilistic Object Detection: Definition and Evaluation

1 code implementation 27 Nov 2018 David Hall, Feras Dayoub, John Skinner, Haoyang Zhang, Dimity Miller, Peter Corke, Gustavo Carneiro, Anelia Angelova, Niko Sünderhauf

We introduce Probabilistic Object Detection, the task of detecting objects in images and accurately quantifying the spatial and semantic uncertainties of the detections.

Object, object-detection +1
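
A hedged sketch of the pairwise scoring in the paper's probability-based detection quality (PDQ): to my reading, a detection-to-object pair scores the geometric mean of a spatial quality and a label quality, both in [0, 1]. The full spatial term is computed from per-pixel probabilities; here it is omitted and simply passed in.

```python
# Pairwise quality as a geometric mean of spatial and label quality
# (a simplified reading of PDQ; the spatial term is assumed given).
import numpy as np

def pairwise_quality(spatial_quality, class_probs, true_class):
    label_quality = class_probs[true_class]  # probability on the true class
    return np.sqrt(spatial_quality * label_quality)

print(pairwise_quality(0.9, np.array([0.1, 0.8, 0.1]), true_class=1))
```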

Multi-View Picking: Next-best-view Reaching for Improved Grasping in Clutter

3 code implementations International Conference on Robotics and Automation (ICRA) 2019 Douglas Morrison, Peter Corke, Jürgen Leitner

Camera viewpoint selection is an important aspect of visual grasp detection, especially in clutter where many occlusions are present.

Robotics

The Limits and Potentials of Deep Learning for Robotics

no code implementations 18 Apr 2018 Niko Sünderhauf, Oliver Brock, Walter Scheirer, Raia Hadsell, Dieter Fox, Jürgen Leitner, Ben Upcroft, Pieter Abbeel, Wolfram Burgard, Michael Milford, Peter Corke

In this paper we discuss a number of robotics-specific learning, reasoning, and embodiment challenges for deep learning.

Robotics

Adversarial Discriminative Sim-to-real Transfer of Visuo-motor Policies

1 code implementation 18 Sep 2017 Fangyi Zhang, Jürgen Leitner, ZongYuan Ge, Michael Milford, Peter Corke

Policies can be transferred to real environments with only 93 labelled and 186 unlabelled real images.

Visual Servoing from Deep Neural Networks

no code implementations 24 May 2017 Quentin Bateux, Eric Marchand, Jürgen Leitner, Francois Chaumette, Peter Corke

We present a deep neural network-based method to perform high-precision, robust and real-time 6 DOF visual servoing.

Episode-Based Active Learning with Bayesian Neural Networks

no code implementations 21 Mar 2017 Feras Dayoub, Niko Sünderhauf, Peter Corke

We investigate different strategies for active learning with Bayesian deep neural networks.

Active Learning
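
One generic strategy in this family, not necessarily among those the paper compares, is to rank unlabelled samples by predictive entropy estimated with Monte Carlo dropout. A sketch, assuming a `model` callable that returns class probabilities with dropout left active at inference:

```python
# Predictive-entropy acquisition via Monte Carlo dropout; a `model(x,
# training=True)` call returning (N, num_classes) probabilities is assumed.
import numpy as np

def predictive_entropy(model, x, n_samples=20):
    probs = np.mean([model(x, training=True) for _ in range(n_samples)], axis=0)
    return -np.sum(probs * np.log(probs + 1e-12), axis=-1)

# Query the most uncertain samples first:
# scores = predictive_entropy(model, unlabelled_pool)
# query_idx = np.argsort(scores)[::-1][:batch_size]
```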

Mirrored Light Field Video Camera Adapter

no code implementations 16 Dec 2016 Dorian Tsai, Donald G. Dansereau, Steve Martin, Peter Corke

This paper proposes the design of a custom mirror-based light field camera adapter that is cheap, simple in construction, and accessible.

Modular Deep Q Networks for Sim-to-real Transfer of Visuo-motor Policies

no code implementations 21 Oct 2016 Fangyi Zhang, Jürgen Leitner, Michael Milford, Peter Corke

While deep learning has had significant successes in computer vision thanks to the abundance of visual data, collecting sufficiently large real-world datasets for robot learning can be costly.

Exploiting Temporal Information for DCNN-based Fine-Grained Object Classification

no code implementations 1 Aug 2016 ZongYuan Ge, Chris McCool, Conrad Sanderson, Peng Wang, Lingqiao Liu, Ian Reid, Peter Corke

Fine-grained classification is a relatively new field that has concentrated on using information from a single image, while ignoring the enormous potential of using video data to improve classification.

Classification, General Classification

Fine-Grained Classification via Mixture of Deep Convolutional Neural Networks

no code implementations 30 Nov 2015 ZongYuan Ge, Alex Bewley, Christopher Mccool, Ben Upcroft, Peter Corke, Conrad Sanderson

We present a novel deep convolutional neural network (DCNN) system for fine-grained image classification, called a mixture of DCNNs (MixDCNN).

Classification, Fine-Grained Image Classification +1
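
As described in the paper, the components are combined through occupation probabilities: each DCNN's best class score is softmaxed across components, and the final prediction is the occupation-weighted sum of the component outputs. A short sketch of that combination rule:

```python
# MixDCNN combination: occupation probabilities from each component's
# best class score, then an occupation-weighted sum of class scores.
import numpy as np

def mixdcnn_predict(component_scores):
    """component_scores: (K, num_classes), one row per DCNN component."""
    best = component_scores.max(axis=1)   # C_k: best score per component
    w = np.exp(best - best.max())         # stabilised softmax across components
    alpha = w / w.sum()                   # occupation probabilities
    return alpha @ component_scores       # combined class scores
```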

Subset Feature Learning for Fine-Grained Category Classification

no code implementations 9 May 2015 Zongyuan Ge, Christopher Mccool, Conrad Sanderson, Peter Corke

Fine-grained categorisation has been a challenging problem due to small inter-class variation, large intra-class variation, and the low number of training images.

Classification, General Classification +1
