no code implementations • 14 Mar 2024 • Jiajun Deng, Sha Zhang, Feras Dayoub, Wanli Ouyang, Yanyong Zhang, Ian Reid
In this work, we present PoIFusion, a simple yet effective multi-modal 3D object detection framework to fuse the information of RGB images and LiDAR point clouds at the point of interest (abbreviated as PoI).
1 code implementation • 10 Jan 2024 • Prakash Mallick, Feras Dayoub, Jamie Sherrah
We present a novel approach that effectively identifies unknown objects by distinguishing between high and low-density regions in latent space.
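As a rough illustration of the density idea only (not the paper's actual model), the sketch below fits a toy Gaussian kernel density over 1-D latent codes of known objects and flags samples in low-density regions as unknown; the bandwidth and threshold values are hypothetical.

```python
import math

def gaussian_kde_score(x, train_feats, bandwidth=1.0):
    """Average Gaussian kernel response of x against training features
    (1-D latent codes, for simplicity)."""
    n = len(train_feats)
    return sum(
        math.exp(-((x - f) ** 2) / (2 * bandwidth ** 2)) for f in train_feats
    ) / n

def is_unknown(x, train_feats, threshold=0.1):
    """Flag a sample as an unknown object when it falls in a low-density
    region of the latent space (hypothetical threshold)."""
    return gaussian_kde_score(x, train_feats) < threshold

# Toy 1-D latent space: known objects cluster near 0.
known = [-0.2, 0.0, 0.1, 0.3, -0.1]
print(is_unknown(0.05, known))   # in-distribution -> False
print(is_unknown(5.0, known))    # far from the training density -> True
```

In practice the latent space is high-dimensional and the density model is learnt, but the decision rule, thresholding on estimated density, is the same.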
no code implementations • 14 Dec 2023 • Renjie Wu, Hu Wang, Feras Dayoub, Hsiang-Ting Chen
The model consists of a vision teacher utilising panoramic information, an auditory teacher with 8-channel audio, and an audio-visual student that takes views with limited FoV and binaural audio as input and produces semantic segmentation for objects outside the FoV.
no code implementations • 30 Oct 2023 • Xiangyu Shi, Yanyuan Qiao, Qi Wu, Lingqiao Liu, Feras Dayoub
Effective object detection in mobile robots is challenged by deployment in diverse and unfamiliar environments.
1 code implementation • 2 May 2023 • Lachlan Holden, Feras Dayoub, David Harvey, Tat-Jun Chin
The ability of neural radiance fields (NeRFs) to conduct accurate 3D modelling has motivated the application of the technique to scene representation.
no code implementations • 27 Mar 2023 • David Pershouse, Feras Dayoub, Dimity Miller, Niko Sünderhauf
We address the challenging problem of open world object detection (OWOD), where object detectors must identify objects from known classes while also identifying and continually learning to detect novel objects.
1 code implementation • 13 Feb 2023 • Nicolas Harvey Chapman, Feras Dayoub, Will Browne, Christopher Lehnert
Motivated by this, we propose a framework for explicitly addressing class distribution shift to improve pseudo-label reliability in self-training.
no code implementations • 8 Nov 2022 • Jad Abou-Chakra, Feras Dayoub, Niko Sünderhauf
ParticleNeRF is the first online dynamic NeRF and achieves fast adaptability with better visual fidelity than brute-force online InstantNGP and other baseline approaches on dynamic scenes with online constraints.
1 code implementation • ICCV 2023 • Samuel Wilson, Tobias Fischer, Feras Dayoub, Dimity Miller, Niko Sünderhauf
We address the problem of out-of-distribution (OOD) detection for the task of object detection.
1 code implementation • 10 Dec 2021 • Samuel Wilson, Tobias Fischer, Niko Sünderhauf, Feras Dayoub
We introduce powerful ideas from Hyperdimensional Computing into the challenging field of Out-of-Distribution (OOD) detection.
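A minimal sketch of the hyperdimensional-computing ingredients named here (not the paper's actual method): class features become bipolar hypervectors, each class is bundled into a prototype by majority vote, and a test vector's negative maximum cosine similarity to the prototypes serves as an OOD score. Dimensions and noise levels are illustrative.

```python
import random

DIM = 2048
random.seed(0)

def random_hv():
    """Random bipolar hypervector in {-1, +1}^DIM."""
    return [random.choice((-1, 1)) for _ in range(DIM)]

def bundle(hvs):
    """Majority-vote bundling of member hypervectors into one prototype."""
    return [1 if sum(col) >= 0 else -1 for col in zip(*hvs)]

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b)) / DIM

def noisy(hv, flips=200):
    """Simulate intra-class variation by flipping a few components."""
    out = hv[:]
    for i in random.sample(range(DIM), flips):
        out[i] = -out[i]
    return out

# Two known classes, each a bundle of noisy member vectors.
base_a, base_b = random_hv(), random_hv()
proto_a = bundle([noisy(base_a) for _ in range(5)])
proto_b = bundle([noisy(base_b) for _ in range(5)])

def ood_score(hv):
    """Negative max similarity to any class prototype: high -> likely OOD."""
    return -max(cosine(hv, proto_a), cosine(hv, proto_b))

in_dist = noisy(base_a)
out_dist = random_hv()
print(ood_score(in_dist) < ood_score(out_dist))  # True
```

The high dimensionality does the work: random hypervectors are nearly orthogonal, so an OOD sample scores close to zero similarity against every prototype while in-distribution samples stay strongly correlated with theirs.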
no code implementations • 19 Aug 2021 • Quazi Marufur Rahman, Niko Sünderhauf, Peter Corke, Feras Dayoub
Semantic segmentation is an important task that helps autonomous vehicles understand their surroundings and navigate safely.
no code implementations • 24 Jul 2021 • Olga Moskvyak, Frederic Maire, Feras Dayoub, Mahsa Baktashmotlagh
To reduce the need for labeled data, we focus on a semi-supervised approach that requires only a subset of the training data to be labeled.
1 code implementation • 3 Apr 2021 • Dimity Miller, Niko Sünderhauf, Michael Milford, Feras Dayoub
We also introduce a methodology for converting existing object detection datasets into specific open-set datasets to evaluate open-set performance in object detection.
no code implementations • ICLR 2021 • Olga Moskvyak, Frederic Maire, Feras Dayoub, Mahsa Baktashmotlagh
Keypoint representations are learnt with a semantic keypoint consistency constraint that forces the keypoint detection network to learn similar features for the same keypoint across the dataset.
no code implementations • 2 Jan 2021 • Sourav Garg, Niko Sünderhauf, Feras Dayoub, Douglas Morrison, Akansel Cosgun, Gustavo Carneiro, Qi Wu, Tat-Jun Chin, Ian Reid, Stephen Gould, Peter Corke, Michael Milford
In robotics and related research fields, the study of understanding is often referred to as semantics, which dictates what the world "means" to a robot, and is strongly tied to the question of how to represent that meaning.
2 code implementations • 23 Dec 2020 • Haoyang Zhang, Ying Wang, Feras Dayoub, Niko Sünderhauf
In this technical report, we systematically investigate the effects of applying SWA to object detection as well as instance segmentation.
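The core of Stochastic Weight Averaging is independent of the task: uniformly average the parameters of several checkpoints saved along the training trajectory. A toy sketch under simplifying assumptions (parameters as plain Python lists, hypothetical parameter names):

```python
def swa_average(checkpoints):
    """Stochastic Weight Averaging: uniformly average the parameters of
    several checkpoints saved along the training trajectory.
    Checkpoints are dicts mapping parameter name -> list of floats."""
    n = len(checkpoints)
    return {
        name: [sum(ckpt[name][i] for ckpt in checkpoints) / n
               for i in range(len(checkpoints[0][name]))]
        for name in checkpoints[0]
    }

# Toy detector "weights" from three epochs of extra training.
ckpts = [
    {"head.bias": [0.9, 1.1]},
    {"head.bias": [1.1, 0.9]},
    {"head.bias": [1.0, 1.0]},
]
print(swa_average(ckpts))  # {'head.bias': [1.0, 1.0]}
```

For a real detector the averaging runs over full state dicts (typically with batch-norm statistics recomputed afterwards), but the arithmetic is exactly this elementwise mean.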
no code implementations • 16 Nov 2020 • Quazi Marufur Rahman, Niko Sünderhauf, Feras Dayoub
During deployment, an object detector is expected to operate at a similar performance level reported on its testing dataset.
1 code implementation • 18 Sep 2020 • Quazi Marufur Rahman, Niko Sünderhauf, Feras Dayoub
Performance monitoring of object detection is crucial for safety-critical applications such as autonomous vehicles that operate under varying and complex environmental conditions.
4 code implementations • CVPR 2021 • Haoyang Zhang, Ying Wang, Feras Dayoub, Niko Sünderhauf
In this paper, we propose to learn an IoU-aware Classification Score (IACS) as a joint representation of object presence confidence and localization accuracy.
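The training target behind such a joint score can be sketched simply: for a positive sample, the classification branch is supervised with the IoU between the predicted box and its ground truth rather than a hard 1, and negatives get 0. The following is an illustrative reconstruction, not the paper's exact training code:

```python
def iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(box_a) + area(box_b) - inter
    return inter / union if union > 0 else 0.0

def iacs_target(pred_box, gt_box, is_positive):
    """Training target for an IoU-aware classification score: the IoU of
    the predicted box with its ground truth for positives, 0 for
    negatives, so one score jointly encodes presence and localization
    quality."""
    return iou(pred_box, gt_box) if is_positive else 0.0

gt = (0, 0, 10, 10)
print(iacs_target((0, 0, 10, 10), gt, True))   # perfect box -> 1.0
print(iacs_target((0, 0, 5, 10), gt, True))    # half overlap -> 0.5
print(iacs_target((0, 0, 10, 10), gt, False))  # negative -> 0.0
```

Ranking detections by this score at inference then favours boxes that are both confidently present and well localized.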
Ranked #24 on Object Detection on COCO-O
no code implementations • 26 Aug 2020 • Olga Moskvyak, Frederic Maire, Feras Dayoub, Mahsa Baktashmotlagh
Learning embeddings that are invariant to the pose of the object is crucial in visual image retrieval and re-identification.
no code implementations • 3 Aug 2020 • Ben Talbot, David Hall, Haoyang Zhang, Suman Raj Bista, Rohan Smith, Feras Dayoub, Niko Sünderhauf
We introduce BenchBot, a novel software suite for benchmarking the performance of robotics research across both photorealistic 3D simulations and real robot platforms.
1 code implementation • 6 Apr 2020 • Dimity Miller, Niko Sünderhauf, Michael Milford, Feras Dayoub
We also show that our anchored class centres achieve higher open set performance than learnt class centres, particularly on object-based datasets and large numbers of training classes.
no code implementations • 31 Jan 2020 • Ben Talbot, Feras Dayoub, Peter Corke, Gordon Wyeth
Symbolic navigation performance of humans and a robot is evaluated in a real-world built environment.
no code implementations • 16 Jan 2020 • Jesse Haviland, Feras Dayoub, Peter Corke
IBVS robustly moves the camera to a goal pose defined implicitly in terms of an image-plane feature configuration.
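The standard IBVS control law drives the image-plane feature error to zero with a proportional camera velocity, v = -lambda * L+ * (s - s*). A minimal sketch under a strong simplifying assumption (interaction matrix taken as identity, so feature motion equals the commanded velocity):

```python
def ibvs_step(s, s_star, lam=0.5):
    """One proportional IBVS update: command velocity
    v = -lambda * (s - s*), taking the interaction matrix as identity
    (a simplifying assumption for illustration)."""
    error = [si - gi for si, gi in zip(s, s_star)]
    v = [-lam * e for e in error]
    # Integrate: with an identity interaction matrix, the feature
    # moves exactly by the commanded velocity each step.
    return [si + vi for si, vi in zip(s, v)]

# Drive a 2-D image-plane feature to the goal configuration at the origin.
s, s_star = [4.0, -2.0], [0.0, 0.0]
for _ in range(20):
    s = ibvs_step(s, s_star)
print(all(abs(si) < 1e-3 for si in s))  # exponential convergence -> True
```

In a real servo loop the interaction matrix depends on feature depth and must be estimated or approximated, but the exponential decay of the feature error is the same mechanism.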
no code implementations • 9 Jan 2020 • Olga Moskvyak, Frederic Maire, Feras Dayoub, Mahsa Baktashmotlagh
Our method outperforms the same model without body landmarks input by 26% and 18% on the synthetic and the real datasets respectively.
no code implementations • 8 Jan 2020 • Peter Corke, Feras Dayoub, David Hall, John Skinner, Niko Sünderhauf
The computer vision and robotics research communities are each strong.
no code implementations • 19 Mar 2019 • John Skinner, David Hall, Haoyang Zhang, Feras Dayoub, Niko Sünderhauf
We introduce a new challenge for computer and robotic vision, the first ACRV Robotic Vision Challenge, Probabilistic Object Detection.
no code implementations • 15 Mar 2019 • Quazi Marufur Rahman, Niko Sünderhauf, Feras Dayoub
The proposed method raises an alarm when it discovers a failure by the object detector to detect a traffic sign.
1 code implementation • 28 Feb 2019 • Olga Moskvyak, Frederic Maire, Asia O. Armstrong, Feras Dayoub, Mahsa Baktashmotlagh
We present a novel system for visual re-identification based on unique natural markings that is robust to occlusions, viewpoint and illumination changes.
1 code implementation • 27 Nov 2018 • David Hall, Feras Dayoub, John Skinner, Haoyang Zhang, Dimity Miller, Peter Corke, Gustavo Carneiro, Anelia Angelova, Niko Sünderhauf
We introduce Probabilistic Object Detection, the task of detecting objects in images and accurately quantifying the spatial and semantic uncertainties of the detections.
no code implementations • 17 Sep 2018 • Dimity Miller, Feras Dayoub, Michael Milford, Niko Sünderhauf
There has been a recent emergence of sampling-based techniques for estimating epistemic uncertainty in deep neural networks.
no code implementations • 25 Jan 2018 • David Hall, Feras Dayoub, Tristan Perez, Chris McCool
In this work, we obviate this assumption and introduce a rapidly deployable approach able to operate on any field without any weed species assumptions prior to deployment.
no code implementations • 18 Oct 2017 • Dimity Miller, Lachlan Nicholson, Feras Dayoub, Niko Sünderhauf
Dropout Variational Inference, or Dropout Sampling, has been recently proposed as an approximation technique for Bayesian Deep Learning and evaluated for image classification and regression tasks.
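Dropout Sampling keeps dropout active at inference and averages the predictions of many stochastic forward passes; the spread across passes approximates epistemic uncertainty. A toy sketch with a hypothetical two-class linear head (not the evaluated detector):

```python
import math
import random

random.seed(1)

def dropout(x, p=0.5):
    """Inference-time dropout kept active (MC Dropout), with rescaling."""
    return [0.0 if random.random() < p else xi / (1 - p) for xi in x]

def softmax(z):
    m = max(z)
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

def mc_predict(weights, feat, samples=100):
    """Average class probabilities over stochastic passes; the per-class
    variance across passes is a proxy for epistemic uncertainty."""
    runs = []
    for _ in range(samples):
        h = dropout(feat)
        logits = [sum(w * x for w, x in zip(row, h)) for row in weights]
        runs.append(softmax(logits))
    n_cls = len(weights)
    mean = [sum(r[c] for r in runs) / samples for c in range(n_cls)]
    var = [sum((r[c] - mean[c]) ** 2 for r in runs) / samples
           for c in range(n_cls)]
    return mean, var

W = [[1.0, 0.0], [0.0, 1.0]]      # toy 2-class linear head
mean, var = mc_predict(W, [2.0, 0.0])
print(mean[0] > mean[1])           # class 0 dominates -> True
print(max(var) > 0)                # nonzero spread from dropout -> True
```

In the detection setting the same sampling is applied per detection, and the variance across samples is what flags uncertain, potentially open-set, objects.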
no code implementations • 21 Mar 2017 • Feras Dayoub, Niko Sünderhauf, Peter Corke
We investigate different strategies for active learning with Bayesian deep neural networks.
no code implementations • 4 Feb 2017 • David Hall, Feras Dayoub, Jason Kulk, Chris McCool
This greatly limits deployability, as classification systems must be retrained for any field with a different set of weed species present.
no code implementations • 30 Jan 2017 • Inkyu Sa, Chris Lehnert, Andrew English, Chris McCool, Feras Dayoub, Ben Upcroft, Tristan Perez
This paper presents a 3D visual detection method for the challenging task of detecting peduncles of sweet peppers (Capsicum annuum) in the field.
1 code implementation • 17 Jan 2015 • Niko Sünderhauf, Feras Dayoub, Sareh Shirazi, Ben Upcroft, Michael Milford
Computer vision datasets are very different in character to robotic camera data, real-time performance is essential, and performance priorities can be different.