1 code implementation • 7 May 2024 • Kealan Dunnett, Reza Arablouei, Dimity Miller, Volkan Dedeoglu, Raja Jurdak
With cybersecurity threats a growing concern, defending against backdoor attacks is paramount to ensuring the integrity and reliability of machine learning models.
1 code implementation • 25 Mar 2024 • Dimity Miller, Niko Sünderhauf, Alex Kenna, Keita Mason
Are vision-language models (VLMs) for open-vocabulary perception inherently open-set models because they are trained on internet-scale datasets?
no code implementations • 27 Mar 2023 • David Pershouse, Feras Dayoub, Dimity Miller, Niko Sünderhauf
We address the challenging problem of open world object detection (OWOD), where object detectors must identify objects from known classes while also detecting, and continually learning to detect, novel objects.
1 code implementation • 4 Oct 2022 • Keita Mason, Joshua Knights, Milad Ramezani, Peyman Moghadam, Dimity Miller
State-of-the-art lidar place recognition models exhibit unreliable performance when tested on environments different from their training dataset, which limits their use in complex and evolving environments.
no code implementations • 19 Sep 2022 • Niko Sünderhauf, Jad Abou-Chakra, Dimity Miller
We show that ensembling effectively quantifies model uncertainty in Neural Radiance Fields (NeRFs) if a density-aware epistemic uncertainty term is considered.
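As a rough illustration of the idea, the sketch below assumes each of M independently trained NeRFs renders a per-pixel RGB image and an accumulated opacity map; the function name and the simple additive combination of the two terms are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def ensemble_uncertainty(rgb_preds, accumulations, eps=1e-6):
    """Per-pixel epistemic uncertainty from a NeRF ensemble.

    rgb_preds:     (M, H, W, 3) renders from M independently trained NeRFs.
    accumulations: (M, H, W) accumulated opacity (rendered density) per ray.
    Returns an (H, W) map combining ensemble disagreement with a
    density-aware term for rays that pass through mostly empty space.
    """
    # Disagreement between ensemble members: RGB variance, averaged over channels.
    rgb_variance = rgb_preds.var(axis=0).mean(axis=-1)      # (H, W)

    # Density-aware term: low accumulated opacity means the scene was barely
    # observed along this ray, so the ensemble can agree while knowing nothing.
    mean_acc = accumulations.mean(axis=0)                    # (H, W)
    density_term = np.clip(1.0 - mean_acc, 0.0, None)

    return rgb_variance + density_term + eps
```

A typical call would stack the per-member renders first, e.g. `ensemble_uncertainty(np.stack(renders), np.stack(opacities))`.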
1 code implementation • ICCV 2023 • Samuel Wilson, Tobias Fischer, Feras Dayoub, Dimity Miller, Niko Sünderhauf
We address the problem of out-of-distribution (OOD) detection for the task of object detection.
1 code implementation • 5 Jun 2022 • David Lovell, Dimity Miller, Jaiden Capra, Andrew Bradley
There are strong incentives to build models that demonstrate outstanding predictive performance on various datasets and benchmarks.
1 code implementation • 15 Mar 2022 • Dimity Miller, Peyman Moghadam, Mark Cox, Matt Wildie, Raja Jurdak
Using this framework, we investigate why Faster R-CNN and RetinaNet fail to detect objects in benchmark vision datasets and robotics datasets.
1 code implementation • 3 Apr 2021 • Dimity Miller, Niko Sünderhauf, Michael Milford, Feras Dayoub
We also introduce a methodology for converting existing object detection datasets into dedicated open-set datasets for evaluating open-set performance in object detection.
1 code implementation • 6 Apr 2020 • Dimity Miller, Niko Sünderhauf, Michael Milford, Feras Dayoub
We also show that our anchored class centres achieve higher open-set performance than learnt class centres, particularly on object-based datasets and with larger numbers of training classes.
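A minimal sketch of distance-based open-set recognition with fixed (anchored) rather than learnt class centres, assuming scaled one-hot anchors in logit space and rejection by distance to the nearest anchor; the scale and the rejection rule are assumptions for illustration, not the paper's exact loss.

```python
import torch

def anchored_class_centres(num_classes, scale=10.0):
    # Fixed, non-learnt class centres: scaled one-hot anchors in logit space.
    return scale * torch.eye(num_classes)

def open_set_scores(logits, centres):
    """Distance-based open-set scoring against anchored class centres.

    logits:  (N, C) network outputs treated as points in the anchor space.
    centres: (C, C) one anchor per known class.
    Returns the predicted class and a rejection score (distance to the
    nearest anchor); inputs whose score exceeds a validation-chosen
    threshold are flagged as unknown.
    """
    distances = torch.cdist(logits, centres)   # (N, C) distances to each anchor
    min_dist, pred = distances.min(dim=1)
    return pred, min_dist
```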
1 code implementation • 27 Nov 2018 • David Hall, Feras Dayoub, John Skinner, Haoyang Zhang, Dimity Miller, Peter Corke, Gustavo Carneiro, Anelia Angelova, Niko Sünderhauf
We introduce Probabilistic Object Detection, the task of detecting objects in images and accurately quantifying the spatial and semantic uncertainties of the detections.
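To make the task definition concrete, one way such an output could be represented is sketched below, assuming a per-corner Gaussian for spatial uncertainty and a full label distribution for semantic uncertainty; the `ProbabilisticDetection` container is illustrative, not the benchmark's API.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ProbabilisticDetection:
    """One detection with spatial and semantic uncertainty.

    label_probs: categorical distribution over known classes (semantic uncertainty).
    corner_mean: (2, 2) means of the top-left and bottom-right box corners.
    corner_cov:  (2, 2, 2) covariance of each corner (spatial uncertainty).
    """
    label_probs: np.ndarray
    corner_mean: np.ndarray
    corner_cov: np.ndarray
```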
no code implementations • 17 Sep 2018 • Dimity Miller, Feras Dayoub, Michael Milford, Niko Sünderhauf
There has been a recent emergence of sampling-based techniques for estimating epistemic uncertainty in deep neural networks.
no code implementations • 18 Oct 2017 • Dimity Miller, Lachlan Nicholson, Feras Dayoub, Niko Sünderhauf
Dropout Variational Inference, or Dropout Sampling, has recently been proposed as an approximation technique for Bayesian Deep Learning, and has been evaluated on image classification and regression tasks.
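Dropout Sampling in this sense keeps dropout active at test time and averages several stochastic forward passes; the PyTorch sketch below works under that assumption, with the helper names being illustrative rather than any particular library's API.

```python
import torch
import torch.nn as nn

def enable_test_time_dropout(model):
    # Switch only dropout layers back to training mode so sampling stays
    # stochastic while batch-norm statistics remain frozen.
    for module in model.modules():
        if isinstance(module, nn.Dropout):
            module.train()

def dropout_sampling(model, x, num_samples=20):
    """Monte Carlo dropout sampling.

    Runs several stochastic forward passes and averages the softmax outputs;
    the mean is the prediction and the per-class variance across samples
    serves as an epistemic uncertainty estimate.
    """
    model.eval()
    enable_test_time_dropout(model)
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(num_samples)]
        )
    return probs.mean(dim=0), probs.var(dim=0)
```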