no code implementations • 25 Aug 2023 • Jishnu Mukhoti, Yarin Gal, Philip H. S. Torr, Puneet K. Dokania
This is an undesirable effect of fine-tuning as a substantial amount of resources was used to learn these pre-trained concepts in the first place.
no code implementations • CVPR 2023 • Jishnu Mukhoti, Andreas Kirsch, Joost van Amersfoort, Philip H.S. Torr, Yarin Gal
Reliable uncertainty from deterministic single-forward pass models is sought after because conventional methods of uncertainty quantification are computationally expensive.
no code implementations • CVPR 2023 • Jishnu Mukhoti, Tsung-Yu Lin, Omid Poursaeed, Rui Wang, Ashish Shah, Philip H. S. Torr, Ser-Nam Lim
We introduce Patch Aligned Contrastive Learning (PACL), a modified compatibility function for CLIP's contrastive loss that trains an alignment between the patch tokens of the vision encoder and the CLS token of the text encoder.
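As a rough illustration of the idea (not the paper's implementation), a patch-to-text compatibility can be sketched as a similarity-weighted pooling of per-patch cosine similarities against the text CLS embedding; `pacl_compatibility` and its weighting scheme here are hypothetical simplifications:

```python
import numpy as np

def l2_normalize(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def pacl_compatibility(patch_tokens, text_cls):
    """Hypothetical sketch: cosine similarity between each vision patch
    token and the text CLS embedding, aggregated with softmax-weighted
    pooling so well-aligned patches dominate the image-text score."""
    patches = l2_normalize(patch_tokens)          # (num_patches, dim)
    text = l2_normalize(text_cls)                 # (dim,)
    sims = patches @ text                         # per-patch cosine similarity
    weights = np.exp(sims) / np.exp(sims).sum()   # attention over patches
    return float(weights @ sims)                  # pooled image-text score
```

Because the score is built from per-patch similarities, the same weights can be reshaped into a coarse semantic-segmentation map at inference time.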
no code implementations • 24 Sep 2022 • Jishnu Mukhoti, Tsung-Yu Lin, Bor-Chun Chen, Ashish Shah, Philip H. S. Torr, Puneet K. Dokania, Ser-Nam Lim
In this paper, we define two categories of OoD data using the subtly different concepts of perceptual/visual and semantic similarity to in-distribution (iD) data.
no code implementations • 29 Oct 2021 • Jishnu Mukhoti, Joost van Amersfoort, Philip H. S. Torr, Yarin Gal
We extend Deep Deterministic Uncertainty (DDU), a method for uncertainty estimation using feature space densities, to semantic segmentation.
4 code implementations • 23 Feb 2021 • Jishnu Mukhoti, Andreas Kirsch, Joost van Amersfoort, Philip H. S. Torr, Yarin Gal
Reliable uncertainty from deterministic single-forward pass models is sought after because conventional methods of uncertainty quantification are computationally expensive.
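A minimal sketch of the feature-space-density idea behind such single-forward-pass methods, assuming a per-class Gaussian fit to penultimate-layer features (the function names are illustrative, not the DDU codebase):

```python
import numpy as np

def fit_class_gaussians(features, labels):
    """Fit one Gaussian per class on feature vectors -- a simplified
    stand-in for the feature-space density used by DDU-style methods."""
    params = {}
    for c in np.unique(labels):
        x = features[labels == c]
        mu = x.mean(axis=0)
        cov = np.cov(x, rowvar=False) + 1e-3 * np.eye(x.shape[1])  # jitter
        params[c] = (mu, np.linalg.inv(cov), np.linalg.slogdet(cov)[1])
    return params

def log_density(params, x):
    """Max over classes of log N(x; mu_c, Sigma_c): a single deterministic
    pass suffices, and low density signals epistemic uncertainty."""
    d = x.shape[0]
    scores = []
    for mu, prec, logdet in params.values():
        diff = x - mu
        scores.append(-0.5 * (diff @ prec @ diff + logdet + d * np.log(2 * np.pi)))
    return max(scores)
```

An out-of-distribution input far from every class cluster receives a much lower log-density than an in-distribution one, with no sampling or ensembling required.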
no code implementations • AABI Symposium 2021 • Jishnu Mukhoti, Puneet K. Dokania, Philip H. S. Torr, Yarin Gal
We study batch normalisation in the context of variational inference methods in Bayesian neural networks, such as mean-field or MC Dropout.
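For context, MC Dropout estimates uncertainty by keeping dropout active at test time and averaging stochastic forward passes; the toy single-layer model below is a hedged sketch of that mechanism, not the paper's batch-normalisation analysis:

```python
import numpy as np

def mc_dropout_predict(x, W, rng, p_drop=0.5, n_samples=100):
    """MC Dropout sketch: sample dropout masks at test time and average
    multiple stochastic forward passes through a single linear layer.
    The spread across passes serves as an uncertainty estimate."""
    preds = []
    for _ in range(n_samples):
        mask = rng.random(W.shape[0]) > p_drop        # drop input units
        preds.append((x * mask) @ W / (1 - p_drop))   # inverted-dropout scaling
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.std(axis=0)      # predictive mean, spread
```

Interactions between such stochastic layers and batch-normalisation statistics are exactly the kind of effect the paper examines.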
2 code implementations • NeurIPS 2020 • Jishnu Mukhoti, Viveka Kulharia, Amartya Sanyal, Stuart Golodetz, Philip H. S. Torr, Puneet K. Dokania
To facilitate the use of focal loss in practice, we also provide a principled approach to automatically select the hyperparameter involved in the loss function.
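The standard single-sample focal loss (Lin et al.) that the paper builds on can be written in a few lines; note the paper's principled automatic selection of the hyperparameter gamma is not reproduced here, and a fixed gamma is used purely for illustration:

```python
import numpy as np

def focal_loss(probs, target, gamma=2.0):
    """Focal loss for one sample: -(1 - p_t)^gamma * log(p_t).
    gamma = 0 recovers standard cross-entropy; larger gamma down-weights
    confidently classified examples, which helps calibration."""
    p_t = probs[target]
    return -((1.0 - p_t) ** gamma) * np.log(p_t)
```

Because the modulating factor (1 - p_t)^gamma shrinks the loss on already-confident samples, training with focal loss discourages the over-confidence that miscalibrates cross-entropy-trained networks.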
no code implementations • 25 Sep 2019 • Jishnu Mukhoti, Viveka Kulharia, Amartya Sanyal, Stuart Golodetz, Philip Torr, Puneet Dokania
When combined with temperature scaling, focal loss yields state-of-the-art calibrated models while preserving both accuracy and the confidence of the model's correct predictions, which is extremely desirable for downstream tasks.
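Temperature scaling itself is a one-parameter post-hoc calibration step: divide the logits by a scalar T fit on a validation set. A minimal sketch (T would normally be optimised, not hard-coded):

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def temperature_scale(logits, T):
    """Temperature scaling: divide logits by a scalar T fit on held-out
    data. T > 1 softens over-confident predictions; the argmax is
    unchanged, so accuracy is exactly preserved."""
    return softmax(logits / T)
```

Since dividing by T is monotonic, only the confidence values change, never the predicted class.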
no code implementations • 20 Jun 2019 • Tommaso Cavallari, Luca Bertinetto, Jishnu Mukhoti, Philip Torr, Stuart Golodetz
Many applications require a camera to be relocalised online, without expensive offline training on the target scene.
1 code implementation • 30 Nov 2018 • Jishnu Mukhoti, Yarin Gal
Deep learning has been revolutionary for computer vision and semantic segmentation in particular, with Bayesian Deep Learning (BDL) used to obtain uncertainty maps from deep models when predicting semantic classes.
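One common way such BDL uncertainty maps are formed is per-pixel predictive entropy over Monte Carlo softmax samples; the sketch below assumes that formulation for illustration:

```python
import numpy as np

def predictive_entropy(mc_probs):
    """Per-pixel predictive entropy from Monte Carlo softmax samples of
    shape (samples, H, W, classes). Higher entropy marks a more uncertain
    pixel, yielding an uncertainty map alongside the segmentation."""
    mean_p = mc_probs.mean(axis=0)                            # (H, W, C)
    return -(mean_p * np.log(mean_p + 1e-12)).sum(axis=-1)    # (H, W)
```

Pixels where the sampled predictions disagree (e.g. object boundaries, unfamiliar objects) show up as high-entropy regions in the resulting map.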
Ranked #8 on Anomaly Detection on Fishyscapes
1 code implementation • 23 Nov 2018 • Jishnu Mukhoti, Pontus Stenetorp, Yarin Gal
Like all sub-fields of machine learning, Bayesian Deep Learning is driven by empirical validation of its theoretical proposals.