3 code implementations • ECCV 2020 • Ameya Prabhu, Philip H. S. Torr, Puneet K. Dokania
We discuss a general formulation of the Continual Learning (CL) problem for classification---a learning task in which a stream provides samples to a learner, whose goal is to continually update its knowledge of old classes and learn new ones from the samples it receives.
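A minimal sketch of a greedy class-balanced memory sampler of the kind such a formulation admits (names and eviction policy here are illustrative assumptions, not the paper's exact procedure):

```python
# Hypothetical greedy class-balanced memory: keep the buffer balanced across
# classes, evicting from the currently largest class when full.
import random
from collections import defaultdict

class BalancedMemory:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = defaultdict(list)  # class label -> stored samples

    def __len__(self):
        return sum(len(v) for v in self.store.values())

    def add(self, x, y):
        if len(self) < self.capacity:
            self.store[y].append(x)
            return
        counts = {c: len(v) for c, v in self.store.items()}
        largest = max(counts, key=counts.get)
        if counts.get(y, 0) < counts[largest]:  # only displace a bigger class
            self.store[largest].pop(random.randrange(counts[largest]))
            self.store[y].append(x)
```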
no code implementations • 14 Jul 2024 • Samyak Jain, Ekdeep Singh Lubana, Kemal Oksuz, Tom Joy, Philip H. S. Torr, Amartya Sanyal, Puneet K. Dokania
Safety fine-tuning helps align Large Language Models (LLMs) with human preferences for their safe deployment.
1 code implementation • 30 May 2024 • Selim Kuzucu, Kemal Oksuz, Jonathan Sadeghi, Puneet K. Dokania
To illustrate, D-DETR with our post-hoc Isotonic Regression calibrator outperforms Cal-DETR, the recent train-time state-of-the-art calibration method, by more than 7 D-ECE points on the COCO dataset.
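As a rough sketch of what a post-hoc isotonic calibrator looks like (assumed inputs; not the paper's detection-specific pipeline), one can fit a monotone map from raw detection confidences to empirical correctness on a held-out set:

```python
# Minimal isotonic-regression calibration sketch. val_scores/val_hits are
# illustrative held-out detections: hit = 1 if matched to ground truth
# (e.g. IoU >= 0.5).
import numpy as np
from sklearn.isotonic import IsotonicRegression

val_scores = np.array([0.95, 0.90, 0.80, 0.60, 0.40, 0.30])
val_hits   = np.array([1, 1, 0, 1, 0, 0])

calibrator = IsotonicRegression(y_min=0.0, y_max=1.0, out_of_bounds="clip")
calibrator.fit(val_scores, val_hits)

# Calibrated confidences are a monotone, piecewise-constant remap of raw scores.
print(calibrator.predict(np.array([0.85, 0.55, 0.35])))
```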
1 code implementation • 26 Feb 2024 • Pau de Jorge, Riccardo Volpi, Puneet K. Dokania, Philip H. S. Torr, Gregory Rogez
For example, we utilize it to augment Cityscapes samples by incorporating a subset of Pascal classes and demonstrate that models trained on such data achieve comparable performance to the Pascal-trained baseline.
1 code implementation • 13 Feb 2024 • Ameya Prabhu, Shiven Sinha, Ponnurangam Kumaraguru, Philip H. S. Torr, Ozan Sener, Puneet K. Dokania
However, little attention has been paid to the efficacy of continually learned representations, as representations are learned alongside classifiers throughout the learning process.
1 code implementation • 20 Oct 2023 • Francisco Eiras, Kemal Oksuz, Adel Bibi, Philip H. S. Torr, Puneet K. Dokania
Referring Image Segmentation (RIS), the problem of identifying objects in images through natural language sentences, is a challenging task currently mostly solved through supervised learning.
1 code implementation • 26 Sep 2023 • Kemal Oksuz, Selim Kuzucu, Tom Joy, Puneet K. Dokania
Combining the strengths of many existing predictors into a Mixture of Experts that is superior to its individual components is an effective way to improve performance without developing new architectures or training a model from scratch.
Ranked #1 on Object Detection In Aerial Images on DOTA (using extra training data)
1 code implementation • 25 Aug 2023 • Jishnu Mukhoti, Yarin Gal, Philip H. S. Torr, Puneet K. Dokania
This is an undesirable effect of fine-tuning as a substantial amount of resources was used to learn these pre-trained concepts in the first place.
1 code implementation • CVPR 2023 • Kemal Oksuz, Tom Joy, Puneet K. Dokania
The current approach to testing the robustness of object detectors suffers from serious deficiencies, such as improper methods for out-of-distribution detection and calibration metrics that do not consider both localisation and classification quality.
2 code implementations • 27 May 2023 • Liheng Ma, Chen Lin, Derek Lim, Adriana Romero-Soriano, Puneet K. Dokania, Mark Coates, Philip Torr, Ser-Nam Lim
Graph inductive biases are crucial for Graph Transformers, and previous works incorporate them using message-passing modules and/or positional encodings.
Ranked #1 on Node Classification on CLUSTER
no code implementations • 24 Sep 2022 • Jishnu Mukhoti, Tsung-Yu Lin, Bor-Chun Chen, Ashish Shah, Philip H. S. Torr, Puneet K. Dokania, Ser-Nam Lim
In this paper, we define two categories of OoD data using the subtly different concepts of perceptual/visual and semantic similarity to in-distribution (iD) data.
1 code implementation • 23 Sep 2022 • Edward Ayers, Jonathan Sadeghi, John Redford, Romain Mueller, Puneet K. Dokania
There is a longstanding interest in capturing the error behaviour of object detectors by finding images where their performance is likely to be unsatisfactory.
no code implementations • 22 Jul 2022 • Francesco Pinto, Philip H. S. Torr, Puneet K. Dokania
Following the surge of popularity of Transformers in Computer Vision, several studies have attempted to determine whether they could be more robust to distribution shifts and provide better uncertainty estimates than Convolutional Neural Networks (CNNs).
1 code implementation • 13 Jul 2022 • Tom Joy, Francesco Pinto, Ser-Nam Lim, Philip H. S. Torr, Puneet K. Dokania
The most common post-hoc approach to compensate for this is to perform temperature scaling, which adjusts the confidences of the predictions on any input by scaling the logits by a fixed value.
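A minimal sketch of temperature scaling itself (the standard post-hoc recipe; variable names and stand-in data are illustrative): learn one scalar T on held-out logits by minimising the negative log-likelihood, then divide test logits by T.

```python
import torch
import torch.nn.functional as F

def fit_temperature(logits, labels, steps=200, lr=0.01):
    log_t = torch.zeros(1, requires_grad=True)   # optimise log T so T > 0
    opt = torch.optim.Adam([log_t], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        F.cross_entropy(logits / log_t.exp(), labels).backward()
        opt.step()
    return log_t.exp().item()

val_logits = torch.randn(1000, 10)            # stand-ins for held-out outputs
val_labels = torch.randint(0, 10, (1000,))
T = fit_temperature(val_logits, val_labels)
# At test time: probs = F.softmax(test_logits / T, dim=-1)
```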
2 code implementations • 29 Jun 2022 • Francesco Pinto, Harry Yang, Ser-Nam Lim, Philip H. S. Torr, Puneet K. Dokania
We show that the effectiveness of the celebrated Mixup [Zhang et al., 2018] can be further improved if, instead of using it as the sole learning objective, it is utilized as an additional regularizer alongside the standard cross-entropy loss.
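A minimal sketch of this "mixup as an additional regulariser" idea (alpha and eta below are illustrative values, not the paper's prescribed settings):

```python
import torch
import torch.nn.functional as F

def mixup_regularised_loss(model, x, y, alpha=10.0, eta=1.0):
    ce_clean = F.cross_entropy(model(x), y)      # standard CE on the clean batch

    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    logits_mix = model(lam * x + (1 - lam) * x[perm])
    ce_mix = lam * F.cross_entropy(logits_mix, y) \
           + (1 - lam) * F.cross_entropy(logits_mix, y[perm])

    return ce_clean + eta * ce_mix               # mixup acts as a regulariser
```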
1 code implementation • 16 Jun 2022 • Guillermo Ortiz-Jiménez, Pau de Jorge, Amartya Sanyal, Adel Bibi, Puneet K. Dokania, Pascal Frossard, Gregory Rogéz, Philip H. S. Torr
Through extensive experiments, we analyze this novel phenomenon and discover that the presence of these easy features induces a learning shortcut that leads to CO. Our findings provide new insights into the mechanisms of CO and improve our understanding of the dynamics of AT.
1 code implementation • 2 Feb 2022 • Pau de Jorge, Adel Bibi, Riccardo Volpi, Amartya Sanyal, Philip H. S. Torr, Grégory Rogez, Puneet K. Dokania
Recently, Wong et al. showed that adversarial training with single-step FGSM leads to a characteristic failure mode named Catastrophic Overfitting (CO), in which a model becomes suddenly vulnerable to multi-step attacks.
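For context, a single-step FGSM attack with a random start looks roughly like the sketch below; the noise magnitude k and the clipping choice are exactly the knobs this line of work revisits. This is a generic illustration, not any one method:

```python
import torch

def fgsm_step(model, loss_fn, x, y, eps, k=1.0, clip=True):
    # Random start, then one signed-gradient step of size eps.
    delta = torch.empty_like(x).uniform_(-k * eps, k * eps).requires_grad_(True)
    loss = loss_fn(model(x + delta), y)
    grad = torch.autograd.grad(loss, delta)[0]
    delta = delta.detach() + eps * grad.sign()
    if clip:                       # some variants clip back to the eps-ball
        delta = delta.clamp(-eps, eps)
    return (x + delta).clamp(0.0, 1.0)
```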
no code implementations • 29 Sep 2021 • Francesco Pinto, Harry Yang, Ser-Nam Lim, Philip Torr, Puneet K. Dokania
We propose an extremely simple approach to regularize a single deterministic neural network to obtain improved accuracy and reliable uncertainty estimates.
no code implementations • 29 Sep 2021 • Pau de Jorge, Adel Bibi, Riccardo Volpi, Amartya Sanyal, Philip Torr, Grégory Rogez, Puneet K. Dokania
In this work, we methodically revisit the role of noise and clipping in single-step adversarial training.
1 code implementation • 9 Jul 2021 • Francisco Eiras, Motasem Alfarra, M. Pawan Kumar, Philip H. S. Torr, Puneet K. Dokania, Bernard Ghanem, Adel Bibi
Randomized smoothing has recently emerged as an effective tool that enables certification of deep neural network classifiers at scale.
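A Monte-Carlo sketch of how a smoothed classifier predicts (in the style of Cohen et al., 2019; actual certification additionally requires a statistical test, omitted here):

```python
import torch

@torch.no_grad()
def smoothed_predict(model, x, num_classes, sigma=0.25, n=1000, batch=100):
    counts = torch.zeros(num_classes, dtype=torch.long)
    for _ in range(0, n, batch):   # draw roughly n Gaussian-noised copies
        noisy = x.unsqueeze(0) + sigma * torch.randn(batch, *x.shape)
        counts += torch.bincount(model(noisy).argmax(dim=1),
                                 minlength=num_classes)
    return counts.argmax().item()  # majority vote over noisy copies
```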
1 code implementation • 1 Apr 2021 • Shyamgopal Karthik, Ameya Prabhu, Puneet K. Dokania, Vineet Gandhi
There has been increasing interest in building deep hierarchy-aware classifiers that aim to quantify and reduce the severity of mistakes, and not just reduce the number of errors.
no code implementations • ICLR 2021 • Amartya Sanyal, Puneet K. Dokania, Varun Kanade, Philip Torr
We investigate two causes for adversarial vulnerability in deep neural networks: bad data and (poorly) trained models.
no code implementations • ICLR 2021 • Shyamgopal Karthik, Ameya Prabhu, Puneet K. Dokania, Vineet Gandhi
There has been increasing interest in building deep hierarchy-aware classifiers, aiming to quantify and reduce the severity of mistakes and not just count the number of errors.
no code implementations • Advances in Approximate Bayesian Inference (AABI) Symposium 2021 • Jishnu Mukhoti, Puneet K. Dokania, Philip H. S. Torr, Yarin Gal
We study batch normalisation in the context of variational inference methods in Bayesian neural networks, such as mean-field or MC Dropout.
1 code implementation • NeurIPS 2020 • Arslan Chaudhry, Naeemullah Khan, Puneet K. Dokania, Philip H. S. Torr
In continual learning (CL), a learner is faced with a sequence of tasks, arriving one after the other, and the goal is to remember all the tasks once the continual learning experience is finished.
no code implementations • 10 Oct 2020 • Thomas Tanay, Aivar Sootla, Matteo Maggioni, Puneet K. Dokania, Philip Torr, Ales Leonardis, Gregory Slabaugh
Recurrent models are a popular choice for video enhancement tasks such as video denoising or super-resolution.
no code implementations • 8 Jul 2020 • Amartya Sanyal, Puneet K. Dokania, Varun Kanade, Philip H. S. Torr
We investigate two causes for adversarial vulnerability in deep neural networks: bad data and (poorly) trained models.
1 code implementation • ICLR 2021 • Pau de Jorge, Amartya Sanyal, Harkirat S. Behl, Philip H. S. Torr, Gregory Rogez, Puneet K. Dokania
Recent studies have shown that skeletonization (pruning parameters) of networks at initialization provides all the practical benefits of sparsity both at inference and training time, while only marginally degrading their performance.
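For reference, the one-shot connection-sensitivity scoring such studies build on (SNIP-style) can be sketched as below; this is an illustration of the baseline criterion, not this paper's progressive procedure:

```python
import torch

def snip_style_masks(model, loss_fn, x, y, keep_ratio=0.1):
    # Score each weight by |w * dL/dw| on a single batch at initialization.
    params = [p for p in model.parameters() if p.requires_grad and p.dim() > 1]
    grads = torch.autograd.grad(loss_fn(model(x), y), params)
    scores = torch.cat([(p * g).abs().flatten() for p, g in zip(params, grads)])
    k = max(1, int(keep_ratio * scores.numel()))
    threshold = scores.topk(k).values.min()
    # Keep the top-k weights; apply these masks during subsequent training.
    return [((p * g).abs() >= threshold).float() for p, g in zip(params, grads)]
```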
1 code implementation • 20 Apr 2020 • Daniela Massiceti, Viveka Kulharia, Puneet K. Dokania, N. Siddharth, Philip H. S. Torr
Evaluating Visual Dialogue, the task of answering a sequence of questions relating to a visual input, remains an open research challenge.
3 code implementations • NeurIPS 2020 • Jishnu Mukhoti, Viveka Kulharia, Amartya Sanyal, Stuart Golodetz, Philip H. S. Torr, Puneet K. Dokania
To facilitate the use of focal loss in practice, we also provide a principled approach to automatically select the hyperparameter involved in the loss function.
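The focal loss itself (Lin et al., 2017) is easy to state; the hyperparameter the paper is concerned with is gamma. A minimal sketch with an illustrative fixed gamma:

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=3.0):
    log_pt = F.log_softmax(logits, dim=-1).gather(
        1, targets.unsqueeze(1)).squeeze(1)        # log-prob of the true class
    # (1 - pt)^gamma down-weights already well-classified examples.
    return (-((1.0 - log_pt.exp()) ** gamma) * log_pt).mean()
```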
1 code implementation • 19 Feb 2020 • Arslan Chaudhry, Albert Gordo, Puneet K. Dokania, Philip Torr, David Lopez-Paz
In continual learning, the learner faces a stream of data whose distribution changes over time.
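A minimal reservoir-sampling episodic memory of the kind used by experience-replay baselines in this setting (an illustrative sketch):

```python
import random

class EpisodicMemory:
    def __init__(self, capacity):
        self.capacity, self.buffer, self.seen = capacity, [], 0

    def add(self, example):           # reservoir sampling over the stream
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:     # keep each seen example w.p. capacity/seen
                self.buffer[j] = example

    def sample(self, k):              # mix into each training batch
        return random.sample(self.buffer, min(k, len(self.buffer)))
```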
1 code implementation • 18 Oct 2019 • Thalaiyasingam Ajanthan, Kartik Gupta, Philip H. S. Torr, Richard Hartley, Puneet K. Dokania
Quantizing large Neural Networks (NNs) while maintaining performance is highly desirable for resource-limited devices, as it reduces memory and time complexity.
1 code implementation • ICCV 2019 • Arnab Ghosh, Richard Zhang, Puneet K. Dokania, Oliver Wang, Alexei A. Efros, Philip H. S. Torr, Eli Shechtman
We propose an interactive GAN-based sketch-to-image translation method that helps novice users create images of simple objects.
no code implementations • ICLR 2020 • Amartya Sanyal, Philip H. S. Torr, Puneet K. Dokania
Exciting new work on generalization bounds for neural networks (NNs) by Neyshabur et al. and Bartlett et al. closely depends on two parameter-dependent quantities: the Lipschitz constant upper bound and the stable rank (a softer version of the rank operator).
Ranked #136 on Image Generation on CIFAR-10
6 code implementations • 27 Feb 2019 • Arslan Chaudhry, Marcus Rohrbach, Mohamed Elhoseiny, Thalaiyasingam Ajanthan, Puneet K. Dokania, Philip H. S. Torr, Marc'Aurelio Ranzato
But for a successful knowledge transfer, the learner needs to remember how to perform previous tasks.
Ranked #7 on Class Incremental Learning on CIFAR-100
2 code implementations • 16 Dec 2018 • Daniela Massiceti, Puneet K. Dokania, N. Siddharth, Philip H. S. Torr
We characterise some of the quirks and shortcomings in the exploration of Visual Dialogue, a sequential question-answering task where the questions and corresponding answers are related through given visual stimuli.
1 code implementation • ICCV 2019 • Thalaiyasingam Ajanthan, Puneet K. Dokania, Richard Hartley, Philip H. S. Torr
Compressing large Neural Networks (NNs) by quantizing their parameters, while maintaining performance, is highly desirable due to reduced memory and time complexity.
no code implementations • 24 Sep 2018 • Enzo Ferrante, Puneet K. Dokania, Rafael Marini Silva, Nikos Paragios
Conventional approaches refer to the definition of a similarity criterion that, once endowed with a deformation model and a smoothness constraint, determines the optimal transformation to align two given images.
no code implementations • ICLR 2019 • Amartya Sanyal, Varun Kanade, Philip H. S. Torr, Puneet K. Dokania
To achieve low dimensionality of learned representations, we propose an easy-to-use, end-to-end trainable, low-rank regularizer (LR) that can be applied to any intermediate layer representation of a DNN.
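One way such a penalty can be sketched (an assumption on our part; the paper's exact regulariser may differ) is as a nuclear-norm term on a batch of intermediate activations:

```python
import torch

def nuclear_norm_penalty(features):
    # features: (batch, dim) activations from some intermediate layer;
    # the nuclear norm (sum of singular values) encourages low rank.
    return torch.linalg.svdvals(features).sum()

# loss = task_loss + lam * nuclear_norm_penalty(h)   # lam is illustrative
```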
no code implementations • CVPR 2018 • Daniela Massiceti, N. Siddharth, Puneet K. Dokania, Philip H. S. Torr
We are the first to extend this paradigm to full two-way visual dialogue, where our model generates both questions and answers in sequence from a visual input, and we propose a set of novel evaluation metrics for this setting.
2 code implementations • ECCV 2018 • Arslan Chaudhry, Puneet K. Dokania, Thalaiyasingam Ajanthan, Philip H. S. Torr
We observe that, in addition to forgetting, a known issue when preserving knowledge, IL also suffers from a problem we call intransigence: the inability of a model to update its knowledge.
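In the spirit of the paper's evaluation, forgetting and intransigence can be sketched from a matrix of per-task accuracies (our illustrative formulation; see the paper for the precise definitions):

```python
import numpy as np

# acc[k, j]: accuracy on task j after training through task k (T x T matrix).
def forgetting(acc):
    T = acc.shape[0]
    return np.mean([acc[:-1, j].max() - acc[-1, j] for j in range(T - 1)])

# ref[j]: accuracy on task j of a reference model trained on all data jointly.
def intransigence(ref, acc):
    T = acc.shape[0]
    return np.mean([ref[j] - acc[j, j] for j in range(T)])
```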
no code implementations • 19 Jul 2017 • Enzo Ferrante, Puneet K. Dokania, Rafael Marini, Nikos Paragios
We propose a novel weakly supervised discriminative algorithm for learning context specific registration metrics as a linear combination of conventional similarity measures.
1 code implementation • 18 Jul 2017 • Arslan Chaudhry, Puneet K. Dokania, Philip H. S. Torr
We propose an approach to discover class-specific pixels for the weakly-supervised semantic segmentation task.
1 code implementation • CVPR 2018 • Arnab Ghosh, Viveka Kulharia, Vinay Namboodiri, Philip H. S. Torr, Puneet K. Dokania
Second, to enforce that different generators capture diverse high-probability modes, the discriminator of MAD-GAN is designed so that, in addition to distinguishing real samples from fake ones, it must also identify which generator produced a given fake sample.
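A sketch of such a (K+1)-way discriminator objective, with K generators plus a "real" class (illustrative; the paper's exact formulation may differ):

```python
import torch
import torch.nn.functional as F

def discriminator_loss(disc, real, fakes):
    # disc returns (B, K+1) logits; class K = "real", class i < K = generator i.
    K = len(fakes)
    loss = F.cross_entropy(disc(real),
                           torch.full((real.size(0),), K, dtype=torch.long))
    for i, fake in enumerate(fakes):
        loss = loss + F.cross_entropy(
            disc(fake.detach()),
            torch.full((fake.size(0),), i, dtype=torch.long))
    return loss
```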
no code implementations • 30 May 2016 • Anton Osokin, Jean-Baptiste Alayrac, Isabella Lukasewitz, Puneet K. Dokania, Simon Lacoste-Julien
In this paper, we propose several improvements to the block-coordinate Frank-Wolfe (BCFW) algorithm of Lacoste-Julien et al. (2013), recently used to optimize the structured support vector machine (SSVM) objective in the context of structured prediction, though it has wider applications.
no code implementations • ICCV 2015 • Puneet K. Dokania, M. Pawan Kumar
Furthermore, we propose an efficient graph-cuts based algorithm for the parsimonious labeling problem that provides strong theoretical guarantees on the quality of the solution.