no code implementations • 12 Jun 2024 • Francisco Eiras, Aleksandar Petrov, Phillip H. S. Torr, M. Pawan Kumar, Adel Bibi
Fine-tuning large language models on small, high-quality datasets can enhance their performance on specific downstream tasks.
no code implementations • 7 May 2024 • Rudy Bunel, Krishnamurthy Dvijotham, M. Pawan Kumar, Alessandro De Palma, Robert Stanforth
Furthermore, we show that the complexity of the network (number of neurons/layers) can be adapted to the problem difficulty and solve problems where traditional compressed sensing methods are not known to provably work.
no code implementations • 7 Jun 2023 • Tom A. Lamb, Rudy Bunel, Krishnamurthy Dj Dvijotham, M. Pawan Kumar, Philip H. S. Torr, Francisco Eiras
To address these questions, we introduce a faithful imitation framework to discuss the relative calibration of confidences, and provide empirical and certified methods to evaluate the relative calibration of a student w.r.t. its teacher.
1 code implementation • 23 May 2023 • Alessandro De Palma, Rudy Bunel, Krishnamurthy Dvijotham, M. Pawan Kumar, Robert Stanforth, Alessio Lomuscio
In order to train networks for verified adversarial robustness, it is common to over-approximate the worst-case loss over perturbation regions, resulting in networks that attain verifiability at the expense of standard performance.
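As a rough illustration of how such a worst-case loss is over-approximated, here is a minimal interval-bound-propagation (IBP) sketch for a single linear layer; all names are hypothetical, and this shows one standard over-approximation rather than the paper's exact training objective.

```python
import numpy as np

def interval_bounds_linear(W, b, x_lo, x_hi):
    """Propagate the input box [x_lo, x_hi] through y = W @ x + b.
    Splitting W into positive and negative parts yields sound
    element-wise output bounds (standard interval arithmetic)."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    y_lo = W_pos @ x_lo + W_neg @ x_hi + b
    y_hi = W_pos @ x_hi + W_neg @ x_lo + b
    return y_lo, y_hi

# Over-approximate the worst-case logit margin on an eps-ball:
W, b = np.array([[1.0, -2.0], [0.5, 1.5]]), np.zeros(2)
x, eps = np.array([0.3, 0.7]), 0.1
y_lo, y_hi = interval_bounds_linear(W, b, x - eps, x + eps)
worst_margin = y_lo[0] - y_hi[1]  # class 0 assumed to be the true label
```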
no code implementations • 17 May 2023 • Francisco Eiras, Adel Bibi, Rudy Bunel, Krishnamurthy Dj Dvijotham, Philip Torr, M. Pawan Kumar
Recent work provides promising evidence that Physics-Informed Neural Networks (PINNs) can efficiently solve partial differential equations (PDEs).
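For context, a PINN $u_\theta$ for a PDE $\mathcal{N}[u](x) = 0$ on a domain $\Omega$ with boundary data $u = g$ on $\partial\Omega$ is typically trained by minimizing a sampled residual loss of the form below; this is the textbook formulation, not the certification objective studied in the paper:

$$ \mathcal{L}(\theta) = \frac{1}{N_r}\sum_{i=1}^{N_r}\big|\mathcal{N}[u_\theta](x_i)\big|^2 \;+\; \frac{\lambda}{N_b}\sum_{j=1}^{N_b}\big|u_\theta(x_j) - g(x_j)\big|^2, \qquad x_i \in \Omega,\; x_j \in \partial\Omega. $$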
no code implementations • 30 Jun 2022 • Prateek Gupta, Elias B. Khalil, Didier Chételat, Maxime Gasse, Yoshua Bengio, Andrea Lodi, M. Pawan Kumar
Given that B&B results in a tree of sub-MILPs, we ask (a) whether there are strong dependencies exhibited by the target heuristic among the neighboring nodes of the B&B tree, and (b) if so, whether we can incorporate them in our training procedure.
1 code implementation • 29 Jun 2022 • Alessandro De Palma, Rudy Bunel, Krishnamurthy Dvijotham, M. Pawan Kumar, Robert Stanforth
Recent works have tried to increase the verifiability of adversarially trained networks by running the attacks over domains larger than the original perturbations and adding various regularization terms to the objective.
1 code implementation • 29 Jan 2022 • Alasdair Paren, Leonard Berrada, Rudra P. K. Poudel, M. Pawan Kumar
We propose a novel method for training deep neural networks that are capable of interpolation, that is, of driving the empirical loss to zero.
1 code implementation • 11 Jan 2022 • Vitaly Kurin, Alessandro De Palma, Ilya Kostrikov, Shimon Whiteson, M. Pawan Kumar
We show that unitary scalarization, coupled with standard regularization and stabilization techniques from single-task learning, matches or improves upon the performance of complex multi-task optimizers in popular supervised and reinforcement learning settings.
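Unitary scalarization is simply the unweighted sum of the per-task losses; a minimal PyTorch-style sketch (the module and loss names are hypothetical):

```python
import torch

def unitary_scalarization_step(model, batches, task_losses, optimizer,
                               weight_decay=1e-4):
    """One update: sum the per-task losses with unit weights, plus a plain
    L2 penalty standing in for standard single-task regularization."""
    optimizer.zero_grad()
    total = sum(loss_fn(model(x), y)
                for (x, y), loss_fn in zip(batches, task_losses))
    total = total + weight_decay * sum(p.pow(2).sum()
                                       for p in model.parameters())
    total.backward()
    optimizer.step()
    return float(total)
```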
no code implementations • 6 Jan 2022 • Jamie Hayes, Borja Balle, M. Pawan Kumar
We study the difficulties in learning that arise from robust and differentially private optimization.
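The differentially private side is typically instantiated with DP-SGD: clip each per-example gradient, then add Gaussian noise. A schematic NumPy version, illustrative only (the paper studies how this mechanism interacts with robust training, not this exact code):

```python
import numpy as np

def dp_sgd_update(w, per_example_grads, lr=0.1, clip_norm=1.0,
                  noise_mult=1.1, rng=None):
    """Clip each example's gradient to norm <= clip_norm, average, and
    add Gaussian noise calibrated to the clipping norm."""
    rng = rng or np.random.default_rng(0)
    clipped = [g * min(1.0, clip_norm / (np.linalg.norm(g) + 1e-12))
               for g in per_example_grads]
    mean_grad = np.mean(clipped, axis=0)
    noise = rng.normal(0.0, noise_mult * clip_norm / len(clipped),
                       size=w.shape)
    return w - lr * (mean_grad + noise)
```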
no code implementations • NeurIPS 2021 • Harkirat Singh Behl, M. Pawan Kumar, Philip Torr, Krishnamurthy Dvijotham
Recent progress in neural network verification has challenged the notion of a convex barrier, that is, an inherent weakness in the convex relaxation of the output of a neural network.
no code implementations • AAAI Workshop AdvML 2022 • Florian Jaeckle, Aleksandr Agadzhanov, Jingyue Lu, M. Pawan Kumar
The GNN outputs a smaller subspace for the PGD attack to focus on.
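The attack being guided is standard projected gradient descent; a minimal PyTorch sketch, without the learned subspace restriction that constitutes the paper's contribution (names are illustrative):

```python
import torch

def pgd_attack(model, loss_fn, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """L-infinity PGD: ascend the loss, then project back into the
    eps-ball around the clean input after every step."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x.detach() + (x_adv - x).clamp(-eps, eps)  # project
        x_adv = x_adv.clamp(0.0, 1.0)  # keep a valid image
    return x_adv.detach()
```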
no code implementations • 26 Oct 2021 • Jingyue Lu, M. Pawan Kumar
However, many of them rely on strong adversaries, which can be prohibitively expensive to generate when the input dimension is high and the model structure is complicated.
no code implementations • 29 Sep 2021 • Alasdair Paren, Rudra Poudel, M. Pawan Kumar
We introduce a novel extension of this idea to tasks where the interpolation property does not hold.
no code implementations • 27 Jul 2021 • Florian Jaeckle, Jingyue Lu, M. Pawan Kumar
Our combined framework achieves a 50% reduction in both the number of branches and the time required for verification on various convolutional networks when compared to several state-of-the-art verification methods.
1 code implementation • 9 Jul 2021 • Francisco Eiras, Motasem Alfarra, M. Pawan Kumar, Philip H. S. Torr, Puneet K. Dokania, Bernard Ghanem, Adel Bibi
Randomized smoothing has recently emerged as an effective tool that enables certification of deep neural network classifiers at scale.
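The underlying procedure (Cohen et al., 2019) classifies by majority vote under Gaussian input noise; a simplified Monte-Carlo sketch that omits the statistical test used for actual certification:

```python
import torch

def smoothed_predict(model, x, sigma=0.25, n_samples=1000, num_classes=10):
    """Estimate g(x) = argmax_c P(f(x + N(0, sigma^2 I)) = c) by sampling.
    `x` is a single input with a batch dimension of 1."""
    counts = torch.zeros(num_classes)
    with torch.no_grad():
        for _ in range(n_samples):
            noisy = x + sigma * torch.randn_like(x)
            counts[model(noisy).argmax(dim=-1)] += 1
    return counts.argmax().item()
```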
no code implementations • 30 May 2021 • Florian Jaeckle, M. Pawan Kumar
Recent years have witnessed the deployment of adversarial attacks to evaluate the robustness of Neural Networks.
no code implementations • 20 May 2021 • Leonard Berrada, Andrew Zisserman, M. Pawan Kumar
This is a short note on the performance of the ALI-G algorithm (Berrada et al., 2020) as reported in (Loizou et al., 2021).
no code implementations • 14 Apr 2021 • Alessandro De Palma, Rudy Bunel, Alban Desmaison, Krishnamurthy Dvijotham, Pushmeet Kohli, Philip H. S. Torr, M. Pawan Kumar
Finally, we design a BaB framework, named Branch and Dual Network Bound (BaDNB), based on our novel bounding and branching algorithms.
1 code implementation • NeurIPS 2021 • Leonard Berrada, Sumanth Dathathri, Krishnamurthy Dvijotham, Robert Stanforth, Rudy Bunel, Jonathan Uesato, Sven Gowal, M. Pawan Kumar
In this direction, we first introduce a general formulation of probabilistic specifications for neural networks, which captures both probabilistic networks (e.g., Bayesian neural networks, MC-Dropout networks) and uncertain inputs (distributions over inputs arising from sensor noise or other perturbations).
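Schematically, such a specification asks that a property hold with high probability over every source of randomness; an illustrative paraphrase (not the paper's exact notation) is

$$ \mathbb{P}_{x \sim \mathcal{D},\ \theta \sim q}\big[f_\theta(x) \in S\big] \ \ge\ 1 - \delta, $$

where $\mathcal{D}$ captures input uncertainty (e.g., sensor noise) and $q$ the distribution over network weights.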
no code implementations • ICLR 2021 • Alessandro De Palma, Harkirat Singh Behl, Rudy Bunel, Philip H. S. Torr, M. Pawan Kumar
Tight and efficient neural network bounding is crucial to the scaling of neural network verification systems.
no code implementations • 1 Jan 2021 • Jingyue Lu, M. Pawan Kumar
We demonstrate that, by maximizing the use of adversaries, we achieve high robust accuracy with weak adversaries.
no code implementations • ECCV 2020 • Aditya Arun, C. V. Jawahar, M. Pawan Kumar
Recent approaches for weakly supervised instance segmentation depend on two components: (i) a pseudo label generation model that provides instances consistent with a given annotation; and (ii) an instance segmentation model, which is trained in a supervised manner using the pseudo labels as ground truth.
Ranked #6 on Image-level Supervised Instance Segmentation on PASCAL VOC 2012 val (using extra training data)
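A generic sketch of that two-component pattern, with every name hypothetical (PyTorch-style, assuming differentiable losses):

```python
def train_with_pseudo_labels(label_generator, segmenter, images, weak_annots,
                             supervised_loss, optimizer, epochs=10):
    """(i) Sample instance masks consistent with the weak annotation,
    (ii) fit the segmentation model on them as if they were ground truth."""
    for _ in range(epochs):
        for img, annot in zip(images, weak_annots):
            pseudo_mask = label_generator(img, annot)  # annotation-consistent
            optimizer.zero_grad()
            loss = supervised_loss(segmenter(img), pseudo_mask)
            loss.backward()
            optimizer.step()
```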
1 code implementation • NeurIPS 2020 • Prateek Gupta, Maxime Gasse, Elias B. Khalil, M. Pawan Kumar, Andrea Lodi, Yoshua Bengio
First, in a more realistic setting where only a CPU is available, is the GNN model still competitive?
2 code implementations • 24 Feb 2020 • Rudy Bunel, Alessandro De Palma, Alban Desmaison, Krishnamurthy Dvijotham, Pushmeet Kohli, Philip H. S. Torr, M. Pawan Kumar
Both algorithms offer three advantages: (i) they yield bounds that are provably at least as tight as previous dual algorithms relying on Lagrangian relaxations; (ii) they are based on operations analogous to the forward/backward passes of neural network layers, and are therefore easily parallelizable, amenable to GPU implementation, and able to exploit the convolutional structure of problems; and (iii) they allow for anytime stopping while still providing valid bounds.
1 code implementation • ICLR 2020 • Jingyue Lu, M. Pawan Kumar
Empirically, our framework achieves roughly a 50% reduction in both the number of branches and the time required for verification on various convolutional networks when compared to the best available hand-designed branching strategy.
no code implementations • 14 Sep 2019 • Rudy Bunel, Jingyue Lu, Ilker Turkaslan, Philip H. S. Torr, Pushmeet Kohli, M. Pawan Kumar
We use the data sets to conduct a thorough experimental comparison of existing and new algorithms and to provide an inclusive analysis of the factors impacting the hardness of verification problems.
1 code implementation • ICML 2020 • Leonard Berrada, Andrew Zisserman, M. Pawan Kumar
In modern supervised learning, many deep neural networks are able to interpolate the data: the empirical loss can be driven to near zero on all samples simultaneously.
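Under this interpolation assumption, the ALI-G step size reduces to a clipped Polyak step, $\gamma_t = \min\{\eta,\ f(w_t)/(\|\nabla f(w_t)\|^2 + \delta)\}$; a schematic PyTorch version (a sketch of the published rule, not the reference implementation):

```python
import torch

def alig_step(params, loss, max_lr=0.1, delta=1e-5):
    """Clipped Polyak step: valid when interpolation makes the
    optimal loss approximately zero."""
    grads = torch.autograd.grad(loss, params)
    grad_sq = sum(g.pow(2).sum() for g in grads)
    gamma = torch.clamp(loss.detach() / (grad_sq + delta), max=max_lr)
    with torch.no_grad():
        for p, g in zip(params, grads):
            p -= gamma * g
```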
no code implementations • CVPR 2019 • Aditya Arun, C. V. Jawahar, M. Pawan Kumar
This allows us to use a state-of-the-art discrete generative model that can provide annotation-consistent samples from the conditional distribution.
1 code implementation • ICLR 2019 • Leonard Berrada, Andrew Zisserman, M. Pawan Kumar
Furthermore, we compare our algorithm to SGD with a hand-designed learning rate schedule, and show that it provides similar generalization while converging faster.
1 code implementation • ICLR 2019 • Stefan Webb, Tom Rainforth, Yee Whye Teh, M. Pawan Kumar
Furthermore, it provides an ability to scale to larger networks than formal verification approaches.
no code implementations • 24 Jul 2018 • Aditya Arun, C. V. Jawahar, M. Pawan Kumar
In order to avoid the high cost of full supervision, we propose to use a diverse data set, which consists of two types of annotations: (i) a small number of images are labeled using the expensive ground-truth pose; and (ii) other images are labeled using the inexpensive action label.
no code implementations • 23 May 2018 • Thomas Joy, Alban Desmaison, Thalaiyasingam Ajanthan, Rudy Bunel, Mathieu Salzmann, Pushmeet Kohli, Philip H. S. Torr, M. Pawan Kumar
The presented algorithms can be applied to any labelling problem using a dense CRF with sparse higher-order potentials.
1 code implementation • ICLR 2018 • Leonard Berrada, Andrew Zisserman, M. Pawan Kumar
We compare the performance of the cross-entropy loss and our margin-based losses in various regimes of noise and data size, for the predominant use case of k=5.
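One simple (non-smooth) variant of a top-$k$ margin loss, shown only to fix ideas; the paper works with a smoothed counterpart:

```python
import torch

def topk_hinge_loss(scores, y, k=5, margin=1.0):
    """Zero iff the true score beats the k-th largest competing score
    by `margin`, i.e. the true class stays in the top k with a buffer."""
    wrong = scores.clone()
    wrong[torch.arange(scores.size(0)), y] = float('-inf')
    kth_wrong = wrong.topk(k, dim=1).values[:, -1]
    true_score = scores.gather(1, y.unsqueeze(1)).squeeze(1)
    return torch.clamp(margin + kth_wrong - true_score, min=0.0).mean()
```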
1 code implementation • 10 Jan 2018 • Pankaj Pansari, Chris Russell, M. Pawan Kumar
Submodular extensions of an energy function can be used to efficiently compute approximate marginals via variational inference.
no code implementations • ICLR 2018 • Rudy Bunel, Ilker Turkaslan, Philip H. S. Torr, Pushmeet Kohli, M. Pawan Kumar
Motivated by the need to accelerate progress in this important area, we investigate the trade-offs of a number of approaches based on Mixed Integer Programming and Satisfiability Modulo Theories, as well as a novel method based on the Branch-and-Bound framework.
no code implementations • 26 Nov 2017 • James Pritts, Denys Rozumnyi, M. Pawan Kumar, Ondrej Chum
This paper proposes an automated method to detect, group, and rectify arbitrarily arranged coplanar repeated elements via energy minimization.
2 code implementations • NeurIPS 2018 • Rudy Bunel, Ilker Turkaslan, Philip H. S. Torr, Pushmeet Kohli, M. Pawan Kumar
The success of Deep Learning and its potential use in many safety-critical applications has motivated research on formal verification of Neural Network (NN) models.
no code implementations • 4 Dec 2016 • Rudy Bunel, Alban Desmaison, M. Pawan Kumar, Philip H. S. Torr, Pushmeet Kohli
Superoptimization requires the estimation of the best program for a given computational task.
no code implementations • CVPR 2017 • Thalaiyasingam Ajanthan, Alban Desmaison, Rudy Bunel, Mathieu Salzmann, Philip H. S. Torr, M. Pawan Kumar
To this end, we develop a proximal minimization framework, where the dual of each proximal problem is optimized via block coordinate descent.
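In generic form (illustrative notation), each proximal step solves

$$ \mathbf{x}^{t+1} = \operatorname*{arg\,min}_{\mathbf{x}}\ E(\mathbf{x}) + \frac{1}{2\lambda}\big\|\mathbf{x} - \mathbf{x}^{t}\big\|^2, $$

and it is the dual of this better-conditioned subproblem that is optimized by block coordinate descent.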
2 code implementations • 7 Nov 2016 • Leonard Berrada, Andrew Zisserman, M. Pawan Kumar
We present a novel layerwise optimization algorithm for the learning objective of Piecewise-Linear Convolutional Neural Networks (PL-CNNs), a large class of convolutional neural networks.
no code implementations • 6 Nov 2016 • Rudy Bunel, Alban Desmaison, M. Pawan Kumar, Philip H. S. Torr, Pushmeet Kohli
This approach involves repeated sampling of modifications to the program from a proposal distribution, which are accepted or rejected based on whether they preserve correctness, and the improvement they achieve.
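This is essentially Metropolis-Hastings over programs; a toy sketch in which `propose_edit`, `cost`, and `is_correct` are hypothetical placeholders rather than the paper's API:

```python
import math
import random

def stochastic_superoptimize(program, propose_edit, cost, is_correct,
                             steps=10_000, beta=1.0):
    """Propose a local edit; reject it if it breaks correctness, otherwise
    accept with probability min(1, exp(-beta * (new_cost - old_cost)))."""
    best = current = program
    for _ in range(steps):
        candidate = propose_edit(current)
        if not is_correct(candidate):
            continue
        delta = cost(candidate) - cost(current)
        if delta <= 0 or random.random() < math.exp(-beta * delta):
            current = candidate
            if cost(current) < cost(best):
                best = current
    return best
```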
no code implementations • 22 Aug 2016 • Alban Desmaison, Rudy Bunel, Pushmeet Kohli, Philip H. S. Torr, M. Pawan Kumar
In contrast to the continuous relaxation-based energy minimisation algorithms used for sparse CRFs, the mean-field algorithm fails to provide strong theoretical guarantees on the quality of its solutions.
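For reference, the mean-field update for a CRF with unary potentials $\psi_u$ and pairwise potentials $\psi_p$ has the standard fixed-point form

$$ Q_i(x_i) \ \propto\ \exp\Big(-\psi_u(x_i) - \sum_{j \neq i} \sum_{x_j} Q_j(x_j)\, \psi_p(x_i, x_j)\Big), $$

which converges only to a local optimum of the KL objective and gives no bound on the energy of its solution, hence the contrast drawn here.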
no code implementations • 8 Jun 2016 • Diane Bouchacourt, M. Pawan Kumar, Sebastian Nowozin
We present a new type of probabilistic model, which we call DISsimilarity COefficient Networks (DISCO Nets).
1 code implementation • NeurIPS 2016 • Rudy Bunel, Alban Desmaison, Pushmeet Kohli, Philip H. S. Torr, M. Pawan Kumar
We show that it is possible to compile programs written in a low-level language to a differentiable representation.
no code implementations • CVPR 2018 • Pritish Mohapatra, Michal Rolinek, C. V. Jawahar, Vladimir Kolmogorov, M. Pawan Kumar
We provide a complete characterization of the loss functions that are amenable to our algorithm, and show that it includes both AP and NDCG based loss functions.
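In standard form, with $P$ positives, $\mathrm{Prec}(k)$ the precision at rank $k$, and $\mathrm{rel}_i$ the graded relevance at rank $i$:

$$ \mathrm{AP} = \frac{1}{P}\sum_{k=1}^{n} \mathrm{Prec}(k)\,\mathbf{1}[\text{rank } k \text{ is positive}], \qquad \mathrm{NDCG} = \frac{1}{\mathrm{IDCG}} \sum_{i=1}^{n} \frac{2^{\mathrm{rel}_i} - 1}{\log_2(i+1)}. $$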
no code implementations • CVPR 2017 • Pankaj Pansari, M. Pawan Kumar
In order to minimize the energy function of a TMCM over all possible labelings, we design an efficient st-MINCUT based range expansion algorithm.
no code implementations • ICCV 2015 • Diane Bouchacourt, Sebastian Nowozin, M. Pawan Kumar
To this end, we propose a novel prediction criterion that includes as special cases all previous prediction criteria that have been used in the literature.
no code implementations • ICCV 2015 • Puneet K. Dokania, M. Pawan Kumar
Furthermore, we propose an efficient graph-cuts based algorithm for the parsimonious labeling problem that provides strong theoretical guarantees on the quality of the solution.
no code implementations • NeurIPS 2014 • Pritish Mohapatra, C. V. Jawahar, M. Pawan Kumar
The accuracy of information retrieval systems is often measured using average precision (AP).
no code implementations • NeurIPS 2014 • M. Pawan Kumar
Metric labeling is a special case of energy minimization for pairwise Markov random fields.
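Concretely, metric labeling minimizes an energy of the standard form

$$ E(\mathbf{x}) = \sum_{i} \theta_i(x_i) + \sum_{(i,j) \in \mathcal{E}} w_{ij}\, d(x_i, x_j), $$

where $w_{ij} \ge 0$ and $d(\cdot,\cdot)$ is a metric on the label set.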
no code implementations • CVPR 2014 • Aseem Behl, C. V. Jawahar, M. Pawan Kumar
The performance of binary classification tasks, such as action classification and object detection, is often measured in terms of the average precision (AP).
no code implementations • 30 Aug 2013 • Pierre-Yves Baudin, Danny Goodman, Puneet Kumar, Noura Azzabou, Pierre G. Carlier, Nikos Paragios, M. Pawan Kumar
However, one of the main drawbacks of using the RW algorithm is that its parameters have to be hand-tuned.
no code implementations • 5 Jun 2013 • Pierre-Yves Baudin, Danny Goodman, Puneet Kumar, Noura Azzabou, Pierre G. Carlier, Nikos Paragios, M. Pawan Kumar
However, one of the main drawbacks of using the RW algorithm is that its parameters have to be hand-tuned.