Search Results for author: M. Pawan Kumar

Found 52 papers, 17 papers with code

Smooth Loss Functions for Deep Top-k Classification

1 code implementation ICLR 2018 Leonard Berrada, Andrew Zisserman, M. Pawan Kumar

We compare the performance of the cross-entropy loss and our margin-based losses in various regimes of noise and data size, for the predominant use case of k=5.

Classification, General Classification
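
As background for the entry above, the sketch below shows a generic smoothed top-k hinge surrogate in PyTorch. It only illustrates the idea of smoothing a top-k loss with a temperature-scaled log-sum-exp; it is not the exact loss of Berrada et al., which smooths a top-k SVM over k-element subsets of labels, and the function name and arguments are ours.

```python
import torch

def smooth_topk_surrogate(scores, target, k=5, tau=1.0, margin=1.0):
    """Illustrative smooth top-k surrogate (not the exact ICLR 2018 loss).

    scores: (batch, n_classes) raw logits; target: (batch,) class indices.
    A margin is added to every wrong class, the k hardest classes are kept,
    and their maximum is smoothed with a temperature-scaled log-sum-exp so
    the loss is differentiable everywhere.
    """
    true_score = scores.gather(1, target.unsqueeze(1))             # (batch, 1)
    margins = scores + margin                                      # margin on every class
    margins = margins.scatter(1, target.unsqueeze(1), true_score)  # remove margin for the true class
    topk_vals, _ = margins.topk(k, dim=1)                          # k hardest competitors
    soft_max_k = tau * torch.logsumexp(topk_vals / tau, dim=1, keepdim=True)
    return torch.relu(soft_max_k - true_score).mean()
```

For example, with scores = torch.randn(8, 100, requires_grad=True) and target = torch.randint(100, (8,)), the returned scalar can be backpropagated like any other loss; as tau goes to zero the surrogate approaches a non-smooth top-k hinge.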

Trusting SVM for Piecewise Linear CNNs

2 code implementations 7 Nov 2016 Leonard Berrada, Andrew Zisserman, M. Pawan Kumar

We present a novel layerwise optimization algorithm for the learning objective of Piecewise-Linear Convolutional Neural Networks (PL-CNNs), a large class of convolutional neural networks.

Make Sure You're Unsure: A Framework for Verifying Probabilistic Specifications

1 code implementation NeurIPS 2021 Leonard Berrada, Sumanth Dathathri, Krishnamurthy Dvijotham, Robert Stanforth, Rudy Bunel, Jonathan Uesato, Sven Gowal, M. Pawan Kumar

In this direction, we first introduce a general formulation of probabilistic specifications for neural networks, which captures both probabilistic networks (e.g., Bayesian neural networks, MC-Dropout networks) and uncertain inputs (distributions over inputs arising from sensor noise or other perturbations).

Adversarial Robustness, Out of Distribution (OOD) Detection
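
One common way to write such a probabilistic specification, as background for the entry above, is shown below; the notation is ours and is only a sketch of the paper's more general formulation.

```latex
% A network f with random parameters \theta \sim Q, evaluated on a random
% input x \sim P, must land in a safe output set S with probability >= 1 - \delta:
\Pr_{x \sim P,\; \theta \sim Q}\bigl[\, f_\theta(x) \in S \,\bigr] \;\ge\; 1 - \delta
```

Deterministic verification is recovered as the special case where P and Q are point masses and \delta = 0.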

Deep Frank-Wolfe For Neural Network Optimization

1 code implementation ICLR 2019 Leonard Berrada, Andrew Zisserman, M. Pawan Kumar

Furthermore, we compare our algorithm to SGD with a hand-designed learning rate schedule, and show that it provides similar generalization while converging faster.

Adaptive Neural Compilation

1 code implementation NeurIPS 2016 Rudy Bunel, Alban Desmaison, Pushmeet Kohli, Philip H. S. Torr, M. Pawan Kumar

We show that it is possible to compile programs written in a low-level language to a differentiable representation.

Hybrid Models for Learning to Branch

1 code implementation NeurIPS 2020 Prateek Gupta, Maxime Gasse, Elias B. Khalil, M. Pawan Kumar, Andrea Lodi, Yoshua Bengio

First, in a more realistic setting where only a CPU is available, is the GNN model still competitive?

A Unified View of Piecewise Linear Neural Network Verification

2 code implementations NeurIPS 2018 Rudy Bunel, Ilker Turkaslan, Philip H. S. Torr, Pushmeet Kohli, M. Pawan Kumar

The success of Deep Learning and its potential use in many safety-critical applications has motivated research on formal verification of Neural Network (NN) models.

Training Neural Networks for and by Interpolation

1 code implementation ICML 2020 Leonard Berrada, Andrew Zisserman, M. Pawan Kumar

In modern supervised learning, many deep neural networks are able to interpolate the data: the empirical loss can be driven to near zero on all samples simultaneously.
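
The entry above exploits the fact that, under interpolation, the minimal attainable loss is known to be approximately zero, which enables a Polyak-style adaptive step size. The sketch below shows only that core update in the spirit of ALI-G (Berrada et al., 2020); it omits momentum, weight decay, and the exact projection used in the paper, and the helper name is ours.

```python
import torch

def interpolation_step(params, loss, max_lr=0.1, eps=1e-8):
    """One SGD-like update with a Polyak-style step size.

    Minimal sketch under the interpolation assumption: since the optimal loss
    is ~0, the current loss value itself sets the learning rate, clipped at a
    maximal value max_lr.
    """
    grads = torch.autograd.grad(loss, params)
    grad_sq_norm = sum(g.pow(2).sum() for g in grads)
    step_size = torch.clamp(loss.detach() / (grad_sq_norm + eps), max=max_lr)
    with torch.no_grad():
        for p, g in zip(params, grads):
            p.sub_(step_size * g)
    return float(step_size)
```

In practice, params would be the list of trainable tensors, e.g. [p for p in model.parameters() if p.requires_grad], and loss the unregularized mini-batch loss.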

In Defense of the Unitary Scalarization for Deep Multi-Task Learning

1 code implementation 11 Jan 2022 Vitaly Kurin, Alessandro De Palma, Ilya Kostrikov, Shimon Whiteson, M. Pawan Kumar

We show that unitary scalarization, coupled with standard regularization and stabilization techniques from single-task learning, matches or improves upon the performance of complex multi-task optimizers in popular supervised and reinforcement learning settings.

Multi-Task Learning, Reinforcement Learning (RL)

A Statistical Approach to Assessing Neural Network Robustness

1 code implementation ICLR 2019 Stefan Webb, Tom Rainforth, Yee Whye Teh, M. Pawan Kumar

Furthermore, it provides an ability to scale to larger networks than formal verification approaches.

Neural Network Branching for Neural Network Verification

1 code implementation ICLR 2020 Jingyue Lu, M. Pawan Kumar

Empirically, our framework achieves roughly a 50% reduction in both the number of branches and the time required for verification on various convolutional networks when compared to the best available hand-designed branching strategy.

Lagrangian Decomposition for Neural Network Verification

2 code implementations 24 Feb 2020 Rudy Bunel, Alessandro De Palma, Alban Desmaison, Krishnamurthy Dvijotham, Pushmeet Kohli, Philip H. S. Torr, M. Pawan Kumar

Both algorithms offer three advantages: (i) they yield bounds that are provably at least as tight as those of previous dual algorithms relying on Lagrangian relaxations; (ii) they are based on operations analogous to the forward/backward passes of neural network layers and are therefore easily parallelizable, amenable to GPU implementation, and able to exploit the convolutional structure of problems; and (iii) they allow for anytime stopping while still providing valid bounds.


ANCER: Anisotropic Certification via Sample-wise Volume Maximization

1 code implementation 9 Jul 2021 Francisco Eiras, Motasem Alfarra, M. Pawan Kumar, Philip H. S. Torr, Puneet K. Dokania, Bernard Ghanem, Adel Bibi

Randomized smoothing has recently emerged as an effective tool that enables certification of deep neural network classifiers at scale.
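
As background for the entry above, the sketch below is a Monte-Carlo version of standard isotropic randomized smoothing; ANCER's contribution, anisotropic and sample-wise noise, is not shown. The radius uses the plug-in estimate sigma * Phi^{-1}(p_top) rather than the confidence-bounded certification procedure of Cohen et al. (2019), and all names are ours.

```python
import torch
from scipy.stats import norm

def smoothed_predict_and_radius(model, x, sigma=0.25, n_samples=1000, num_classes=10):
    """Isotropic randomized smoothing baseline (illustrative only).

    Classifies x under Gaussian noise by majority vote and returns a rough
    certified l2 radius sigma * Phi^{-1}(p_top). A rigorous certificate would
    use separate selection/estimation samples and a lower confidence bound.
    """
    counts = torch.zeros(num_classes)
    with torch.no_grad():
        for _ in range(n_samples):
            noisy = x + sigma * torch.randn_like(x)
            pred = model(noisy).argmax(dim=-1).item()
            counts[pred] += 1
    top_class = int(counts.argmax())
    p_top = min(counts[top_class].item() / n_samples, 1.0 - 1e-6)
    radius = sigma * norm.ppf(p_top) if p_top > 0.5 else 0.0
    return top_class, radius
```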

A Stochastic Bundle Method for Interpolating Networks

1 code implementation 29 Jan 2022 Alasdair Paren, Leonard Berrada, Rudra P. K. Poudel, M. Pawan Kumar

We propose a novel method for training deep neural networks that are capable of interpolation, that is, driving the empirical loss to zero.

IBP Regularization for Verified Adversarial Robustness via Branch-and-Bound

1 code implementation 29 Jun 2022 Alessandro De Palma, Rudy Bunel, Krishnamurthy Dvijotham, M. Pawan Kumar, Robert Stanforth

Recent works have tried to increase the verifiability of adversarially trained networks by running the attacks over domains larger than the original perturbations and adding various regularization terms to the objective.

Adversarial Robustness

Expressive Losses for Verified Robustness via Convex Combinations

1 code implementation 23 May 2023 Alessandro De Palma, Rudy Bunel, Krishnamurthy Dvijotham, M. Pawan Kumar, Robert Stanforth, Alessio Lomuscio

In order to train networks for verified adversarial robustness, it is common to over-approximate the worst-case loss over perturbation regions, resulting in networks that attain verifiability at the expense of standard performance.

Adversarial Robustness
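
The title of the entry above refers to losses built as convex combinations of an attack-based loss (a lower bound on the worst-case loss) and a verified over-approximation (an upper bound). The one-liner below only illustrates that interpolation at the loss level; the paper's concrete instantiations combine the two terms in more specific ways, and the function name is ours.

```python
def expressive_loss(adv_loss, verified_loss, alpha):
    """Convex combination of an adversarial loss and a verified upper bound.

    Illustrative only: alpha in [0, 1] trades standard/adversarial accuracy
    against verifiability. alpha = 0 recovers adversarial training; alpha = 1
    recovers training purely on the verified over-approximation.
    """
    assert 0.0 <= alpha <= 1.0
    return (1.0 - alpha) * adv_loss + alpha * verified_loss
```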

Efficient Optimization for Rank-based Loss Functions

no code implementations CVPR 2018 Pritish Mohapatra, Michal Rolinek, C. V. Jawahar, Vladimir Kolmogorov, M. Pawan Kumar

We provide a complete characterization of the loss functions that are amenable to our algorithm, and show that it includes both AP and NDCG based loss functions.

Information Retrieval, Retrieval

Worst-case Optimal Submodular Extensions for Marginal Estimation

1 code implementation 10 Jan 2018 Pankaj Pansari, Chris Russell, M. Pawan Kumar

Submodular extensions of an energy function can be used to efficiently compute approximate marginals via variational inference.

Variational Inference

Coplanar Repeats by Energy Minimization

no code implementations 26 Nov 2017 James Pritts, Denys Rozumnyi, M. Pawan Kumar, Ondrej Chum

This paper proposes an automated method to detect, group and rectify arbitrarily-arranged coplanar repeated elements via energy minimization.

Learning to superoptimize programs

no code implementations 6 Nov 2016 Rudy Bunel, Alban Desmaison, M. Pawan Kumar, Philip H. S. Torr, Pushmeet Kohli

This approach involves repeated sampling of modifications to the program from a proposal distribution, which are accepted or rejected based on whether they preserve correctness and on the improvement they achieve.
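
The excerpt above describes a stochastic search over program rewrites with accept/reject decisions. The sketch below is a generic Metropolis-style version of such a loop; propose, cost, and check_correct are hypothetical callbacks, and published superoptimizers typically fold correctness into the cost rather than hard-rejecting, so treat this as an illustration only.

```python
import math
import random

def superoptimize(program, propose, cost, check_correct, steps=10000, beta=1.0):
    """Generic stochastic search over program rewrites (illustrative).

    Candidate rewrites are drawn from a proposal distribution, rejected if
    they change observable behaviour, and otherwise accepted with a
    Metropolis-style probability that favours cheaper (faster/shorter) code.
    """
    current, current_cost = program, cost(program)
    best, best_cost = current, current_cost
    for _ in range(steps):
        candidate = propose(current)
        if not check_correct(candidate):
            continue  # reject: the rewrite is not semantics-preserving
        candidate_cost = cost(candidate)
        # always accept improvements; accept regressions with decaying probability
        if candidate_cost <= current_cost or \
                random.random() < math.exp(-beta * (candidate_cost - current_cost)):
            current, current_cost = candidate, candidate_cost
            if current_cost < best_cost:
                best, best_cost = current, current_cost
    return best
```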

Efficient Linear Programming for Dense CRFs

no code implementations CVPR 2017 Thalaiyasingam Ajanthan, Alban Desmaison, Rudy Bunel, Mathieu Salzmann, Philip H. S. Torr, M. Pawan Kumar

To this end, we develop a proximal minimization framework, where the dual of each proximal problem is optimized via block coordinate descent.

Semantic Segmentation

Truncated Max-of-Convex Models

no code implementations CVPR 2017 Pankaj Pansari, M. Pawan Kumar

In order to minimize the energy function of a TMCM over all possible labelings, we design an efficient st-MINCUT based range expansion algorithm.

DISCO Nets: DISsimilarity COefficient Networks

no code implementations 8 Jun 2016 Diane Bouchacourt, M. Pawan Kumar, Sebastian Nowozin

We present a new type of probabilistic model which we call DISsimilarity COefficient Networks (DISCO Nets).

Efficient Continuous Relaxations for Dense CRF

no code implementations 22 Aug 2016 Alban Desmaison, Rudy Bunel, Pushmeet Kohli, Philip H. S. Torr, M. Pawan Kumar

In contrast to the continuous relaxation-based energy minimisation algorithms used for sparse CRFs, the mean-field algorithm fails to provide strong theoretical guarantees on the quality of its solutions.

Semantic Segmentation, Variational Inference

Parsimonious Labeling

no code implementations ICCV 2015 Puneet K. Dokania, M. Pawan Kumar

Furthermore, we propose an efficient graph-cuts based algorithm for the parsimonious labeling problem that provides strong theoretical guarantees on the quality of the solution.

Image Denoising

Learning Human Poses from Actions

no code implementations 24 Jul 2018 Aditya Arun, C. V. Jawahar, M. Pawan Kumar

In order to avoid the high cost of full supervision, we propose to use a diverse data set, which consists of two types of annotations: (i) a small number of images are labeled using the expensive ground-truth pose; and (ii) other images are labeled using the inexpensive action label.

Dissimilarity Coefficient based Weakly Supervised Object Detection

no code implementations CVPR 2019 Aditya Arun, C. V. Jawahar, M. Pawan Kumar

This allows us to use a state-of-the-art discrete generative model that can provide annotation-consistent samples from the conditional distribution.

Object, object-detection +2

Rounding-based Moves for Metric Labeling

no code implementations NeurIPS 2014 M. Pawan Kumar

Metric labeling is a special case of energy minimization for pairwise Markov random fields.

Piecewise Linear Neural Networks verification: A comparative study

no code implementations ICLR 2018 Rudy Bunel, Ilker Turkaslan, Philip H. S. Torr, Pushmeet Kohli, M. Pawan Kumar

Motivated by the need of accelerating progress in this very important area, we investigate the trade-offs of a number of different approaches based on Mixed Integer Programming, Satisfiability Modulo Theory, as well as a novel method based on the Branch-and-Bound framework.

Optimizing Average Precision using Weakly Supervised Data

no code implementations CVPR 2014 Aseem Behl, C. V. Jawahar, M. Pawan Kumar

The performance of binary classification tasks, such as action classification and object detection, is often measured in terms of the average precision (AP).

Action Classification, Binary Classification +5

Entropy-Based Latent Structured Output Prediction

no code implementations ICCV 2015 Diane Bouchacourt, Sebastian Nowozin, M. Pawan Kumar

To this end, we propose a novel prediction criterion that includes as special cases all previous prediction criteria that have been used in the literature.

Structured Prediction

Branch and Bound for Piecewise Linear Neural Network Verification

no code implementations 14 Sep 2019 Rudy Bunel, Jingyue Lu, Ilker Turkaslan, Philip H. S. Torr, Pushmeet Kohli, M. Pawan Kumar

We use the data sets to conduct a thorough experimental comparison of existing and new algorithms and to provide an inclusive analysis of the factors impacting the hardness of verification problems.

Weakly Supervised Instance Segmentation by Learning Annotation Consistent Instances

no code implementations ECCV 2020 Aditya Arun, C. V. Jawahar, M. Pawan Kumar

Recent approaches for weakly supervised instance segmentation depend on two components: (i) a pseudo label generation model that provides instances which are consistent with a given annotation; and (ii) an instance segmentation model, which is trained in a supervised manner using the pseudo labels as ground-truth.

Image-level Supervised Instance Segmentation, Pseudo Label +3

Improving Local Effectiveness for Global Robustness Training

no code implementations 1 Jan 2021 Jingyue Lu, M. Pawan Kumar

We demonstrate that, by maximizing the use of adversaries, we achieve high robust accuracy with weak adversaries.

Comment on Stochastic Polyak Step-Size: Performance of ALI-G

no code implementations 20 May 2021 Leonard Berrada, Andrew Zisserman, M. Pawan Kumar

This is a short note on the performance of the ALI-G algorithm (Berrada et al., 2020) as reported in (Loizou et al., 2021).

Generating Adversarial Examples with Graph Neural Networks

no code implementations 30 May 2021 Florian Jaeckle, M. Pawan Kumar

Recent years have witnessed the deployment of adversarial attacks to evaluate the robustness of Neural Networks.

Neural Network Branch-and-Bound for Neural Network Verification

no code implementations 27 Jul 2021 Florian Jaeckle, Jingyue Lu, M. Pawan Kumar

Our combined framework achieves a 50% reduction in both the number of branches and the time required for verification on various convolutional networks when compared to several state-of-the-art verification methods.


Faking Interpolation Until You Make It

no code implementations 29 Sep 2021 Alasdair Paren, Rudra Poudel, M. Pawan Kumar

We introduce a novel extension of this idea to tasks where the interpolation property does not hold.

Improving Local Effectiveness for Global robust training

no code implementations 26 Oct 2021 Jingyue Lu, M. Pawan Kumar

However, many of them rely on strong adversaries, which can be prohibitively expensive to generate when the input dimension is high and the model structure is complicated.

Overcoming the Convex Barrier for Simplex Inputs

no code implementations NeurIPS 2021 Harkirat Singh Behl, M. Pawan Kumar, Philip Torr, Krishnamurthy Dvijotham

Recent progress in neural network verification has challenged the notion of a convex barrier, that is, an inherent weakness in the convex relaxation of the output of a neural network.

Learning to be adversarially robust and differentially private

no code implementations 6 Jan 2022 Jamie Hayes, Borja Balle, M. Pawan Kumar

We study the difficulties in learning that arise from robust and differentially private optimization.

Binary Classification

Lookback for Learning to Branch

no code implementations 30 Jun 2022 Prateek Gupta, Elias B. Khalil, Didier Chételat, Maxime Gasse, Yoshua Bengio, Andrea Lodi, M. Pawan Kumar

Given that B&B results in a tree of sub-MILPs, we ask (a) whether there are strong dependencies exhibited by the target heuristic among the neighboring nodes of the B&B tree, and (b) if so, whether we can incorporate them in our training procedure.

Model Selection, Variable Selection

Provably Correct Physics-Informed Neural Networks

no code implementations 17 May 2023 Francisco Eiras, Adel Bibi, Rudy Bunel, Krishnamurthy Dj Dvijotham, Philip Torr, M. Pawan Kumar

Recent work provides promising evidence that Physics-informed neural networks (PINN) can efficiently solve partial differential equations (PDE).

Faithful Knowledge Distillation

no code implementations 7 Jun 2023 Tom A. Lamb, Rudy Bunel, Krishnamurthy Dj Dvijotham, M. Pawan Kumar, Philip H. S. Torr, Francisco Eiras

To address these questions, we introduce a faithful imitation framework to discuss the relative calibration of confidences and provide empirical and certified methods to evaluate the relative calibration of a student w.r.t. its teacher.

Adversarial Robustness, Knowledge Distillation
