Search Results for author: Krishnamurthy Dvijotham

Found 20 papers, 8 papers with code

Role of Human-AI Interaction in Selective Prediction

1 code implementation • 13 Dec 2021 • Elizabeth Bondi, Raphael Koster, Hannah Sheahan, Martin Chadwick, Yoram Bachrach, Taylan Cemgil, Ulrich Paquet, Krishnamurthy Dvijotham

Using real-world conservation data and a selective prediction system that improves expected accuracy over that of the human or AI system working individually, we show that this messaging has a significant impact on the accuracy of human judgements.
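
The deferral logic underlying such a selective prediction system is easy to simulate. The sketch below is a toy illustration only: the uniform, calibrated AI confidence, the flat 75% human accuracy, and the threshold are all made-up assumptions, not the paper's data or messaging design.

```python
# Toy selective prediction: the AI answers when confident, otherwise the
# example is deferred to the human. All numbers here are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
ai_conf = rng.random(n)                 # AI confidence per example
ai_correct = rng.random(n) < ai_conf    # calibrated: P(correct) = confidence
human_correct = rng.random(n) < 0.75    # human accuracy, flat at 75%

tau = 0.8                               # defer below this confidence
deferred = ai_conf < tau
team_correct = np.where(deferred, human_correct, ai_correct)

print("AI alone:   ", ai_correct.mean())     # ~0.50
print("human alone:", human_correct.mean())  # ~0.75
print("team:       ", team_correct.mean())   # ~0.78, above both
```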

Overcoming the Convex Barrier for Simplex Inputs

no code implementations • NeurIPS 2021 • Harkirat Singh Behl, M. Pawan Kumar, Philip Torr, Krishnamurthy Dvijotham

Recent progress in neural network verification has challenged the notion of a convex barrier, that is, an inherent weakness in the convex relaxation of the output of a neural network.

A Fine-Grained Analysis on Distribution Shift

no code implementations • ICLR 2022 • Olivia Wiles, Sven Gowal, Florian Stimberg, Sylvestre Alvise-Rebuffi, Ira Ktena, Krishnamurthy Dvijotham, Taylan Cemgil

Despite this necessity, there has been little work in defining the underlying mechanisms that cause these shifts and evaluating the robustness of algorithms across multiple, different distribution shifts.

Make Sure You're Unsure: A Framework for Verifying Probabilistic Specifications

1 code implementation • NeurIPS 2021 • Leonard Berrada, Sumanth Dathathri, Krishnamurthy Dvijotham, Robert Stanforth, Rudy Bunel, Jonathan Uesato, Sven Gowal, M. Pawan Kumar

In this direction, we first introduce a general formulation of probabilistic specifications for neural networks, which captures both probabilistic networks (e.g., Bayesian neural networks, MC-Dropout networks) and uncertain inputs (distributions over inputs arising from sensor noise or other perturbations).

Adversarial Robustness, OOD Detection
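
To make the formulation concrete: a probabilistic specification over uncertain inputs asserts that a property holds with at least some probability under a distribution on inputs. The sketch below only estimates such a probability by Monte Carlo on a made-up one-layer network; the point of the paper is to compute sound bounds on it, which sampling does not provide.

```python
# Toy probabilistic specification: under Gaussian sensor noise, the
# prediction should stay unchanged with probability >= 0.95. The network,
# noise scale, and threshold are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
W, b = rng.standard_normal((3, 5)), rng.standard_normal(3)

def net(x):
    return W @ np.maximum(x, 0.0) + b  # affine layer on top of a ReLU

x = rng.standard_normal(5)
label = net(x).argmax()
noise = 0.1 * rng.standard_normal((10_000, 5))  # uncertain inputs
preds = np.array([net(x + d).argmax() for d in noise])
p_hat = (preds == label).mean()
print("estimated P(spec holds):", p_hat)  # an estimate, not a verified bound
```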

The Autoencoding Variational Autoencoder

1 code implementation • 7 Dec 2020 • A. Taylan Cemgil, Sumedh Ghaisas, Krishnamurthy Dvijotham, Sven Gowal, Pushmeet Kohli

We provide experimental results on the ColorMnist and CelebA benchmark datasets that quantify the properties of the learned representations and compare the approach with a baseline that is specifically trained for the desired property.

Enabling certification of verification-agnostic networks via memory-efficient semidefinite programming

2 code implementations • NeurIPS 2020 • Sumanth Dathathri, Krishnamurthy Dvijotham, Alexey Kurakin, Aditi Raghunathan, Jonathan Uesato, Rudy Bunel, Shreya Shankar, Jacob Steinhardt, Ian Goodfellow, Percy Liang, Pushmeet Kohli

In this work, we propose a first-order dual SDP algorithm that (1) requires memory only linear in the total number of network activations, and (2) requires only a fixed number of forward/backward passes through the network per iteration.

Lagrangian Decomposition for Neural Network Verification

2 code implementations • 24 Feb 2020 • Rudy Bunel, Alessandro De Palma, Alban Desmaison, Krishnamurthy Dvijotham, Pushmeet Kohli, Philip H. S. Torr, M. Pawan Kumar

Both algorithms offer three advantages: (i) they yield bounds that are provably at least as tight as previous dual algorithms relying on Lagrangian relaxations; (ii) they are based on operations analogous to the forward/backward passes of neural network layers and are therefore easily parallelizable, amenable to GPU implementation and able to take advantage of the convolutional structure of problems; and (iii) they allow for anytime stopping while still providing valid bounds.
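
Advantage (iii) follows from weak duality: every dual iterate yields a valid bound, so the ascent can be stopped at any time. The toy below shows this on a small LP with box constraints; it is an arbitrary stand-in for the network verification problem, not the paper's actual decomposition.

```python
# Supergradient ascent on a Lagrangian dual: g(rho) is a valid lower bound
# on the primal optimum for every rho (weak duality), so stopping early
# is safe. Problem data are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
c = rng.standard_normal(6)
A = rng.standard_normal((2, 6))
x_feas = rng.random(6)   # build a feasible point so the primal is feasible
d = A @ x_feas           # primal: min c^T x  s.t.  A x = d,  x in [0, 1]^6

def dual(rho):
    """g(rho) = min_{x in [0,1]^n} c^T x + rho^T (A x - d), solved per coordinate."""
    reduced = c + A.T @ rho
    x_star = (reduced < 0).astype(float)
    return reduced @ x_star - rho @ d, A @ x_star - d  # value, supergradient

rho = np.zeros(2)
for t in range(200):
    value, supergrad = dual(rho)
    rho += 0.05 / np.sqrt(t + 1) * supergrad  # each iterate's value is a valid bound

print("anytime lower bound:", dual(rho)[0])
print("one feasible primal value:", c @ x_feas)  # always >= the bound
```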

Achieving Robustness in the Wild via Adversarial Mixing with Disentangled Representations

no code implementations • CVPR 2020 • Sven Gowal, Chongli Qin, Po-Sen Huang, Taylan Cemgil, Krishnamurthy Dvijotham, Timothy Mann, Pushmeet Kohli

Specifically, we leverage the disentangled latent representations computed by a StyleGAN model to generate perturbations of an image that are similar to real-world variations (like adding make-up, or changing the skin-tone of a person) and train models to be invariant to these perturbations.

Provenance detection through learning transformation-resilient watermarking

no code implementations • 25 Sep 2019 • Jamie Hayes, Krishnamurthy Dvijotham, Yutian Chen, Sander Dieleman, Pushmeet Kohli, Norman Casagrande

In this paper, we introduce ReSWAT (Resilient Signal Watermarking via Adversarial Training), a framework for learning transformation-resilient watermark detectors that are able to detect a watermark even after a signal has been through several post-processing transformations.
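
For intuition about what "detecting a watermark after post-processing" means, the sketch below uses a classical hand-coded correlation detector as a stand-in; ReSWAT's contribution is to learn the detector via adversarial training so it survives much harder transformations, which this toy does not attempt.

```python
# Classical spread-spectrum detection by correlation (a stand-in, not
# ReSWAT's learned detector): the embedded key survives additive noise.
import numpy as np

rng = np.random.default_rng(0)
n = 4096
signal = rng.standard_normal(n)
key = rng.choice([-1.0, 1.0], size=n)  # secret watermark pattern
marked = signal + 0.1 * key            # embed at strength 0.1

received = marked + 0.5 * rng.standard_normal(n)  # post-processing noise
print("marked score:  ", received @ key / n)      # concentrates near 0.1
print("unmarked score:", signal @ key / n)        # concentrates near 0.0
```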

Adversarial Robustness through Local Linearization

no code implementations • NeurIPS 2019 • Chongli Qin, James Martens, Sven Gowal, Dilip Krishnan, Krishnamurthy Dvijotham, Alhussein Fawzi, Soham De, Robert Stanforth, Pushmeet Kohli

Using this regularizer, we exceed the current state of the art and achieve 47% adversarial accuracy on ImageNet with l-infinity adversarial perturbations of radius 4/255 under an untargeted, strong, white-box attack.

Adversarial Defense, Adversarial Robustness
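
The regularized quantity is simply the gap between the loss and its first-order Taylor expansion over the perturbation ball. The sketch below evaluates that gap at one random perturbation (the paper searches for the worst case) on a made-up toy loss.

```python
# Local linearity measure: |loss(x + delta) - loss(x) - delta . grad(x)|.
# The toy loss, its hand-derived gradient, and the single random delta
# are illustrative assumptions.
import numpy as np

def loss(x):
    return np.sum(np.tanh(x) ** 2)

def grad(x):
    return 2.0 * np.tanh(x) * (1.0 - np.tanh(x) ** 2)

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
delta = 0.1 * np.sign(rng.standard_normal(8))  # one point in the eps-ball

gap = abs(loss(x + delta) - loss(x) - delta @ grad(x))
print("linearization error:", gap)  # small => loss behaves linearly near x
```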

Verification of deep probabilistic models

no code implementations • 6 Dec 2018 • Krishnamurthy Dvijotham, Marta Garnelo, Alhussein Fawzi, Pushmeet Kohli

For example, a machine translation model should produce semantically equivalent outputs for innocuous changes in the input to the model.

Machine Translation, Translation

On the Effectiveness of Interval Bound Propagation for Training Verifiably Robust Models

9 code implementations • 30 Oct 2018 • Sven Gowal, Krishnamurthy Dvijotham, Robert Stanforth, Rudy Bunel, Chongli Qin, Jonathan Uesato, Relja Arandjelovic, Timothy Mann, Pushmeet Kohli

Recent work has shown that it is possible to train deep neural networks that are provably robust to norm-bounded adversarial perturbations.
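
The propagation rule at the heart of interval bound propagation is short enough to sketch: affine layers map an input box to an output box via a center-radius computation, and monotone activations map endpoints to endpoints. The tiny random network below is illustrative; actual IBP training turns the resulting output bounds into a loss.

```python
# Interval bound propagation through a 2-layer ReLU network. Weights are
# random stand-ins; the printed interval provably contains every output
# reachable from the input box.
import numpy as np

def ibp_affine(lower, upper, W, b):
    """Box [lower, upper] through x -> W @ x + b, via center and radius."""
    center = (upper + lower) / 2.0
    radius = (upper - lower) / 2.0
    new_center = W @ center + b
    new_radius = np.abs(W) @ radius  # worst case over the box
    return new_center - new_radius, new_center + new_radius

def ibp_relu(lower, upper):
    """ReLU is monotone, so it maps interval endpoints to endpoints."""
    return np.maximum(lower, 0.0), np.maximum(upper, 0.0)

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 2)), rng.standard_normal(4)
W2, b2 = rng.standard_normal((1, 4)), rng.standard_normal(1)

x, eps = np.array([0.5, -0.2]), 0.1   # l-infinity ball of radius eps
l, u = x - eps, x + eps
l, u = ibp_affine(l, u, W1, b1)
l, u = ibp_relu(l, u)
l, u = ibp_affine(l, u, W2, b2)
print("certified output interval:", l, u)
```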

Training verified learners with learned verifiers

no code implementations • 25 May 2018 • Krishnamurthy Dvijotham, Sven Gowal, Robert Stanforth, Relja Arandjelovic, Brendan O'Donoghue, Jonathan Uesato, Pushmeet Kohli

This paper proposes a new algorithmic framework, predictor-verifier training, to train neural networks that are verifiable, i.e., networks that provably satisfy some desired input-output properties.
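
What the verifier contributes during training can be seen in the simplest possible case: for a linear predictor, the worst-case margin under an l-infinity perturbation has a closed form, so the input-output property can be checked exactly. The paper replaces this closed form with a learned verifier for deep networks; the toy classifier below is an assumption.

```python
# Exact verification of a linear classifier: the worst-case margin of
# class y over class k under ||delta||_inf <= eps is
#   (w_y - w_k)^T x + (b_y - b_k) - eps * ||w_y - w_k||_1.
# Data and weights are random stand-ins.
import numpy as np

rng = np.random.default_rng(0)
W, b = rng.standard_normal((3, 4)), rng.standard_normal(3)  # 3-class linear model
x, y, eps = rng.standard_normal(4), 0, 0.05

worst_margins = []
for k in range(3):
    if k == y:
        continue
    diff = W[y] - W[k]
    worst_margins.append(diff @ x + (b[y] - b[k]) - eps * np.abs(diff).sum())

print("verifiably robust at x:", min(worst_margins) > 0)
```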

Safe Exploration in Continuous Action Spaces

3 code implementations • 26 Jan 2018 • Gal Dalal, Krishnamurthy Dvijotham, Matej Vecerik, Todd Hester, Cosmin Paduraru, Yuval Tassa

We address the problem of deploying a reinforcement learning (RL) agent on a physical system such as a datacenter cooling unit or robot, where critical constraints must never be violated.

Safe Exploration
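
The paper's safety layer solves a small constrained least-squares problem on top of learned linear models of the safety signals, and with one active constraint the correction is closed-form. The sketch below implements that single-constraint projection; the sensitivity vector g, offset c0, and limit C are hypothetical stand-ins for learned quantities.

```python
# Single-constraint safety layer: minimally perturb the policy's action
# so the linearized safety signal c0 + g^T a stays below the limit C.
import numpy as np

def safe_action(a_pi, g, c0, C):
    violation = c0 + g @ a_pi - C
    if violation <= 0.0:
        return a_pi                 # already safe: leave the action alone
    lam = violation / (g @ g)       # multiplier of the active constraint
    return a_pi - lam * g           # closed-form l2 projection

a_pi = np.array([0.8, -0.3])        # action proposed by the RL policy
g = np.array([1.0, 2.0])            # learned sensitivity (hypothetical)
a_safe = safe_action(a_pi, g, c0=0.5, C=0.5)
print(a_safe, 0.5 + g @ a_safe)     # corrected action sits exactly at the limit
```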

Graphical Models for Optimal Power Flow

no code implementations • 21 Jun 2016 • Krishnamurthy Dvijotham, Pascal Van Hentenryck, Michael Chertkov, Sidhant Misra, Marc Vuffray

In this paper, we formulate the optimal power flow problem over tree networks as an inference problem over a tree-structured graphical model where the nodal variables are low-dimensional vectors.
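
Once the problem is phrased as inference on a tree, it can be solved by a leaves-to-root min-sum dynamic program. The sketch below runs that message passing with discretized nodal variables and random unary/pairwise costs standing in for the actual power-flow physics.

```python
# Min-sum message passing on a 4-node tree (0 is the root). Costs are
# random placeholders, not power-flow equations; each node variable is
# discretized to K values.
import numpy as np

K = 5
rng = np.random.default_rng(0)
node_cost = {n: rng.random(K) for n in range(4)}
edge_cost = {(0, 1): rng.random((K, K)),   # indexed [parent_value, child_value]
             (0, 2): rng.random((K, K)),
             (2, 3): rng.random((K, K))}
children = {0: [1, 2], 1: [], 2: [3], 3: []}

def message(child, parent):
    """Min over the child's value of its subtree cost plus the edge cost."""
    cost = node_cost[child].copy()
    for grandchild in children[child]:
        cost += message(grandchild, child)
    return (edge_cost[(parent, child)] + cost[None, :]).min(axis=1)

root_cost = node_cost[0].copy()
for c in children[0]:
    root_cost += message(c, 0)
print("optimal objective:", root_cost.min())
```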

Universal Convexification via Risk-Aversion

no code implementations • 3 Jun 2014 • Krishnamurthy Dvijotham, Maryam Fazel, Emanuel Todorov

We develop a framework for convexifying a fairly general class of optimization problems.

Stochastic Optimization
