Search Results for author: Krishnamurthy Dvijotham

Found 37 papers, 14 papers with code

Achieving the Tightest Relaxation of Sigmoids for Formal Verification

no code implementations · 20 Aug 2024 · Samuel Chevalier, Duncan Starkenburg, Krishnamurthy Dvijotham

In the field of formal verification, Neural Networks (NNs) are typically reformulated into equivalent mathematical programs which are optimized over.

Verified Neural Compressed Sensing

no code implementations · 7 May 2024 · Rudy Bunel, Krishnamurthy Dvijotham, M. Pawan Kumar, Alessandro De Palma, Robert Stanforth

Furthermore, we show that the complexity of the network (number of neurons/layers) can be adapted to the problem difficulty and solve problems where traditional compressed sensing methods are not known to provably work.

Efficient and Near-Optimal Noise Generation for Streaming Differential Privacy

no code implementations · 25 Apr 2024 · Krishnamurthy Dvijotham, H. Brendan McMahan, Krishna Pillutla, Thomas Steinke, Abhradeep Thakurta

Existing algorithms for differentially private continual counting are either inefficient in terms of their space usage or add an excessive amount of noise, inducing suboptimal utility.

Confidence-aware Reward Optimization for Fine-tuning Text-to-Image Models

1 code implementation · 2 Apr 2024 · KyuYoung Kim, Jongheon Jeong, Minyong An, Mohammad Ghavamzadeh, Krishnamurthy Dvijotham, Jinwoo Shin, Kimin Lee

To investigate this issue in depth, we introduce the Text-Image Alignment Assessment (TIA2) benchmark, which comprises a diverse collection of text prompts, images, and human annotations.

Private Gradient Descent for Linear Regression: Tighter Error Bounds and Instance-Specific Uncertainty Estimation

no code implementations · 21 Feb 2024 · Gavin Brown, Krishnamurthy Dvijotham, Georgina Evans, Daogao Liu, Adam Smith, Abhradeep Thakurta

We provide an improved analysis of standard differentially private gradient descent for linear regression under the squared error loss.

Monotone, Bi-Lipschitz, and Polyak-Lojasiewicz Networks

no code implementations · 2 Feb 2024 · Ruigang Wang, Krishnamurthy Dvijotham, Ian R. Manchester

This paper presents a new bi-Lipschitz invertible neural network, the BiLipNet, which has the ability to smoothly control both its Lipschitzness (output sensitivity to input perturbations) and inverse Lipschitzness (input distinguishability from different outputs).

MINT: A wrapper to make multi-modal and multi-image AI models interactive

no code implementations · 22 Jan 2024 · Jan Freyberg, Abhijit Guha Roy, Terry Spitz, Beverly Freeman, Mike Schaekermann, Patricia Strachan, Eva Schnider, Renee Wong, Dale R Webster, Alan Karthikesalingam, Yun Liu, Krishnamurthy Dvijotham, Umesh Telang

In this paper we tackle a more subtle challenge: doctors take a targeted medical history to obtain only the most pertinent pieces of information; how do we enable AI to do the same?

Disease Prediction

Correlated Noise Provably Beats Independent Noise for Differentially Private Learning

no code implementations · 10 Oct 2023 · Christopher A. Choquette-Choo, Krishnamurthy Dvijotham, Krishna Pillutla, Arun Ganesh, Thomas Steinke, Abhradeep Thakurta

We characterize the asymptotic learning utility for any choice of the correlation function, giving precise analytical bounds for linear regression and as the solution to a convex program for general convex functions.

Learning to Receive Help: Intervention-Aware Concept Embedding Models

1 code implementation · NeurIPS 2023 · Mateo Espinosa Zarlenga, Katherine M. Collins, Krishnamurthy Dvijotham, Adrian Weller, Zohreh Shams, Mateja Jamnik

To address this, we propose Intervention-aware Concept Embedding models (IntCEMs), a novel CBM-based architecture and training paradigm that improves a model's receptiveness to test-time interventions.

Selective Concept Models: Permitting Stakeholder Customisation at Test-Time

no code implementations · 14 Jun 2023 · Matthew Barker, Katherine M. Collins, Krishnamurthy Dvijotham, Adrian Weller, Umang Bhatt

Concept-based models perform prediction using a set of concepts that are interpretable to stakeholders.

Training Private Models That Know What They Don't Know

no code implementations · 28 May 2023 · Stephan Rabanser, Anvith Thudi, Abhradeep Thakurta, Krishnamurthy Dvijotham, Nicolas Papernot

Training reliable deep learning models which avoid making overconfident but incorrect predictions is a longstanding challenge.

Expressive Losses for Verified Robustness via Convex Combinations

1 code implementation · 23 May 2023 · Alessandro De Palma, Rudy Bunel, Krishnamurthy Dvijotham, M. Pawan Kumar, Robert Stanforth, Alessio Lomuscio

In order to train networks for verified adversarial robustness, it is common to over-approximate the worst-case loss over perturbation regions, resulting in networks that attain verifiability at the expense of standard performance.

Adversarial Robustness

Human Uncertainty in Concept-Based AI Systems

no code implementations · 22 Mar 2023 · Katherine M. Collins, Matthew Barker, Mateo Espinosa Zarlenga, Naveen Raman, Umang Bhatt, Mateja Jamnik, Ilia Sucholutsky, Adrian Weller, Krishnamurthy Dvijotham

We study how existing concept-based models deal with uncertain interventions from humans using two novel datasets: UMNIST, a visual dataset with controlled simulated uncertainty based on the MNIST dataset, and CUB-S, a relabeling of the popular CUB concept dataset with rich, densely-annotated soft labels from humans.

Decision Making

Provably Bounding Neural Network Preimages

3 code implementations · NeurIPS 2023 · Suhas Kotha, Christopher Brix, Zico Kolter, Krishnamurthy Dvijotham, Huan Zhang

Most work on the formal verification of neural networks has focused on bounding the set of outputs that correspond to a given set of inputs (for example, bounded perturbations of a nominal input).

Adversarial Robustness

Interactive Concept Bottleneck Models

1 code implementation · 14 Dec 2022 · Kushal Chauhan, Rishabh Tiwari, Jan Freyberg, Pradeep Shenoy, Krishnamurthy Dvijotham

Concept bottleneck models (CBMs) are interpretable neural networks that first predict labels for human-interpretable concepts relevant to the prediction task, and then predict the final label based on the concept label predictions.
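The two-stage structure described above is easy to see in code. Below is a minimal, hypothetical NumPy sketch of CBM inference (layer sizes and weights are illustrative, not from the paper), including the kind of concept-level intervention that makes these models interactive:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cbm_predict(x, W_concept, W_label):
    """Two-stage concept bottleneck prediction.

    Stage 1: map inputs to probabilities for human-interpretable concepts.
    Stage 2: predict the final label from the concept predictions alone.
    """
    concept_probs = sigmoid(x @ W_concept)   # (n, k) concept predictions
    label_logits = concept_probs @ W_label   # (n, c) final-label logits
    return concept_probs, label_logits

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))            # 4 inputs, 8 features
W_concept = rng.normal(size=(8, 3))    # 3 concepts
W_label = rng.normal(size=(3, 2))      # 2 classes
concepts, logits = cbm_predict(x, W_concept, W_label)

# Because the label head sees only the concepts, a human can intervene by
# overwriting a concept prediction before the second stage runs:
concepts_fixed = concepts.copy()
concepts_fixed[:, 0] = 1.0             # expert asserts concept 0 is present
logits_after = concepts_fixed @ W_label
```

The key design point is the bottleneck itself: the final label depends on the inputs only through the concept predictions, which is what makes test-time concept corrections propagate to the label.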

IBP Regularization for Verified Adversarial Robustness via Branch-and-Bound

1 code implementation · 29 Jun 2022 · Alessandro De Palma, Rudy Bunel, Krishnamurthy Dvijotham, M. Pawan Kumar, Robert Stanforth

Recent works have tried to increase the verifiability of adversarially trained networks by running the attacks over domains larger than the original perturbations and adding various regularization terms to the objective.

Adversarial Robustness

Role of Human-AI Interaction in Selective Prediction

1 code implementation · 13 Dec 2021 · Elizabeth Bondi, Raphael Koster, Hannah Sheahan, Martin Chadwick, Yoram Bachrach, Taylan Cemgil, Ulrich Paquet, Krishnamurthy Dvijotham

Using real-world conservation data and a selective prediction system that improves expected accuracy over that of the human or AI system working individually, we show that this messaging has a significant impact on the accuracy of human judgements.

Overcoming the Convex Barrier for Simplex Inputs

no code implementations · NeurIPS 2021 · Harkirat Singh Behl, M. Pawan Kumar, Philip Torr, Krishnamurthy Dvijotham

Recent progress in neural network verification has challenged the notion of a convex barrier, that is, an inherent weakness in the convex relaxation of the output of a neural network.

A Fine-Grained Analysis on Distribution Shift

no code implementations · ICLR 2022 · Olivia Wiles, Sven Gowal, Florian Stimberg, Sylvestre-Alvise Rebuffi, Ira Ktena, Krishnamurthy Dvijotham, Taylan Cemgil

Despite this necessity, there has been little work in defining the underlying mechanisms that cause these shifts and evaluating the robustness of algorithms across multiple, different distribution shifts.

Make Sure You're Unsure: A Framework for Verifying Probabilistic Specifications

1 code implementation · NeurIPS 2021 · Leonard Berrada, Sumanth Dathathri, Krishnamurthy Dvijotham, Robert Stanforth, Rudy Bunel, Jonathan Uesato, Sven Gowal, M. Pawan Kumar

In this direction, we first introduce a general formulation of probabilistic specifications for neural networks, which captures both probabilistic networks (e.g., Bayesian neural networks, MC-Dropout networks) and uncertain inputs (distributions over inputs arising from sensor noise or other perturbations).

Adversarial Robustness, Out of Distribution (OOD) Detection

Autoencoding Variational Autoencoder

1 code implementation · 7 Dec 2020 · A. Taylan Cemgil, Sumedh Ghaisas, Krishnamurthy Dvijotham, Sven Gowal, Pushmeet Kohli

We provide experimental results on the ColorMnist and CelebA benchmark datasets that quantify the properties of the learned representations and compare the approach with a baseline that is specifically trained for the desired property.

Decoder

The Autoencoding Variational Autoencoder

no code implementations · NeurIPS 2020 · Taylan Cemgil, Sumedh Ghaisas, Krishnamurthy Dvijotham, Sven Gowal, Pushmeet Kohli

We provide experimental results on the ColorMnist and CelebA benchmark datasets that quantify the properties of the learned representations and compare the approach with a baseline that is specifically trained for the desired property.

Decoder

Enabling certification of verification-agnostic networks via memory-efficient semidefinite programming

2 code implementations · NeurIPS 2020 · Sumanth Dathathri, Krishnamurthy Dvijotham, Alexey Kurakin, Aditi Raghunathan, Jonathan Uesato, Rudy Bunel, Shreya Shankar, Jacob Steinhardt, Ian Goodfellow, Percy Liang, Pushmeet Kohli

In this work, we propose a first-order dual SDP algorithm that (1) requires memory only linear in the total number of network activations, (2) only requires a fixed number of forward/backward passes through the network per iteration.

Decoder

Lagrangian Decomposition for Neural Network Verification

2 code implementations · 24 Feb 2020 · Rudy Bunel, Alessandro De Palma, Alban Desmaison, Krishnamurthy Dvijotham, Pushmeet Kohli, Philip H. S. Torr, M. Pawan Kumar

Both the algorithms offer three advantages: (i) they yield bounds that are provably at least as tight as previous dual algorithms relying on Lagrangian relaxations; (ii) they are based on operations analogous to forward/backward pass of neural networks layers and are therefore easily parallelizable, amenable to GPU implementation and able to take advantage of the convolutional structure of problems; and (iii) they allow for anytime stopping while still providing valid bounds.


Achieving Robustness in the Wild via Adversarial Mixing with Disentangled Representations

no code implementations · CVPR 2020 · Sven Gowal, Chongli Qin, Po-Sen Huang, Taylan Cemgil, Krishnamurthy Dvijotham, Timothy Mann, Pushmeet Kohli

Specifically, we leverage the disentangled latent representations computed by a StyleGAN model to generate perturbations of an image that are similar to real-world variations (like adding make-up, or changing the skin-tone of a person) and train models to be invariant to these perturbations.

Provenance detection through learning transformation-resilient watermarking

no code implementations · 25 Sep 2019 · Jamie Hayes, Krishnamurthy Dvijotham, Yutian Chen, Sander Dieleman, Pushmeet Kohli, Norman Casagrande

In this paper, we introduce ReSWAT (Resilient Signal Watermarking via Adversarial Training), a framework for learning transformation-resilient watermark detectors that are able to detect a watermark even after a signal has been through several post-processing transformations.

Adversarial Robustness through Local Linearization

no code implementations · NeurIPS 2019 · Chongli Qin, James Martens, Sven Gowal, Dilip Krishnan, Krishnamurthy Dvijotham, Alhussein Fawzi, Soham De, Robert Stanforth, Pushmeet Kohli

Using this regularizer, we exceed current state of the art and achieve 47% adversarial accuracy for ImageNet with l-infinity adversarial perturbations of radius 4/255 under an untargeted, strong, white-box attack.

Adversarial Defense, Adversarial Robustness

Verification of deep probabilistic models

no code implementations · 6 Dec 2018 · Krishnamurthy Dvijotham, Marta Garnelo, Alhussein Fawzi, Pushmeet Kohli

For example, a machine translation model should produce semantically equivalent outputs for innocuous changes in the input to the model.

Machine Translation, Translation

On the Effectiveness of Interval Bound Propagation for Training Verifiably Robust Models

9 code implementations · 30 Oct 2018 · Sven Gowal, Krishnamurthy Dvijotham, Robert Stanforth, Rudy Bunel, Chongli Qin, Jonathan Uesato, Relja Arandjelovic, Timothy Mann, Pushmeet Kohli

Recent work has shown that it is possible to train deep neural networks that are provably robust to norm-bounded adversarial perturbations.
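Interval bound propagation, the method the title refers to, is simple to state: propagate an axis-aligned box through the network layer by layer. A minimal NumPy sketch for one affine layer followed by a ReLU (weights and the input region are illustrative, not from the paper):

```python
import numpy as np

def ibp_affine(lower, upper, W, b):
    """Propagate elementwise interval bounds through x @ W + b."""
    center = (upper + lower) / 2.0
    radius = (upper - lower) / 2.0
    new_center = center @ W + b
    new_radius = radius @ np.abs(W)   # worst case over the input box
    return new_center - new_radius, new_center + new_radius

def ibp_relu(lower, upper):
    """ReLU is monotone, so apply it to both bounds directly."""
    return np.maximum(lower, 0.0), np.maximum(upper, 0.0)

# Bounds valid for every input in an l-infinity ball of radius eps around x0.
x0 = np.array([0.5, -0.2])
eps = 0.1
W = np.array([[1.0, -2.0], [0.5, 1.0]])
b = np.array([0.0, 0.1])

l, u = ibp_affine(x0 - eps, x0 + eps, W, b)
l, u = ibp_relu(l, u)
```

Training against the loss evaluated at these bounds (rather than at nominal inputs) is what yields networks whose robustness is cheap to verify, since the same interval arithmetic serves as the verifier.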

Training verified learners with learned verifiers

no code implementations · 25 May 2018 · Krishnamurthy Dvijotham, Sven Gowal, Robert Stanforth, Relja Arandjelovic, Brendan O'Donoghue, Jonathan Uesato, Pushmeet Kohli

This paper proposes a new algorithmic framework, predictor-verifier training, to train neural networks that are verifiable, i.e., networks that provably satisfy some desired input-output properties.

Safe Exploration in Continuous Action Spaces

6 code implementations · 26 Jan 2018 · Gal Dalal, Krishnamurthy Dvijotham, Matej Vecerik, Todd Hester, Cosmin Paduraru, Yuval Tassa

We address the problem of deploying a reinforcement learning (RL) agent on a physical system such as a datacenter cooling unit or robot, where critical constraints must never be violated.
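One way to enforce such constraints is a safety layer that minimally corrects the agent's proposed action. For a single linearized constraint the projection has a closed form; the sketch below is a hypothetical single-constraint illustration of that idea (the constraint vector `g` and `margin` are made-up stand-ins for a learned constraint model):

```python
import numpy as np

def safe_action(a, g, margin):
    """Project action a onto the half-space {a' : g @ a' + margin <= 0}.

    Closed-form solution of: minimize ||a' - a||^2 subject to the single
    linearized safety constraint g @ a' + margin <= 0.
    """
    violation = g @ a + margin
    lam = max(0.0, violation / (g @ g))   # active only when a violates the constraint
    return a - lam * g

# The agent proposes an action that would violate the constraint a[0] <= 0.5,
# and the safety layer applies the smallest correction that restores feasibility.
a = np.array([1.0, 2.0])
g = np.array([1.0, 0.0])
a_safe = safe_action(a, g, margin=-0.5)
```

Because the correction is the Euclidean projection, it perturbs the policy's action as little as possible: components of the action orthogonal to the constraint direction are left untouched.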

Reinforcement Learning, Reinforcement Learning (RL) +1

Graphical Models for Optimal Power Flow

no code implementations · 21 Jun 2016 · Krishnamurthy Dvijotham, Pascal Van Hentenryck, Michael Chertkov, Sidhant Misra, Marc Vuffray

In this paper, we formulate the optimal power flow problem over tree networks as an inference problem over a tree-structured graphical model where the nodal variables are low-dimensional vectors.

Universal Convexification via Risk-Aversion

no code implementations · 3 Jun 2014 · Krishnamurthy Dvijotham, Maryam Fazel, Emanuel Todorov

We develop a framework for convexifying a fairly general class of optimization problems.

Stochastic Optimization
