Search Results for author: Bhavya Kailkhura

Found 50 papers, 16 papers with code

G-PATE: Scalable Differentially Private Data Generator via Private Aggregation of Teacher Discriminators

1 code implementation NeurIPS 2021 Yunhui Long, Boxin Wang, Zhuolin Yang, Bhavya Kailkhura, Aston Zhang, Carl Gunter, Bo Li

In particular, we train a student data generator with an ensemble of teacher discriminators and propose a novel private gradient aggregation mechanism to ensure differential privacy on all information that flows from teacher discriminators to the student generator.
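The private aggregation step can be illustrated with a generic Gaussian-mechanism sketch: clip each teacher's gradient to bound its sensitivity, sum, and add calibrated noise. This is a simplification for illustration, not G-PATE's exact discretized gradient-voting mechanism; `clip_norm` and `noise_sigma` are hypothetical parameters.

```python
import numpy as np

def private_aggregate(teacher_grads, clip_norm=1.0, noise_sigma=1.0, rng=None):
    """Clip each teacher gradient to an L2 ball, sum, and add Gaussian noise.

    Generic Gaussian mechanism: clipping bounds each teacher's contribution
    (sensitivity), and the noise makes the aggregate that reaches the
    student generator differentially private.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    clipped = []
    for g in teacher_grads:
        norm = np.linalg.norm(g)
        scale = min(1.0, clip_norm / (norm + 1e-12))  # shrink only if too large
        clipped.append(g * scale)
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_sigma * clip_norm, size=total.shape)
    return total + noise
```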

On the Certified Robustness for Ensemble Models and Beyond

no code implementations22 Jul 2021 Zhuolin Yang, Linyi Li, Xiaojun Xu, Bhavya Kailkhura, Tao Xie, Bo Li

Thus, to explore the conditions that guarantee to provide certifiably robust ensemble ML models, we first prove that diversified gradient and large confidence margin are sufficient and necessary conditions for certifiably robust ensemble models under the model-smoothness assumption.

Understanding the Limits of Unsupervised Domain Adaptation via Data Poisoning

1 code implementation NeurIPS 2021 Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen, Jihun Hamm

Unsupervised domain adaptation (UDA) enables cross-domain learning without target domain labels by transferring knowledge from a labeled source domain whose distribution differs from that of the target.

Data Poisoning Domain Generalization +1

Reliable Graph Neural Network Explanations Through Adversarial Training

no code implementations25 Jun 2021 Donald Loveland, Shusen Liu, Bhavya Kailkhura, Anna Hiszpanski, Yong Han

Graph neural network (GNN) explanations have largely been facilitated through post-hoc introspection.

Mixture of Robust Experts (MoRE): A Robust Denoising Method towards multiple perturbations

no code implementations21 Apr 2021 Kaidi Xu, Chenan Wang, Hao Cheng, Bhavya Kailkhura, Xue Lin, Ryan Goldhahn

To tackle the susceptibility of deep neural networks to adversarial examples, adversarial training has been proposed, which provides a notion of robustness through an inner maximization problem, a first-order adversarial attack embedded within the outer minimization of the training loss.

Adversarial Robustness Denoising
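The inner maximization described above can be sketched as projected gradient ascent on a toy logistic model (a first-order PGD attack; the model, step size, and epsilon here are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def pgd_inner_max(x, y, w, eps=0.1, alpha=0.02, steps=10):
    """Inner maximization of adversarial training (first-order sketch).

    Model: logistic regression p = sigmoid(w . x); loss = -log p(y).
    Ascend the loss w.r.t. the input, projecting back into the
    L-infinity ball of radius eps around the clean input x.
    """
    x_adv = x.copy()
    for _ in range(steps):
        z = np.dot(w, x_adv)
        p = 1.0 / (1.0 + np.exp(-z))
        grad_x = (p - y) * w                       # d(cross-entropy)/dx
        x_adv = x_adv + alpha * np.sign(grad_x)    # signed gradient ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)   # project into the eps-ball
    return x_adv
```

The outer loop of adversarial training would then minimize the training loss on `x_adv` instead of `x`.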

Certifiably-Robust Federated Adversarial Learning via Randomized Smoothing

no code implementations30 Mar 2021 Cheng Chen, Bhavya Kailkhura, Ryan Goldhahn, Yi Zhou

Federated learning is an emerging data-private distributed learning framework, which, however, is vulnerable to adversarial attacks.

Federated Learning

Multi-Prize Lottery Ticket Hypothesis: Finding Accurate Binary Neural Networks by Pruning A Randomly Weighted Network

1 code implementation17 Mar 2021 James Diffenderfer, Bhavya Kailkhura

In this paper, we propose (and prove) a stronger Multi-Prize Lottery Ticket Hypothesis: A sufficiently over-parameterized neural network with random weights contains several subnetworks (winning tickets) that (a) have comparable accuracy to a dense target network with learned weights (prize 1), (b) do not require any further training to achieve prize 1 (prize 2), and (c) are robust to extreme forms of quantization (i.e., binary weights and/or activations) (prize 3).

Ranked #1 on Quantization on ImageNet (Top-1 metric)

Classification with Binary Neural Network Classification with Binary Weight Network +1
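A toy version of extracting a binary subnetwork from random weights: score the weights (here simply by magnitude, whereas the paper learns scores), keep the top fraction, and binarize the survivors. `keep_frac` and the magnitude-based scoring are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def binary_subnetwork(w, keep_frac=0.5):
    """Extract a binary subnetwork from a random weight matrix (toy sketch).

    Keep the top keep_frac weights by magnitude, replace them with
    sign(w) * alpha (alpha = mean magnitude of kept weights), zero the rest.
    """
    flat = np.abs(w).ravel()
    k = max(1, int(round(keep_frac * flat.size)))
    thresh = np.sort(flat)[-k]            # k-th largest magnitude
    mask = np.abs(w) >= thresh
    alpha = np.abs(w[mask]).mean()        # scaling keeps output magnitudes sane
    return np.sign(w) * alpha * mask
```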

Robusta: Robust AutoML for Feature Selection via Reinforcement Learning

no code implementations15 Jan 2021 Xiaoyang Wang, Bo Li, Yibo Zhang, Bhavya Kailkhura, Klara Nahrstedt

However, these AutoML pipelines only focus on improving the learning accuracy of benign samples while ignoring the ML model robustness under adversarial attacks.

AutoML Feature Importance +1

Multi-Prize Lottery Ticket Hypothesis: Finding Generalizable and Efficient Binary Subnetworks in a Randomly Weighted Neural Network

no code implementations ICLR 2021 James Diffenderfer, Bhavya Kailkhura

A sufficiently over-parameterized neural network with random weights contains several subnetworks (winning tickets) that (a) have comparable accuracy to a dense target network with learned weights (prize 1), (b) do not require any further training to achieve prize 1 (prize 2), and (c) are robust to extreme forms of quantization (i.e., binary weights and/or activations) (prize 3).


Can Shape Structure Features Improve Model Robustness Under Diverse Adversarial Settings?

no code implementations ICCV 2021 MingJie Sun, Zichao Li, Chaowei Xiao, Haonan Qiu, Bhavya Kailkhura, Mingyan Liu, Bo Li

Specifically, EdgeNetRob and EdgeGANRob first explicitly extract shape structure features from a given image via an edge detection algorithm.

Edge Detection

Attribute-Guided Adversarial Training for Robustness to Natural Perturbations

3 code implementations3 Dec 2020 Tejas Gokhale, Rushil Anirudh, Bhavya Kailkhura, Jayaraman J. Thiagarajan, Chitta Baral, Yezhou Yang

While this deviation may not be exactly known, its broad characterization is specified a priori, in terms of attributes.

How Robust are Randomized Smoothing based Defenses to Data Poisoning?

1 code implementation CVPR 2021 Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen, Jihun Hamm

Moreover, our attack is effective even when the victim trains the models from scratch using state-of-the-art robust training methods such as Gaussian data augmentation (Cohen et al., 2019), MACER (Zhai et al., 2020), and SmoothAdv (Salman et al., 2019) that achieve high certified adversarial robustness.

Adversarial Robustness bilevel optimization +2
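Randomized smoothing, the defense family these poisoning attacks target, predicts by majority vote under Gaussian input noise. A minimal Monte Carlo sketch (the sample count and sigma are illustrative; a certified implementation also needs the statistical test from Cohen et al., 2019):

```python
import numpy as np

def smoothed_predict(f, x, sigma=0.25, n=1000, rng=None):
    """Monte Carlo prediction of a randomized-smoothing classifier.

    g(x) = argmax_c P[f(x + noise) = c] with noise ~ N(0, sigma^2 I);
    robust training methods aim to make this vote both accurate and stable.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    votes = {}
    for _ in range(n):
        c = f(x + rng.normal(0.0, sigma, size=x.shape))
        votes[c] = votes.get(c, 0) + 1
    return max(votes, key=votes.get)
```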

Leveraging Uncertainty from Deep Learning for Trustworthy Materials Discovery Workflows

no code implementations2 Dec 2020 Jize Zhang, Bhavya Kailkhura, T. Yong-Jin Han

In this paper, we leverage predictive uncertainty of deep neural networks to answer challenging questions material scientists usually encounter in machine learning based materials applications workflows.

General Classification

A Statistical Mechanics Framework for Task-Agnostic Sample Design in Machine Learning

no code implementations NeurIPS 2020 Bhavya Kailkhura, Jayaraman J. Thiagarajan, Qunwei Li, Jize Zhang, Yi Zhou, Timo Bremer

Using this framework, we show that space-filling sample designs, such as blue noise and Poisson disk sampling, which optimize spectral properties, outperform random designs in terms of the generalization gap, and characterize this gain in closed form.
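Poisson disk sampling, one of the space-filling designs analyzed above, can be sketched with simple dart throwing (the radius `r` and rejection loop are illustrative; production implementations typically use Bridson's algorithm instead):

```python
import numpy as np

def poisson_disk(n, r, dim=2, max_tries=10000, rng=None):
    """Dart-throwing Poisson disk sampling in the unit cube.

    Accept a uniform candidate only if it is at least r away from every
    accepted sample; the result is a space-filling (blue-noise) design.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    pts = []
    for _ in range(max_tries):
        if len(pts) >= n:
            break
        c = rng.random(dim)
        if all(np.linalg.norm(c - p) >= r for p in pts):
            pts.append(c)
    return np.array(pts)
```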

FedCluster: Boosting the Convergence of Federated Learning via Cluster-Cycling

no code implementations22 Sep 2020 Cheng Chen, Ziyi Chen, Yi Zhou, Bhavya Kailkhura

We develop FedCluster, a novel federated learning framework with improved optimization efficiency, and investigate its theoretical convergence properties.

Federated Learning

Probabilistic Neighbourhood Component Analysis: Sample Efficient Uncertainty Estimation in Deep Learning

1 code implementation18 Jul 2020 Ankur Mallick, Chaitanya Dwivedi, Bhavya Kailkhura, Gauri Joshi, T. Yong-Jin Han

In this work, we show that the uncertainty estimation capability of state-of-the-art BNNs and Deep Ensemble models degrades significantly when the amount of training data is small.

COVID-19 Diagnosis

Explainable Deep Learning for Uncovering Actionable Scientific Insights for Materials Discovery and Design

no code implementations16 Jul 2020 Shusen Liu, Bhavya Kailkhura, Jize Zhang, Anna M. Hiszpanski, Emily Robertson, Donald Loveland, T. Yong-Jin Han

The scientific community has been increasingly interested in harnessing the power of deep learning to solve various domain challenges.

Actionable Attribution Maps for Scientific Machine Learning

no code implementations30 Jun 2020 Shusen Liu, Bhavya Kailkhura, Jize Zhang, Anna M. Hiszpanski, Emily Robertson, Donald Loveland, T. Yong-Jin Han

The scientific community has been increasingly interested in harnessing the power of deep learning to solve various domain challenges.

Adversarial Mutual Information for Text Generation

1 code implementation ICML 2020 Boyuan Pan, Yazheng Yang, Kaizhao Liang, Bhavya Kailkhura, Zhongming Jin, Xian-Sheng Hua, Deng Cai, Bo Li

Recent advances in maximizing mutual information (MI) between the source and target have demonstrated its effectiveness in text generation.

Text Generation

A Primer on Zeroth-Order Optimization in Signal Processing and Machine Learning

no code implementations11 Jun 2020 Sijia Liu, Pin-Yu Chen, Bhavya Kailkhura, Gaoyuan Zhang, Alfred Hero, Pramod K. Varshney

Zeroth-order (ZO) optimization is a subset of gradient-free optimization that emerges in many signal processing and machine learning applications.
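The core ZO primitive is a gradient estimate built from function values only. A two-point random-direction sketch (the smoothing parameter `mu` and the sample count are illustrative choices):

```python
import numpy as np

def zo_gradient(f, x, mu=1e-4, n_samples=100, rng=None):
    """Two-point zeroth-order gradient estimate (random direction method).

    Averages (f(x + mu*u) - f(x)) / mu * u over random Gaussian
    directions u; only function evaluations are needed, no derivatives.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    g = np.zeros(x.size)
    fx = f(x)
    for _ in range(n_samples):
        u = rng.normal(size=x.size)
        g += (f(x + mu * u) - fx) / mu * u
    return g / n_samples
```

For a linear function the estimator is unbiased for the true gradient; for smooth nonlinear functions it estimates the gradient of a Gaussian-smoothed surrogate.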

Mix-n-Match: Ensemble and Compositional Methods for Uncertainty Calibration in Deep Learning

1 code implementation16 Mar 2020 Jize Zhang, Bhavya Kailkhura, T. Yong-Jin Han

We show that none of the existing methods satisfy all three requirements, and demonstrate how Mix-n-Match calibration strategies (i.e., ensemble and composition) can help achieve remarkably better data-efficiency and expressive power while provably maintaining the classification accuracy of the original classifier.

Small Data Image Classification
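Temperature scaling, the standard parametric method whose limited expressive power motivates Mix-n-Match, can be sketched with a dependency-free grid search (the grid and the held-out NLL objective are illustrative choices):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def fit_temperature(logits, labels, temps=np.linspace(0.25, 4.0, 100)):
    """Fit a single temperature T minimizing NLL on held-out logits.

    Dividing logits by T > 0 preserves the argmax, so classification
    accuracy is provably unchanged, as the snippet above notes.
    """
    best_t, best_nll = 1.0, np.inf
    n = len(labels)
    for t in temps:
        p = softmax(logits / t)
        nll = -np.log(p[np.arange(n), labels] + 1e-12).mean()
        if nll < best_nll:
            best_t, best_nll = t, nll
    return best_t
```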

Anomalous Example Detection in Deep Learning: A Survey

no code implementations16 Mar 2020 Saikiran Bulusu, Bhavya Kailkhura, Bo Li, Pramod K. Varshney, Dawn Song

This survey tries to provide a structured and comprehensive overview of the research on anomaly detection for DL based applications.

Anomaly Detection

Automatic Perturbation Analysis for Scalable Certified Robustness and Beyond

5 code implementations NeurIPS 2020 Kaidi Xu, Zhouxing Shi, Huan Zhang, Yihan Wang, Kai-Wei Chang, Minlie Huang, Bhavya Kailkhura, Xue Lin, Cho-Jui Hsieh

Linear relaxation based perturbation analysis (LiRPA) for neural networks, which computes provable linear bounds of output neurons given a certain amount of input perturbation, has become a core component in robustness verification and certified defense.
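The simplest bound-propagation scheme in this area is interval arithmetic, coarser than the linear relaxations LiRPA computes but enough to show the mechanics. A sketch for a linear layer followed by ReLU (the example weights are illustrative):

```python
import numpy as np

def interval_linear(lo, hi, W, b):
    """Propagate an input box [lo, hi] through x -> Wx + b.

    Split W into positive and negative parts so each output bound uses
    whichever input bound makes it extreme.
    """
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    new_lo = W_pos @ lo + W_neg @ hi + b
    new_hi = W_pos @ hi + W_neg @ lo + b
    return new_lo, new_hi

def interval_relu(lo, hi):
    """ReLU is monotone, so bounds pass through elementwise."""
    return np.maximum(lo, 0), np.maximum(hi, 0)
```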


TSS: Transformation-Specific Smoothing for Robustness Certification

1 code implementation27 Feb 2020 Linyi Li, Maurice Weber, Xiaojun Xu, Luka Rimanic, Bhavya Kailkhura, Tao Xie, Ce Zhang, Bo Li

Moreover, to the best of our knowledge, TSS is the first approach that achieves nontrivial certified robustness on the large-scale ImageNet dataset.

Towards an Efficient and General Framework of Robust Training for Graph Neural Networks

no code implementations25 Feb 2020 Kaidi Xu, Sijia Liu, Pin-Yu Chen, Mengshu Sun, Caiwen Ding, Bhavya Kailkhura, Xue Lin

To overcome these limitations, we propose a general framework which leverages the greedy search algorithms and zeroth-order methods to obtain robust GNNs in a generic and an efficient manner.

MimicGAN: Robust Projection onto Image Manifolds with Corruption Mimicking

no code implementations16 Dec 2019 Rushil Anirudh, Jayaraman J. Thiagarajan, Bhavya Kailkhura, Timo Bremer

However, PGD is a brittle optimization technique that fails to identify the right projection (or latent vector) when the observation is corrupted or perturbed even by a small amount.

Adversarial Defense Anomaly Detection +2
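The projection step PGD performs can be sketched with a toy linear "generator" (a real GAN generator is nonlinear; the matrix `A`, step size, and iteration count are illustrative assumptions):

```python
import numpy as np

def project_to_manifold(x, A, steps=200, lr=0.1):
    """Gradient-descent projection onto the range of a toy linear generator.

    Minimize 0.5 * ||A z - x||^2 over the latent z; A z* is the projection
    of x onto the generator's manifold. MimicGAN's point is that this step
    is brittle under corruption; this sketch covers only the clean case.
    """
    z = np.zeros(A.shape[1])
    for _ in range(steps):
        residual = A @ z - x
        z -= lr * (A.T @ residual)   # gradient of 0.5 * ||Az - x||^2
    return A @ z
```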

Enabling Machine Learning-Ready HPC Ensembles with Merlin

no code implementations5 Dec 2019 J. Luc Peterson, Ben Bay, Joe Koning, Peter Robinson, Jessica Semler, Jeremy White, Rushil Anirudh, Kevin Athey, Peer-Timo Bremer, Francesco Di Natale, David Fox, Jim A. Gaffney, Sam A. Jacobs, Bhavya Kailkhura, Bogdan Kustowski, Steven Langer, Brian Spears, Jayaraman Thiagarajan, Brian Van Essen, Jae-Seung Yeom

With the growing complexity of computational and experimental facilities, many scientific researchers are turning to machine learning (ML) techniques to analyze large scale ensemble data.

Scalability vs. Utility: Do We Have to Sacrifice One for the Other in Data Importance Quantification?

no code implementations CVPR 2021 Ruoxi Jia, Fan Wu, Xuehui Sun, Jiacen Xu, David Dao, Bhavya Kailkhura, Ce Zhang, Bo Li, Dawn Song

Quantifying the importance of each training point to a learning task is a fundamental problem in machine learning and the estimated importance scores have been leveraged to guide a range of data workflows such as data summarization and domain adaptation.

Data Summarization Domain Adaptation

Deep Kernels with Probabilistic Embeddings for Small-Data Learning

1 code implementation13 Oct 2019 Ankur Mallick, Chaitanya Dwivedi, Bhavya Kailkhura, Gauri Joshi, T. Yong-Jin Han

Experiments on a variety of datasets show that our approach outperforms the state-of-the-art in GP kernel learning in both supervised and semi-supervised settings.

Gaussian Processes Representation Learning +1

On the Design of Black-box Adversarial Examples by Leveraging Gradient-free Optimization and Operator Splitting Method

1 code implementation ICCV 2019 Pu Zhao, Sijia Liu, Pin-Yu Chen, Nghia Hoang, Kaidi Xu, Bhavya Kailkhura, Xue Lin

Robust machine learning is currently one of the most prominent topics which could potentially help shape a future of advanced AI platforms that perform well not only in average cases but also in worst cases or adverse situations.

Adversarial Attack Image Classification

Generative Counterfactual Introspection for Explainable Deep Learning

no code implementations6 Jul 2019 Shusen Liu, Bhavya Kailkhura, Donald Loveland, Yong Han

In this work, we propose an introspection technique for deep neural networks that relies on a generative model to instigate salient editing of the input image for model interpretation.

A Look at the Effect of Sample Design on Generalization through the Lens of Spectral Analysis

no code implementations6 Jun 2019 Bhavya Kailkhura, Jayaraman J. Thiagarajan, Qunwei Li, Peer-Timo Bremer

This paper provides a general framework to study the effect of sampling properties of training data on the generalization error of the learned machine learning (ML) models.

Reliable and Explainable Machine Learning Methods for Accelerated Material Discovery

no code implementations5 Jan 2019 Bhavya Kailkhura, Brian Gallagher, Sookyung Kim, Anna Hiszpanski, T. Yong-Jin Han

We also propose a transfer learning technique and show that the performance loss due to models' simplicity can be overcome by exploiting correlations among different material properties.

Transfer Learning

MR-GAN: Manifold Regularized Generative Adversarial Networks

no code implementations22 Nov 2018 Qunwei Li, Bhavya Kailkhura, Rushil Anirudh, Yi Zhou, Yingbin Liang, Pramod Varshney

Despite the growing interest in generative adversarial networks (GANs), training GANs remains a challenging problem, both from a theoretical and a practical standpoint.

Universal Decision-Based Black-Box Perturbations: Breaking Security-Through-Obscurity Defenses

no code implementations9 Nov 2018 Thomas A. Hogan, Bhavya Kailkhura

We study the problem of finding a universal (image-agnostic) perturbation to fool machine learning (ML) classifiers (e.g., neural nets, decision trees) in the hard-label black-box setting.

Coverage-Based Designs Improve Sample Mining and Hyper-Parameter Optimization

1 code implementation5 Sep 2018 Gowtham Muniraju, Bhavya Kailkhura, Jayaraman J. Thiagarajan, Peer-Timo Bremer, Cihan Tepedelenlioglu, Andreas Spanias

Sampling one or more effective solutions from large search spaces is a recurring idea in machine learning, and sequential optimization has become a popular solution.

Data Summarization

Zeroth-Order Stochastic Variance Reduction for Nonconvex Optimization

1 code implementation NeurIPS 2018 Sijia Liu, Bhavya Kailkhura, Pin-Yu Chen, Pai-Shun Ting, Shiyu Chang, Lisa Amini

As application demands for zeroth-order (gradient-free) optimization accelerate, the need for variance reduced and faster converging approaches is also intensifying.

Material Classification Stochastic Optimization

Human-Machine Inference Networks For Smart Decision Making: Opportunities and Challenges

no code implementations29 Jan 2018 Aditya Vempaty, Bhavya Kailkhura, Pramod K. Varshney

The emerging paradigm of Human-Machine Inference Networks (HuMaINs) combines complementary cognitive strengths of humans and machines in an intelligent manner to tackle various inference tasks and achieves higher performance than either humans or machines by themselves.

Decision Making

A Spectral Approach for the Design of Experiments: Design, Analysis and Algorithms

no code implementations16 Dec 2017 Bhavya Kailkhura, Jayaraman J. Thiagarajan, Charvi Rastogi, Pramod K. Varshney, Peer-Timo Bremer

Third, we propose an efficient estimator to evaluate the space-filling properties of sample designs in arbitrary dimensions and use it to develop an optimization framework to generate high quality space-filling designs.

Image Reconstruction

Robust Decentralized Learning Using ADMM with Unreliable Agents

no code implementations14 Oct 2017 Qunwei Li, Bhavya Kailkhura, Ryan Goldhahn, Priyadip Ray, Pramod K. Varshney

We also provide conditions on the erroneous updates for exact convergence to the optimal solution.

Robust Local Scaling using Conditional Quantiles of Graph Similarities

no code implementations14 Dec 2016 Jayaraman J. Thiagarajan, Prasanna Sattigeri, Karthikeyan Natesan Ramamurthy, Bhavya Kailkhura

In this paper, we propose the use of quantile analysis to obtain local scale estimates for neighborhood graph construction.

graph construction
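Quantile-based local scaling can be sketched as follows: each point's scale sigma_i is a quantile of its distances to the other points, and affinities use the product sigma_i * sigma_j. This is a simplified stand-in for the paper's conditional-quantile estimator; `q` is an illustrative parameter.

```python
import numpy as np

def local_scale_affinity(X, q=0.25):
    """Affinity matrix with per-point scales taken from distance quantiles.

    sigma_i is the q-quantile of point i's distances to the other points;
    W_ij = exp(-d_ij^2 / (sigma_i * sigma_j)) adapts to local density.
    """
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    n = len(X)
    sigma = np.array([np.quantile(np.delete(d[i], i), q) for i in range(n)])
    return np.exp(-d ** 2 / np.outer(sigma, sigma))
```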

TreeView: Peeking into Deep Neural Networks Via Feature-Space Partitioning

no code implementations22 Nov 2016 Jayaraman J. Thiagarajan, Bhavya Kailkhura, Prasanna Sattigeri, Karthikeyan Natesan Ramamurthy

In this paper, we take a step in the direction of tackling the problem of interpretability without compromising the model accuracy.

Universal Collaboration Strategies for Signal Detection: A Sparse Learning Approach

no code implementations22 Jan 2016 Prashant Khanduri, Bhavya Kailkhura, Jayaraman J. Thiagarajan, Pramod K. Varshney

This paper considers the problem of high dimensional signal detection in a large distributed network whose nodes can collaborate with their one-hop neighboring nodes (spatial collaboration).

Sparse Learning

Consensus based Detection in the Presence of Data Falsification Attacks

no code implementations14 Apr 2015 Bhavya Kailkhura, Swastik Brahma, Pramod K. Varshney

This paper considers the problem of detection in distributed networks in the presence of data falsification (Byzantine) attacks.
