Search Results for author: Bhavya Kailkhura

Found 79 papers, 29 papers with code

Introducing v0.5 of the AI Safety Benchmark from MLCommons

1 code implementation18 Apr 2024 Bertie Vidgen, Adarsh Agrawal, Ahmed M. Ahmed, Victor Akinwande, Namir Al-Nuaimi, Najla Alfaraj, Elie Alhajjar, Lora Aroyo, Trupti Bavalatti, Borhane Blili-Hamelin, Kurt Bollacker, Rishi Bomassani, Marisa Ferrara Boston, Siméon Campos, Kal Chakra, Canyu Chen, Cody Coleman, Zacharie Delpierre Coudert, Leon Derczynski, Debojyoti Dutta, Ian Eisenberg, James Ezick, Heather Frase, Brian Fuller, Ram Gandikota, Agasthya Gangavarapu, Ananya Gangavarapu, James Gealy, Rajat Ghosh, James Goel, Usman Gohar, Sujata Goswami, Scott A. Hale, Wiebke Hutiri, Joseph Marvin Imperial, Surgan Jandial, Nick Judd, Felix Juefei-Xu, Foutse khomh, Bhavya Kailkhura, Hannah Rose Kirk, Kevin Klyman, Chris Knotz, Michael Kuchnik, Shachi H. Kumar, Chris Lengerich, Bo Li, Zeyi Liao, Eileen Peters Long, Victor Lu, Yifan Mai, Priyanka Mary Mammen, Kelvin Manyeki, Sean McGregor, Virendra Mehta, Shafee Mohammed, Emanuel Moss, Lama Nachman, Dinesh Jinenhally Naganna, Amin Nikanjam, Besmira Nushi, Luis Oala, Iftach Orr, Alicia Parrish, Cigdem Patlak, William Pietri, Forough Poursabzi-Sangdeh, Eleonora Presani, Fabrizio Puletti, Paul Röttger, Saurav Sahay, Tim Santos, Nino Scherrer, Alice Schoenauer Sebag, Patrick Schramowski, Abolfazl Shahbazi, Vin Sharma, Xudong Shen, Vamsi Sistla, Leonard Tang, Davide Testuggine, Vithursan Thangarasa, Elizabeth Anne Watkins, Rebecca Weiss, Chris Welty, Tyler Wilbers, Adina Williams, Carole-Jean Wu, Poonam Yadav, Xianjun Yang, Yi Zeng, Wenhui Zhang, Fedor Zhdanov, Jiacheng Zhu, Percy Liang, Peter Mattson, Joaquin Vanschoren

We created a new taxonomy of 13 hazard categories, of which 7 have tests in the v0.5 benchmark.

End-to-End Mesh Optimization of a Hybrid Deep Learning Black-Box PDE Solver

no code implementations17 Apr 2024 Shaocong Ma, James Diffenderfer, Bhavya Kailkhura, Yi Zhou

In this study, we explore the feasibility of end-to-end training of a hybrid model with a black-box PDE solver and a deep learning model for fluid flow prediction.

Adversarial Robustness Limits via Scaling-Law and Human-Alignment Studies

no code implementations14 Apr 2024 Brian R. Bartoldson, James Diffenderfer, Konstantinos Parasyris, Bhavya Kailkhura

However, our scaling laws also predict that robustness slowly grows and then plateaus at 90%: dwarfing our new SOTA by scaling is impractical, and perfect robustness is impossible.

Adversarial Robustness
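
For intuition about what such a scaling-law extrapolation looks like, here is a minimal sketch of fitting a saturating power law to (compute, robust accuracy) pairs and reading off the predicted ceiling. The functional form and all numbers below are illustrative, not the paper's.

```python
# Illustrative only: fit a saturating power law acc(C) = a - b * C**(-c) to
# hypothetical (compute, robust accuracy) points and read off the asymptote a.
import numpy as np
from scipy.optimize import curve_fit

def robust_acc(compute, a, b, c):
    return a - b * compute ** (-c)

compute = np.array([1.0, 10.0, 100.0, 1e3, 1e4])   # training compute, arbitrary units
acc = np.array([0.55, 0.72, 0.81, 0.86, 0.88])     # made-up robust accuracies

(a, b, c), _ = curve_fit(robust_acc, compute, acc, p0=[0.9, 0.35, 0.3],
                         bounds=([0.0, 0.0, 0.0], [1.0, 5.0, 2.0]))
print(f"fitted asymptote (predicted robustness ceiling): {a:.2f}")
```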

Decoding Compressed Trust: Scrutinizing the Trustworthiness of Efficient LLMs Under Compression

no code implementations18 Mar 2024 Junyuan Hong, Jinhao Duan, Chenhui Zhang, Zhangheng Li, Chulin Xie, Kelsey Lieberman, James Diffenderfer, Brian Bartoldson, Ajay Jaiswal, Kaidi Xu, Bhavya Kailkhura, Dan Hendrycks, Dawn Song, Zhangyang Wang, Bo Li

While state-of-the-art (SoTA) compression methods boast impressive advancements in preserving benign task performance, the potential risks of compression in terms of safety and trustworthiness have been largely neglected.

Ethics Fairness +1

GTBench: Uncovering the Strategic Reasoning Limitations of LLMs via Game-Theoretic Evaluations

1 code implementation19 Feb 2024 Jinhao Duan, Renming Zhang, James Diffenderfer, Bhavya Kailkhura, Lichao Sun, Elias Stengel-Eskin, Mohit Bansal, Tianlong Chen, Kaidi Xu

As Large Language Models (LLMs) are integrated into critical real-world applications, their strategic and logical reasoning abilities are increasingly crucial.

Card Games Logical Reasoning

Scaling Compute Is Not All You Need for Adversarial Robustness

no code implementations20 Dec 2023 Edoardo Debenedetti, Zishen Wan, Maksym Andriushchenko, Vikash Sehwag, Kshitij Bhardwaj, Bhavya Kailkhura

Finally, we make our benchmarking framework (built on top of the timm library; Wightman, 2019) publicly available to facilitate future analysis in efficient robust deep learning.

Adversarial Robustness Benchmarking

When Bio-Inspired Computing meets Deep Learning: Low-Latency, Accurate, & Energy-Efficient Spiking Neural Networks from Artificial Neural Networks

no code implementations12 Dec 2023 Gourav Datta, Zeyu Liu, James Diffenderfer, Bhavya Kailkhura, Peter A. Beerel

However, advanced ANN-to-SNN conversion approaches demonstrate that for lossless conversion, the number of SNN time steps must equal the number of quantization steps in the ANN activation function.

Quantization
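
The quoted claim can be checked on a toy example: an integrate-and-fire neuron with reset-by-subtraction, driven by a constant input for T time steps, reproduces exactly a ReLU quantized to T levels. This is a minimal sketch assuming the simplest IF model, not the authors' conversion pipeline.

```python
# Toy check of the quoted claim: an integrate-and-fire neuron (reset by
# subtraction, threshold v_th), driven by a constant input for T time steps,
# matches a ReLU quantized to T levels. Assumes the simplest IF model.
import numpy as np

def if_spike_count(a, T, v_th=1.0):
    v, spikes = 0.0, 0
    for _ in range(T):
        v += a                 # integrate the constant input current
        if v >= v_th:          # fire, then reset by subtraction
            v -= v_th
            spikes += 1
    return spikes

def quantized_relu(a, T, v_th=1.0):
    # ReLU clipped at v_th and quantized to T levels of size v_th / T
    return np.clip(np.floor(a * T / v_th), 0, T) * v_th / T

T = 8
for a in np.arange(0.0, 1.251, 0.125):                # exact binary fractions
    snn_rate = if_spike_count(a, T) * 1.0 / T         # spike count scaled by v_th / T
    assert np.isclose(snn_rate, quantized_relu(a, T))
    print(f"input {a:.3f}: SNN rate {snn_rate:.3f} == quantized ReLU {quantized_relu(a, T):.3f}")
```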

Pursing the Sparse Limitation of Spiking Deep Learning Structures

no code implementations18 Nov 2023 Hao Cheng, Jiahang Cao, Erjia Xiao, Mengshu Sun, Le Yang, Jize Zhang, Xue Lin, Bhavya Kailkhura, Kaidi Xu, Renjing Xu

It posits that within dense neural networks, there exist winning tickets or subnetworks that are sparser but do not compromise performance.

Leveraging Hierarchical Feature Sharing for Efficient Dataset Condensation

no code implementations11 Oct 2023 Haizhong Zheng, Jiachen Sun, Shutong Wu, Bhavya Kailkhura, Zhuoqing Mao, Chaowei Xiao, Atul Prakash

In this paper, we recognize that images share common features in a hierarchical way due to the inherent hierarchical structure of the classification system, which is overlooked by current data parameterization methods.

Dataset Condensation

DeepZero: Scaling up Zeroth-Order Optimization for Deep Model Training

1 code implementation3 Oct 2023 Aochuan Chen, Yimeng Zhang, Jinghan Jia, James Diffenderfer, Jiancheng Liu, Konstantinos Parasyris, Yihua Zhang, Zheng Zhang, Bhavya Kailkhura, Sijia Liu

Our extensive experiments show that DeepZero achieves state-of-the-art (SOTA) accuracy on ResNet-20 trained on CIFAR-10, approaching FO training performance for the first time.

Adversarial Defense Computational Efficiency +1
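
For readers unfamiliar with the zeroth-order setting DeepZero operates in, below is the generic coordinate-wise finite-difference gradient estimator that such training relies on; the sparsity, parallelization, and feature-reuse techniques that make it affordable for deep networks are the paper's contribution and are omitted here.

```python
# Generic coordinate-wise finite-difference (zeroth-order) gradient estimate:
# 2*d forward evaluations, no backpropagation. Sketch only; DeepZero's sparsity
# and reuse tricks that make this affordable for deep nets are not shown.
import numpy as np

def zo_gradient_cge(loss_fn, theta, mu=1e-3):
    grad = np.zeros_like(theta)
    for i in range(theta.size):
        e = np.zeros_like(theta)
        e[i] = mu
        grad[i] = (loss_fn(theta + e) - loss_fn(theta - e)) / (2 * mu)
    return grad

# sanity check on a toy quadratic, where the true gradient is A @ w
A = np.diag([1.0, 2.0, 3.0])
loss = lambda w: 0.5 * w @ A @ w
w = np.array([1.0, -1.0, 2.0])
print("ZO estimate:", np.round(zo_gradient_cge(loss, w), 4))
print("true grad  :", A @ w)
```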

On the Fly Neural Style Smoothing for Risk-Averse Domain Generalization

1 code implementation17 Jul 2023 Akshay Mehra, Yunbei Zhang, Bhavya Kailkhura, Jihun Hamm

To enable risk-averse predictions from a DG classifier, we propose a novel inference procedure, Test-Time Neural Style Smoothing (TT-NSS), that uses a "style-smoothed" version of the DG classifier for prediction at test time.

Autonomous Driving Domain Generalization +1

Shifting Attention to Relevance: Towards the Uncertainty Estimation of Large Language Models

1 code implementation3 Jul 2023 Jinhao Duan, Hao Cheng, Shiqi Wang, Alex Zavalny, Chenan Wang, Renjing Xu, Bhavya Kailkhura, Kaidi Xu

While Large Language Models (LLMs) have demonstrated remarkable potential in natural language generation and instruction following, a persistent challenge lies in their susceptibility to "hallucinations", which erodes trust in their outputs.

Instruction Following Question Answering +4

Less is More: Data Pruning for Faster Adversarial Training

no code implementations23 Feb 2023 Yize Li, Pu Zhao, Xue Lin, Bhavya Kailkhura, Ryan Goldhahn

Deep neural networks (DNNs) are sensitive to adversarial examples, resulting in fragile and unreliable performance in the real world.

Compute-Efficient Deep Learning: Algorithmic Trends and Opportunities

no code implementations13 Oct 2022 Brian R. Bartoldson, Bhavya Kailkhura, Davis Blalock

To address this problem, there has been a great deal of research on *algorithmically-efficient deep learning*, which seeks to reduce training costs not at the hardware or implementation level, but through changes in the semantics of the training program.

Efficient Multi-Prize Lottery Tickets: Enhanced Accuracy, Training, and Inference Speed

no code implementations26 Sep 2022 Hao Cheng, Pu Zhao, Yize Li, Xue Lin, James Diffenderfer, Ryan Goldhahn, Bhavya Kailkhura

Recently, Diffenderfer and Kailkhura proposed a new paradigm for learning compact yet highly accurate binary neural networks simply by pruning and quantizing randomly weighted full precision neural networks.

Models Out of Line: A Fourier Lens on Distribution Shift Robustness

no code implementations8 Jul 2022 Sara Fridovich-Keil, Brian R. Bartoldson, James Diffenderfer, Bhavya Kailkhura, Peer-Timo Bremer

However, there still is no clear understanding of the conditions on OOD data and model properties that are required to observe effective robustness.

Data Augmentation

On Certifying and Improving Generalization to Unseen Domains

1 code implementation24 Jun 2022 Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen, Jihun Hamm

This highlights that the performance of DG methods on a few benchmark datasets may not be representative of their performance on unseen domains in the wild.

Domain Generalization

Improving Diversity with Adversarially Learned Transformations for Domain Generalization

1 code implementation15 Jun 2022 Tejas Gokhale, Rushil Anirudh, Jayaraman J. Thiagarajan, Bhavya Kailkhura, Chitta Baral, Yezhou Yang

To be successful in single source domain generalization, maximizing diversity of synthesized domains has emerged as one of the most effective strategies.

Domain Generalization

Zeroth-Order SciML: Non-intrusive Integration of Scientific Software with Deep Learning

no code implementations4 Jun 2022 Ioannis Tsaknakis, Bhavya Kailkhura, Sijia Liu, Donald Loveland, James Diffenderfer, Anna Maria Hiszpanski, Mingyi Hong

Existing knowledge integration approaches are limited to using differentiable knowledge sources to remain compatible with the first-order DL training paradigm.

Representing Polymers as Periodic Graphs with Learned Descriptors for Accurate Polymer Property Predictions

no code implementations27 May 2022 Evan R. Antoniuk, Peggy Li, Bhavya Kailkhura, Anna M. Hiszpanski

Our results illustrate how the incorporation of chemical intuition through directly encoding periodicity into our polymer graph representation leads to a considerable improvement in the accuracy and reliability of polymer property predictions.

A Fast and Convergent Proximal Algorithm for Regularized Nonconvex and Nonsmooth Bi-level Optimization

no code implementations30 Mar 2022 Ziyi Chen, Bhavya Kailkhura, Yi Zhou

In this work, we study a proximal gradient-type algorithm that adopts the approximate implicit differentiation (AID) scheme for nonconvex bi-level optimization with possibly nonconvex and nonsmooth regularizers.
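
For readers new to AID, a sketch of the exact hypergradient that approximate implicit differentiation targets, written for the smooth, unregularized special case (the paper additionally handles possibly nonconvex, nonsmooth regularizers): for the bilevel problem $\min_x \Phi(x) = f(x, y^*(x))$ with $y^*(x) = \arg\min_y g(x, y)$, the implicit function theorem gives

\[
\nabla \Phi(x) = \nabla_x f(x, y^*(x)) - \nabla^2_{xy} g(x, y^*(x)) \,\big[\nabla^2_{yy} g(x, y^*(x))\big]^{-1} \nabla_y f(x, y^*(x)).
\]

AID-type schemes replace $y^*(x)$ with the output of a few inner optimization steps and the inverse-Hessian-vector product with a truncated linear solver such as conjugate gradient.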

COPA: Certifying Robust Policies for Offline Reinforcement Learning against Poisoning Attacks

1 code implementation ICLR 2022 Fan Wu, Linyi Li, Chejian Xu, Huan Zhang, Bhavya Kailkhura, Krishnaram Kenthapadi, Ding Zhao, Bo Li

We leverage COPA to certify three RL environments trained with different algorithms and conclude: (1) the proposed robust aggregation protocols, such as temporal aggregation, can significantly improve the certifications; (2) our certifications for both per-state action stability and the cumulative reward bound are efficient and tight; (3) the certifications differ across training algorithms and environments, reflecting their intrinsic robustness properties.

Offline RL reinforcement-learning +1

Certified Adversarial Defenses Meet Out-of-Distribution Corruptions: Benchmarking Robustness and Simple Baselines

no code implementations1 Dec 2021 Jiachen Sun, Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen, Dan Hendrycks, Jihun Hamm, Z. Morley Mao

To alleviate this issue, we propose a novel data augmentation scheme, FourierMix, that produces augmentations to improve the spectral coverage of the training data.

Adversarial Robustness Benchmarking +1

I-PGD-AT: Efficient Adversarial Training via Imitating Iterative PGD Attack

no code implementations29 Sep 2021 Xiaosen Wang, Bhavya Kailkhura, Krishnaram Kenthapadi, Bo Li

Finally, to demonstrate the generality of I-PGD-AT, we integrate it into PGD adversarial training and show that it can even further improve the robustness.

On the Certified Robustness for Ensemble Models and Beyond

no code implementations ICLR 2022 Zhuolin Yang, Linyi Li, Xiaojun Xu, Bhavya Kailkhura, Tao Xie, Bo Li

Thus, to explore the conditions that guarantee to provide certifiably robust ensemble ML models, we first prove that diversified gradient and large confidence margin are sufficient and necessary conditions for certifiably robust ensemble models under the model-smoothness assumption.

Understanding the Limits of Unsupervised Domain Adaptation via Data Poisoning

1 code implementation NeurIPS 2021 Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen, Jihun Hamm

Unsupervised domain adaptation (UDA) enables cross-domain learning without target domain labels by transferring knowledge from a labeled source domain whose distribution differs from that of the target.

Data Poisoning Domain Generalization +1

Reliable Graph Neural Network Explanations Through Adversarial Training

no code implementations25 Jun 2021 Donald Loveland, Shusen Liu, Bhavya Kailkhura, Anna Hiszpanski, Yong Han

Graph neural network (GNN) explanations have largely been facilitated through post-hoc introspection.

Mixture of Robust Experts (MoRE): A Robust Denoising Method Towards Multiple Perturbations

no code implementations21 Apr 2021 Kaidi Xu, Chenan Wang, Hao Cheng, Bhavya Kailkhura, Xue Lin, Ryan Goldhahn

To tackle the susceptibility of deep neural networks to adversarial examples, adversarial training has been proposed, which provides a notion of robustness through an inner maximization problem (a first-order adversarial attack) embedded within the outer minimization of the training loss.

Adversarial Robustness Denoising

Certifiably-Robust Federated Adversarial Learning via Randomized Smoothing

no code implementations30 Mar 2021 Cheng Chen, Bhavya Kailkhura, Ryan Goldhahn, Yi Zhou

Federated learning is an emerging data-private distributed learning framework, which, however, is vulnerable to adversarial attacks.

Federated Learning

Multi-Prize Lottery Ticket Hypothesis: Finding Accurate Binary Neural Networks by Pruning A Randomly Weighted Network

1 code implementation17 Mar 2021 James Diffenderfer, Bhavya Kailkhura

In this paper, we propose (and prove) a stronger Multi-Prize Lottery Ticket Hypothesis: a sufficiently over-parameterized neural network with random weights contains several subnetworks (winning tickets) that (a) have comparable accuracy to a dense target network with learned weights (prize 1), (b) do not require any further training to achieve prize 1 (prize 2), and (c) are robust to extreme forms of quantization (i.e., binary weights and/or activations) (prize 3).

Classification with Binary Neural Network Classification with Binary Weight Network +1
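
A minimal sketch of the underlying idea, finding a binary subnetwork inside a randomly weighted network by learning per-weight scores and keeping the top-k (edge-popup-style supermask search); it is not the paper's biprop algorithm, and the layer sizes, keep ratio, and toy task below are arbitrary.

```python
# Supermask-style search for a binary subnetwork in a frozen random network.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GetSubnet(torch.autograd.Function):
    """Top-k binary mask from scores; straight-through gradient to the scores."""
    @staticmethod
    def forward(ctx, scores, k):
        mask = torch.zeros_like(scores)
        _, idx = scores.flatten().topk(k)
        mask.view(-1)[idx] = 1.0
        return mask

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output, None

class SupermaskBinaryLinear(nn.Module):
    def __init__(self, in_f, out_f, keep_ratio=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_f, in_f) / in_f ** 0.5, requires_grad=False)
        self.scores = nn.Parameter(torch.rand(out_f, in_f))    # only the scores are trained
        self.k = max(1, int(keep_ratio * out_f * in_f))

    def forward(self, x):
        mask = GetSubnet.apply(self.scores, self.k)
        alpha = (self.weight.abs() * mask).sum() / mask.sum()  # per-layer scale
        w_bin = alpha * torch.sign(self.weight) * mask          # kept weights become +/- alpha
        return F.linear(x, w_bin)

torch.manual_seed(0)
net = nn.Sequential(SupermaskBinaryLinear(20, 64), nn.ReLU(), SupermaskBinaryLinear(64, 2))
opt = torch.optim.SGD([p for p in net.parameters() if p.requires_grad], lr=0.1, momentum=0.9)
x = torch.randn(512, 20)
y = (x[:, 0] + x[:, 1] > 0).long()                              # toy labels
for _ in range(300):
    opt.zero_grad()
    loss = F.cross_entropy(net(x), y)
    loss.backward()
    opt.step()
print(f"toy training loss after supermask search: {loss.item():.3f}")
```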

Robusta: Robust AutoML for Feature Selection via Reinforcement Learning

no code implementations15 Jan 2021 Xiaoyang Wang, Bo Li, Yibo Zhang, Bhavya Kailkhura, Klara Nahrstedt

However, these AutoML pipelines only focus on improving the learning accuracy of benign samples while ignoring the ML model robustness under adversarial attacks.

AutoML Feature Importance +3

Multi-Prize Lottery Ticket Hypothesis: Finding Generalizable and Efficient Binary Subnetworks in a Randomly Weighted Neural Network

no code implementations ICLR 2021 James Diffenderfer, Bhavya Kailkhura

A sufficiently over-parameterized neural network with random weights contains several subnetworks (winning tickets) that (a) have comparable accuracy to a dense target network with learned weights (prize 1), (b) do not require any further training to achieve prize 1 (prize 2), and (c) are robust to extreme forms of quantization (i.e., binary weights and/or activations) (prize 3).

Quantization

Attribute-Guided Adversarial Training for Robustness to Natural Perturbations

3 code implementations3 Dec 2020 Tejas Gokhale, Rushil Anirudh, Bhavya Kailkhura, Jayaraman J. Thiagarajan, Chitta Baral, Yezhou Yang

While this deviation may not be exactly known, its broad characterization is specified a priori, in terms of attributes.

Attribute

Leveraging Uncertainty from Deep Learning for Trustworthy Materials Discovery Workflows

no code implementations2 Dec 2020 Jize Zhang, Bhavya Kailkhura, T. Yong-Jin Han

In this paper, we leverage predictive uncertainty of deep neural networks to answer challenging questions material scientists usually encounter in machine learning based materials applications workflows.

General Classification

How Robust are Randomized Smoothing based Defenses to Data Poisoning?

1 code implementation CVPR 2021 Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen, Jihun Hamm

Moreover, our attack is effective even when the victim trains the models from scratch using state-of-the-art robust training methods such as Gaussian data augmentation (Cohen et al., 2019), MACER (Zhai et al., 2020), and SmoothAdv (Salman et al., 2019) that achieve high certified adversarial robustness.

Adversarial Robustness Bilevel Optimization +2
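
For context, the randomized-smoothing prediction rule that these certified defenses train and certify is a majority vote over Gaussian-perturbed copies of the input. A minimal sketch with a toy base classifier follows; the certification radius computation and the poisoning attack studied in this paper are omitted.

```python
# Randomized-smoothing prediction: majority vote of the base classifier under
# Gaussian input noise. Sketch only; sigma and the sample budget are arbitrary.
import numpy as np

def smoothed_predict(base_classifier, x, sigma=0.25, n_samples=1000, rng=None):
    """Return the class the base classifier predicts most often under Gaussian noise."""
    rng = rng or np.random.default_rng(0)
    noisy = x + sigma * rng.standard_normal((n_samples,) + x.shape)
    votes = np.bincount([base_classifier(z) for z in noisy])
    return int(votes.argmax())

# toy base classifier on 2-D inputs: class 1 iff x[0] + x[1] > 0
f = lambda z: int(z[0] + z[1] > 0)
print(smoothed_predict(f, np.array([0.3, 0.1])))   # -> 1 with high probability
```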

A Statistical Mechanics Framework for Task-Agnostic Sample Design in Machine Learning

no code implementations NeurIPS 2020 Bhavya Kailkhura, Jayaraman J. Thiagarajan, Qunwei Li, Jize Zhang, Yi Zhou, Timo Bremer

Using this framework, we show that space-filling sample designs, such as blue noise and Poisson disk sampling, which optimize spectral properties, outperform random designs in terms of the generalization gap and characterize this gain in closed form.

BIG-bench Machine Learning
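
As an illustration of one of the space-filling designs analyzed here, a minimal dart-throwing Poisson disk sampler on the unit square; the radius and rejection budget below are arbitrary.

```python
# Dart-throwing Poisson disk sampling: accept a uniform candidate only if it is
# at least r away from every point kept so far.
import numpy as np

def poisson_disk(n_points, r=0.08, max_tries=20000, seed=0):
    rng = np.random.default_rng(seed)
    pts = []
    for _ in range(max_tries):
        cand = rng.random(2)                                   # uniform candidate point
        if all(np.linalg.norm(cand - p) >= r for p in pts):    # keep only if far from all kept points
            pts.append(cand)
            if len(pts) == n_points:
                break
    return np.array(pts)

design = poisson_disk(50)
print(design.shape)   # up to (50, 2); every pair of kept points is at least r apart
```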

FedCluster: Boosting the Convergence of Federated Learning via Cluster-Cycling

no code implementations22 Sep 2020 Cheng Chen, Ziyi Chen, Yi Zhou, Bhavya Kailkhura

We develop FedCluster, a novel federated learning framework with improved optimization efficiency, and investigate its theoretical convergence properties.

Federated Learning

Probabilistic Neighbourhood Component Analysis: Sample Efficient Uncertainty Estimation in Deep Learning

1 code implementation18 Jul 2020 Ankur Mallick, Chaitanya Dwivedi, Bhavya Kailkhura, Gauri Joshi, T. Yong-Jin Han

In this work, we show that the uncertainty estimation capability of state-of-the-art BNNs and Deep Ensemble models degrades significantly when the amount of training data is small.

COVID-19 Diagnosis Uncertainty Quantification

Explainable Deep Learning for Uncovering Actionable Scientific Insights for Materials Discovery and Design

no code implementations16 Jul 2020 Shusen Liu, Bhavya Kailkhura, Jize Zhang, Anna M. Hiszpanski, Emily Robertson, Donald Loveland, T. Yong-Jin Han

The scientific community has been increasingly interested in harnessing the power of deep learning to solve various domain challenges.

Actionable Attribution Maps for Scientific Machine Learning

no code implementations30 Jun 2020 Shusen Liu, Bhavya Kailkhura, Jize Zhang, Anna M. Hiszpanski, Emily Robertson, Donald Loveland, T. Yong-Jin Han

The scientific community has been increasingly interested in harnessing the power of deep learning to solve various domain challenges.

BIG-bench Machine Learning

Adversarial Mutual Information for Text Generation

1 code implementation ICML 2020 Boyuan Pan, Yazheng Yang, Kaizhao Liang, Bhavya Kailkhura, Zhongming Jin, Xian-Sheng Hua, Deng Cai, Bo Li

Recent advances in maximizing mutual information (MI) between the source and target have demonstrated its effectiveness in text generation.

Text Generation

A Primer on Zeroth-Order Optimization in Signal Processing and Machine Learning

no code implementations11 Jun 2020 Sijia Liu, Pin-Yu Chen, Bhavya Kailkhura, Gaoyuan Zhang, Alfred Hero, Pramod K. Varshney

Zeroth-order (ZO) optimization is a subset of gradient-free optimization that emerges in many signal processing and machine learning applications.

BIG-bench Machine Learning Management
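
A minimal sketch of the basic randomized two-point gradient estimator at the heart of the ZO toolbox this primer surveys; the smoothing parameter, query budget, and toy objective below are arbitrary.

```python
# Randomized (two-point) zeroth-order gradient estimate:
# grad f(x) ~ (d / mu) * (f(x + mu*u) - f(x)) * u, averaged over random unit directions u.
import numpy as np

def zo_gradient_rge(f, x, mu=1e-3, n_queries=200, rng=None):
    rng = rng or np.random.default_rng(0)
    d, g = x.size, np.zeros_like(x)
    for _ in range(n_queries):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)                      # random unit direction
        g += (d / mu) * (f(x + mu * u) - f(x)) * u
    return g / n_queries                            # noisy but (nearly) unbiased estimate

f = lambda w: np.sum((w - 1.0) ** 2)                # toy objective, true grad = 2 * (w - 1)
x = np.zeros(5)
print("ZO estimate:", np.round(zo_gradient_rge(f, x), 2))
print("true grad  :", 2 * (x - 1.0))
```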

Mix-n-Match: Ensemble and Compositional Methods for Uncertainty Calibration in Deep Learning

1 code implementation16 Mar 2020 Jize Zhang, Bhavya Kailkhura, T. Yong-Jin Han

We show that none of the existing methods satisfy all three requirements, and demonstrate how Mix-n-Match calibration strategies (i.e., ensemble and composition) can help achieve remarkably better data-efficiency and expressive power while provably maintaining the classification accuracy of the original classifier.

Small Data Image Classification
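
For context, a sketch of plain temperature scaling, the base ingredient that Mix-n-Match's ensemble strategy combines with the uncalibrated and uniform distributions; the synthetic logits and the use of scipy here are an illustration, not the paper's code.

```python
# Temperature scaling: fit a single temperature T by minimizing NLL on held-out logits.
import numpy as np
from scipy.optimize import minimize_scalar

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=1, keepdims=True)
    p = np.exp(z)
    return p / p.sum(axis=1, keepdims=True)

def nll(T, logits, labels):
    p = softmax(logits, T)
    return -np.mean(np.log(p[np.arange(len(labels)), labels] + 1e-12))

rng = np.random.default_rng(0)
labels = rng.integers(0, 5, size=1000)
logits = 3.0 * np.eye(5)[labels] + rng.standard_normal((1000, 5))   # synthetic toy logits

res = minimize_scalar(lambda T: nll(T, logits, labels), bounds=(0.05, 10.0), method="bounded")
print(f"fitted temperature: {res.x:.2f}")
```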

Anomalous Example Detection in Deep Learning: A Survey

no code implementations16 Mar 2020 Saikiran Bulusu, Bhavya Kailkhura, Bo Li, Pramod K. Varshney, Dawn Song

This survey provides a structured and comprehensive overview of the research on anomaly detection for DL-based applications.

Anomaly Detection

Automatic Perturbation Analysis for Scalable Certified Robustness and Beyond

5 code implementations NeurIPS 2020 Kaidi Xu, Zhouxing Shi, Huan Zhang, Yihan Wang, Kai-Wei Chang, Minlie Huang, Bhavya Kailkhura, Xue Lin, Cho-Jui Hsieh

Linear relaxation based perturbation analysis (LiRPA) for neural networks, which computes provable linear bounds of output neurons given a certain amount of input perturbation, has become a core component in robustness verification and certified defense.

Quantization
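
A minimal sketch of interval bound propagation, the simplest member of the bound-propagation family that LiRPA generalizes: it yields provable (if loose) output bounds for all inputs in an l_inf ball around a point. The toy network and radius below are arbitrary, and this is not LiRPA's tighter linear relaxation.

```python
# Interval bound propagation through a small ReLU network with random toy weights.
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((8, 4)), rng.standard_normal(8)
W2, b2 = rng.standard_normal((2, 8)), rng.standard_normal(2)

def interval_linear(lo, hi, W, b):
    # split W into positive and negative parts to propagate the box exactly
    Wp, Wn = np.clip(W, 0, None), np.clip(W, None, 0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

x0, eps = np.array([0.5, -0.2, 0.1, 0.0]), 0.1
lo, hi = x0 - eps, x0 + eps                      # l_inf ball around x0
lo, hi = interval_linear(lo, hi, W1, b1)
lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)    # ReLU is monotone: apply to both ends
lo, hi = interval_linear(lo, hi, W2, b2)
print("certified output lower bounds:", np.round(lo, 3))
print("certified output upper bounds:", np.round(hi, 3))
```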

TSS: Transformation-Specific Smoothing for Robustness Certification

1 code implementation27 Feb 2020 Linyi Li, Maurice Weber, Xiaojun Xu, Luka Rimanic, Bhavya Kailkhura, Tao Xie, Ce Zhang, Bo Li

Moreover, to the best of our knowledge, TSS is the first approach that achieves nontrivial certified robustness on the large-scale ImageNet dataset.

Towards an Efficient and General Framework of Robust Training for Graph Neural Networks

no code implementations25 Feb 2020 Kaidi Xu, Sijia Liu, Pin-Yu Chen, Mengshu Sun, Caiwen Ding, Bhavya Kailkhura, Xue Lin

To overcome these limitations, we propose a general framework which leverages greedy search algorithms and zeroth-order methods to obtain robust GNNs in a generic and efficient manner.

MimicGAN: Robust Projection onto Image Manifolds with Corruption Mimicking

no code implementations16 Dec 2019 Rushil Anirudh, Jayaraman J. Thiagarajan, Bhavya Kailkhura, Timo Bremer

However, PGD is a brittle optimization technique that fails to identify the right projection (or latent vector) when the observation is corrupted or perturbed, even by a small amount.

Adversarial Defense Anomaly Detection +2

Scalability vs. Utility: Do We Have to Sacrifice One for the Other in Data Importance Quantification?

1 code implementation CVPR 2021 Ruoxi Jia, Fan Wu, Xuehui Sun, Jiacen Xu, David Dao, Bhavya Kailkhura, Ce Zhang, Bo Li, Dawn Song

Quantifying the importance of each training point to a learning task is a fundamental problem in machine learning, and the estimated importance scores have been leveraged to guide a range of data workflows such as data summarization and domain adaptation.

Data Summarization Domain Adaptation

Deep Kernels with Probabilistic Embeddings for Small-Data Learning

1 code implementation13 Oct 2019 Ankur Mallick, Chaitanya Dwivedi, Bhavya Kailkhura, Gauri Joshi, T. Yong-Jin Han

Experiments on a variety of datasets show that our approach outperforms the state-of-the-art in GP kernel learning in both supervised and semi-supervised settings.

Gaussian Processes Representation Learning +1

On the Design of Black-box Adversarial Examples by Leveraging Gradient-free Optimization and Operator Splitting Method

1 code implementation ICCV 2019 Pu Zhao, Sijia Liu, Pin-Yu Chen, Nghia Hoang, Kaidi Xu, Bhavya Kailkhura, Xue Lin

Robust machine learning is currently one of the most prominent topics which could potentially help shaping a future of advanced AI platforms that not only perform well in average cases but also in worst cases or adverse situations.

Adversarial Attack Bayesian Optimization +1

Generative Counterfactual Introspection for Explainable Deep Learning

no code implementations6 Jul 2019 Shusen Liu, Bhavya Kailkhura, Donald Loveland, Yong Han

In this work, we propose an introspection technique for deep neural networks that relies on a generative model to instigate salient editing of the input image for model interpretation.

counterfactual

G-PATE: Scalable Differentially Private Data Generator via Private Aggregation of Teacher Discriminators

2 code implementations NeurIPS 2021 Yunhui Long, Boxin Wang, Zhuolin Yang, Bhavya Kailkhura, Aston Zhang, Carl A. Gunter, Bo Li

In particular, we train a student data generator with an ensemble of teacher discriminators and propose a novel private gradient aggregation mechanism to ensure differential privacy on all information that flows from teacher discriminators to the student generator.

BIG-bench Machine Learning Privacy Preserving

A Look at the Effect of Sample Design on Generalization through the Lens of Spectral Analysis

no code implementations6 Jun 2019 Bhavya Kailkhura, Jayaraman J. Thiagarajan, Qunwei Li, Peer-Timo Bremer

This paper provides a general framework to study the effect of sampling properties of training data on the generalization error of the learned machine learning (ML) models.

valid

Reliable and Explainable Machine Learning Methods for Accelerated Material Discovery

no code implementations5 Jan 2019 Bhavya Kailkhura, Brian Gallagher, Sookyung Kim, Anna Hiszpanski, T. Yong-Jin Han

We also propose a transfer learning technique and show that the performance loss due to models' simplicity can be overcome by exploiting correlations among different material properties.

BIG-bench Machine Learning Transfer Learning

MR-GAN: Manifold Regularized Generative Adversarial Networks

no code implementations22 Nov 2018 Qunwei Li, Bhavya Kailkhura, Rushil Anirudh, Yi Zhou, Yingbin Liang, Pramod Varshney

Despite the growing interest in generative adversarial networks (GANs), training GANs remains a challenging problem, both from a theoretical and a practical standpoint.

Universal Decision-Based Black-Box Perturbations: Breaking Security-Through-Obscurity Defenses

no code implementations9 Nov 2018 Thomas A. Hogan, Bhavya Kailkhura

We study the problem of finding a universal (image-agnostic) perturbation to fool machine learning (ML) classifiers (e.g., neural nets, decision trees) in the hard-label black-box setting.

Coverage-Based Designs Improve Sample Mining and Hyper-Parameter Optimization

1 code implementation5 Sep 2018 Gowtham Muniraju, Bhavya Kailkhura, Jayaraman J. Thiagarajan, Peer-Timo Bremer, Cihan Tepedelenlioglu, Andreas Spanias

Sampling one or more effective solutions from large search spaces is a recurring idea in machine learning, and sequential optimization has become a popular solution.

Bayesian Optimization Data Summarization

Zeroth-Order Stochastic Variance Reduction for Nonconvex Optimization

1 code implementation NeurIPS 2018 Sijia Liu, Bhavya Kailkhura, Pin-Yu Chen, Pai-Shun Ting, Shiyu Chang, Lisa Amini

As application demands for zeroth-order (gradient-free) optimization accelerate, the need for variance-reduced and faster-converging approaches is also intensifying.

Material Classification Stochastic Optimization

Human-Machine Inference Networks For Smart Decision Making: Opportunities and Challenges

no code implementations29 Jan 2018 Aditya Vempaty, Bhavya Kailkhura, Pramod K. Varshney

The emerging paradigm of Human-Machine Inference Networks (HuMaINs) combines complementary cognitive strengths of humans and machines in an intelligent manner to tackle various inference tasks and achieves higher performance than either humans or machines by themselves.

BIG-bench Machine Learning Decision Making

A Spectral Approach for the Design of Experiments: Design, Analysis and Algorithms

no code implementations16 Dec 2017 Bhavya Kailkhura, Jayaraman J. Thiagarajan, Charvi Rastogi, Pramod K. Varshney, Peer-Timo Bremer

Third, we propose an efficient estimator to evaluate the space-filling properties of sample designs in arbitrary dimensions and use it to develop an optimization framework to generate high quality space-filling designs.

Image Reconstruction

TreeView: Peeking into Deep Neural Networks Via Feature-Space Partitioning

no code implementations22 Nov 2016 Jayaraman J. Thiagarajan, Bhavya Kailkhura, Prasanna Sattigeri, Karthikeyan Natesan Ramamurthy

In this paper, we take a step in the direction of tackling the problem of interpretability without compromising the model accuracy.

Universal Collaboration Strategies for Signal Detection: A Sparse Learning Approach

no code implementations22 Jan 2016 Prashant Khanduri, Bhavya Kailkhura, Jayaraman J. Thiagarajan, Pramod K. Varshney

This paper considers the problem of high dimensional signal detection in a large distributed network whose nodes can collaborate with their one-hop neighboring nodes (spatial collaboration).

Sparse Learning

Consensus based Detection in the Presence of Data Falsification Attacks

no code implementations14 Apr 2015 Bhavya Kailkhura, Swastik Brahma, Pramod K. Varshney

This paper considers the problem of detection in distributed networks in the presence of data falsification (Byzantine) attacks.
