1 code implementation • ICML 2020 • Jize Zhang, Bhavya Kailkhura, T. Yong-Jin Han
This paper studies the problem of post-hoc calibration of machine learning classifiers.
1 code implementation • 17 Jul 2023 • Akshay Mehra, Yunbei Zhang, Bhavya Kailkhura, Jihun Hamm
To enable risk-averse predictions from a DG classifier, we propose a novel inference procedure, Test-Time Neural Style Smoothing (TT-NSS), that uses a "style-smoothed" version of the DG classifier for prediction at test time.
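The style-smoothed inference idea can be sketched as a majority vote over stylized copies of the input, with an abstention when no class dominates. This is a minimal illustration, not the paper's method: `stylize` and `abstain_thresh` are hypothetical stand-ins for TT-NSS's neural style-transfer module and decision rule.

```python
def style_smoothed_predict(classifier, stylize, x, styles, abstain_thresh=0.6):
    """Vote over stylized copies of x; abstain (return None) if the top
    class does not win at least `abstain_thresh` of the votes."""
    votes = {}
    for s in styles:
        y = classifier(stylize(x, s))
        votes[y] = votes.get(y, 0) + 1
    top, count = max(votes.items(), key=lambda kv: kv[1])
    return top if count / len(styles) >= abstain_thresh else None
```

A risk-averse prediction is then one that is returned only when it is stable across styles; otherwise the classifier abstains.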
no code implementations • 17 Jul 2023 • Kelsey Lieberman, James Diffenderfer, Charles Godfrey, Bhavya Kailkhura
Our benchmarks, spectral inspection tools, and findings provide a crucial bridge to the real-world adoption of NIC.
1 code implementation • 3 Jul 2023 • Jinhao Duan, Hao Cheng, Shiqi Wang, Chenan Wang, Alex Zavalny, Renjing Xu, Bhavya Kailkhura, Kaidi Xu
Our research is motivated by the observation that tokens contribute unequally to the meaning of generations from auto-regressive LLMs, i.e., some tokens are more relevant (or representative) than others, yet all tokens are weighted equally when estimating uncertainty.
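The observation above suggests weighting per-token surprisal by relevance rather than averaging uniformly. A minimal sketch, assuming relevance scores are given (the paper derives them from each token's semantic contribution, which is not reproduced here):

```python
def weighted_token_uncertainty(token_logprobs, relevance):
    """Relevance-weighted average surprisal of a generated sequence.

    token_logprobs: log p(token_i | prefix) for each generated token
    relevance: nonnegative relevance score per token (hypothetical inputs)
    """
    assert len(token_logprobs) == len(relevance)
    total = sum(relevance)
    if total == 0:
        return 0.0
    # Tokens with higher relevance dominate the uncertainty estimate.
    return sum(-lp * w for lp, w in zip(token_logprobs, relevance)) / total
```

With uniform relevance this reduces to the standard mean negative log-likelihood; zeroing a token's relevance removes it from the estimate.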
no code implementations • 23 Feb 2023 • Yize Li, Pu Zhao, Xue Lin, Bhavya Kailkhura, Ryan Goldhahn
Deep neural networks (DNNs) are sensitive to adversarial examples, resulting in fragile and unreliable performance in the real world.
no code implementations • 13 Oct 2022 • Brian R. Bartoldson, Bhavya Kailkhura, Davis Blalock
To address this problem, there has been a great deal of research on *algorithmically-efficient deep learning*, which seeks to reduce training costs not at the hardware or implementation level, but through changes in the semantics of the training program.
no code implementations • 26 Sep 2022 • Hao Cheng, Pu Zhao, Yize Li, Xue Lin, James Diffenderfer, Ryan Goldhahn, Bhavya Kailkhura
Recently, Diffenderfer and Kailkhura proposed a new paradigm for learning compact yet highly accurate binary neural networks simply by pruning and quantizing randomly weighted full precision neural networks.
no code implementations • 8 Jul 2022 • Sara Fridovich-Keil, Brian R. Bartoldson, James Diffenderfer, Bhavya Kailkhura, Peer-Timo Bremer
However, there still is no clear understanding of the conditions on OOD data and model properties that are required to observe effective robustness.
1 code implementation • 24 Jun 2022 • Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen, Jihun Hamm
This highlights that the performance of DG methods on a few benchmark datasets may not be representative of their performance on unseen domains in the wild.
1 code implementation • 15 Jun 2022 • Tejas Gokhale, Rushil Anirudh, Jayaraman J. Thiagarajan, Bhavya Kailkhura, Chitta Baral, Yezhou Yang
To be successful in single source domain generalization, maximizing diversity of synthesized domains has emerged as one of the most effective strategies.
no code implementations • 4 Jun 2022 • Ioannis Tsaknakis, Bhavya Kailkhura, Sijia Liu, Donald Loveland, James Diffenderfer, Anna Maria Hiszpanski, Mingyi Hong
Existing knowledge integration approaches are limited to using differentiable knowledge sources so as to remain compatible with the first-order DL training paradigm.
no code implementations • 27 May 2022 • Evan R. Antoniuk, Peggy Li, Bhavya Kailkhura, Anna M. Hiszpanski
Our results illustrate how the incorporation of chemical intuition through directly encoding periodicity into our polymer graph representation leads to a considerable improvement in the accuracy and reliability of polymer property predictions.
no code implementations • 30 Mar 2022 • Ziyi Chen, Bhavya Kailkhura, Yi Zhou
In this work, we study a proximal gradient-type algorithm that adopts the approximate implicit differentiation (AID) scheme for nonconvex bi-level optimization with possibly nonconvex and nonsmooth regularizers.
no code implementations • 21 Mar 2022 • Kshitij Bhardwaj, James Diffenderfer, Bhavya Kailkhura, Maya Gokhale
To improve their robustness, DNNs must be able to update themselves to enhance their prediction accuracy.
1 code implementation • ICLR 2022 • Fan Wu, Linyi Li, Chejian Xu, huan zhang, Bhavya Kailkhura, Krishnaram Kenthapadi, Ding Zhao, Bo Li
We leverage COPA to certify three RL environments trained with different algorithms and conclude: (1) the proposed robust aggregation protocols, such as temporal aggregation, can significantly improve the certifications; (2) our certifications for both per-state action stability and the cumulative reward bound are efficient and tight; (3) the certifications differ across training algorithms and environments, reflecting their intrinsic robustness properties.
5 code implementations • 28 Jan 2022 • Jiachen Sun, Qingzhao Zhang, Bhavya Kailkhura, Zhiding Yu, Chaowei Xiao, Z. Morley Mao
Deep neural networks on 3D point cloud data have been widely used in the real world, especially in safety-critical applications.
Ranked #1 on 3D Point Cloud Data Augmentation on ModelNet40-C
Tasks: 3D Point Cloud Classification, 3D Point Cloud Data Augmentation, +1
no code implementations • 1 Dec 2021 • Jiachen Sun, Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen, Dan Hendrycks, Jihun Hamm, Z. Morley Mao
To alleviate this issue, we propose a novel data augmentation scheme, FourierMix, that produces augmentations to improve the spectral coverage of the training data.
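Frequency-domain augmentation of this kind can be sketched by perturbing an image's amplitude spectrum while keeping its phase. This is a simplified illustration under stated assumptions, not FourierMix itself, whose mixing operations are richer:

```python
import numpy as np

def fourier_augment(img, severity=0.5, rng=None):
    """Randomly rescale the amplitude spectrum of a (grayscale) image
    while preserving phase, broadening the spectral coverage of the data."""
    rng = np.random.default_rng(rng)
    spec = np.fft.fft2(img)
    amp, phase = np.abs(spec), np.angle(spec)
    # Per-frequency multiplicative noise on the amplitude spectrum.
    noise = 1.0 + severity * rng.uniform(-1, 1, size=amp.shape)
    return np.fft.ifft2(noise * amp * np.exp(1j * phase)).real
```

With `severity=0` the image is returned unchanged; larger severities produce progressively stronger spectral perturbations.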
no code implementations • 29 Sep 2021 • Xiaosen Wang, Bhavya Kailkhura, Krishnaram Kenthapadi, Bo Li
Finally, to demonstrate the generality of I-PGD-AT, we integrate it into PGD adversarial training and show that it can even further improve the robustness.
no code implementations • 29 Sep 2021 • Cheng Chen, Jiaying Zhou, Jie Ding, Yi Zhou, Bhavya Kailkhura
We develop an assisted learning framework for assisting organization-level learners to improve their learning performance with limited and imbalanced data.
no code implementations • ICLR 2022 • Zhuolin Yang, Linyi Li, Xiaojun Xu, Bhavya Kailkhura, Tao Xie, Bo Li
Thus, to explore the conditions that guarantee to provide certifiably robust ensemble ML models, we first prove that diversified gradient and large confidence margin are sufficient and necessary conditions for certifiably robust ensemble models under the model-smoothness assumption.
1 code implementation • NeurIPS 2021 • Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen, Jihun Hamm
Unsupervised domain adaptation (UDA) enables cross-domain learning without target domain labels by transferring knowledge from a labeled source domain whose distribution differs from that of the target.
no code implementations • 25 Jun 2021 • Donald Loveland, Shusen Liu, Bhavya Kailkhura, Anna Hiszpanski, Yong Han
Graph neural network (GNN) explanations have largely been facilitated through post-hoc introspection.
no code implementations • ICML Workshop AML 2021 • Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen, Jihun Hamm
However, this limited effect of poisoning is restricted to the setting where training and test data are drawn from the same distribution.
2 code implementations • NeurIPS 2021 • James Diffenderfer, Brian R. Bartoldson, Shreya Chaganti, Jize Zhang, Bhavya Kailkhura
Successful adoption of deep learning (DL) in the wild requires models to be: (1) compact, (2) accurate, and (3) robust to distributional shifts.
no code implementations • 21 Apr 2021 • Kaidi Xu, Chenan Wang, Hao Cheng, Bhavya Kailkhura, Xue Lin, Ryan Goldhahn
To tackle the susceptibility of deep neural networks to adversarial examples, adversarial training has been proposed, which provides a notion of robustness via an inner maximization problem (a first-order adversary) embedded within the outer minimization of the training loss.
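The inner maximization is typically approximated by projected gradient ascent (PGD) within an L-infinity ball around the input. A minimal sketch, assuming a caller-supplied `grad_fn` that returns the loss gradient with respect to the input:

```python
import numpy as np

def pgd_attack(x, grad_fn, eps=0.1, alpha=0.02, steps=10):
    """Inner maximization of adversarial training: projected gradient
    ascent on the loss within the L-infinity ball of radius eps around x."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))  # ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)         # project to ball
    return x_adv
```

Adversarial training then minimizes the training loss evaluated at `pgd_attack(x, ...)` instead of at the clean input `x`.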
no code implementations • 30 Mar 2021 • Cheng Chen, Bhavya Kailkhura, Ryan Goldhahn, Yi Zhou
Federated learning is an emerging data-private distributed learning framework, which, however, is vulnerable to adversarial attacks.
1 code implementation • 17 Mar 2021 • James Diffenderfer, Bhavya Kailkhura
In this paper, we propose (and prove) a stronger Multi-Prize Lottery Ticket Hypothesis: A sufficiently over-parameterized neural network with random weights contains several subnetworks (winning tickets) that (a) have comparable accuracy to a dense target network with learned weights (prize 1), (b) do not require any further training to achieve prize 1 (prize 2), and (c) are robust to extreme forms of quantization (i.e., binary weights and/or activations) (prize 3).
Ranked #1 on Classification with Binary Neural Network on CIFAR-10 (Top-1 metric)
Tasks: Classification with Binary Neural Network, Classification with Binary Weight Network, +1
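The pruning-plus-binarization step behind such tickets can be sketched as keeping the top-scoring fraction of random weights and replacing survivors by their sign, with no weight training. The `scores` here are hypothetical inputs (in practice they are themselves learned while weights stay frozen):

```python
import numpy as np

def multi_prize_ticket(weights, scores, keep=0.5):
    """Extract a binary subnetwork from a random weight tensor: keep the
    `keep` fraction of weights with the highest scores, binarize the
    survivors to their sign, and prune the rest (set them to zero)."""
    flat = scores.ravel()
    k = int(len(flat) * keep)
    thresh = np.sort(flat)[::-1][k - 1] if k > 0 else np.inf
    mask = scores >= thresh  # ties may keep slightly more than `keep`
    return np.sign(weights) * mask
```

The resulting tensor has entries in {-1, 0, +1}, i.e., a pruned binary-weight subnetwork of the original random network.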
no code implementations • 15 Jan 2021 • Xiaoyang Wang, Bo Li, Yibo Zhang, Bhavya Kailkhura, Klara Nahrstedt
However, these AutoML pipelines only focus on improving the learning accuracy of benign samples while ignoring the ML model robustness under adversarial attacks.
no code implementations • ICLR 2021 • James Diffenderfer, Bhavya Kailkhura
A sufficiently over-parameterized neural network with random weights contains several subnetworks (winning tickets) that (a) have comparable accuracy to a dense target network with learned weights (prize 1), (b) do not require any further training to achieve prize 1 (prize 2), and (c) are robust to extreme forms of quantization (i.e., binary weights and/or activations) (prize 3).
no code implementations • ICCV 2021 • MingJie Sun, Zichao Li, Chaowei Xiao, Haonan Qiu, Bhavya Kailkhura, Mingyan Liu, Bo Li
Specifically, EdgeNetRob and EdgeGANRob first explicitly extract shape structure features from a given image via an edge detection algorithm.
3 code implementations • 3 Dec 2020 • Tejas Gokhale, Rushil Anirudh, Bhavya Kailkhura, Jayaraman J. Thiagarajan, Chitta Baral, Yezhou Yang
While this deviation may not be exactly known, its broad characterization is specified a priori, in terms of attributes.
no code implementations • 2 Dec 2020 • Jize Zhang, Bhavya Kailkhura, T. Yong-Jin Han
In this paper, we leverage predictive uncertainty of deep neural networks to answer challenging questions material scientists usually encounter in machine learning based materials applications workflows.
1 code implementation • CVPR 2021 • Akshay Mehra, Bhavya Kailkhura, Pin-Yu Chen, Jihun Hamm
Moreover, our attack is effective even when the victim trains the models from scratch using state-of-the-art robust training methods such as Gaussian data augmentation (Cohen et al., 2019), MACER (Zhai et al., 2020), and SmoothAdv (Salman et al., 2019) that achieve high certified adversarial robustness.
no code implementations • NeurIPS 2020 • Bhavya Kailkhura, Jayaraman J. Thiagarajan, Qunwei Li, Jize Zhang, Yi Zhou, Timo Bremer
Using this framework, we show that space-filling sample designs, such as blue noise and Poisson disk sampling, which optimize spectral properties, outperform random designs in terms of the generalization gap and characterize this gain in a closed-form.
no code implementations • 22 Sep 2020 • Cheng Chen, Ziyi Chen, Yi Zhou, Bhavya Kailkhura
We develop FedCluster, a novel federated learning framework with improved optimization efficiency, and investigate its theoretical convergence properties.
1 code implementation • 18 Jul 2020 • Ankur Mallick, Chaitanya Dwivedi, Bhavya Kailkhura, Gauri Joshi, T. Yong-Jin Han
In this work, we show that the uncertainty estimation capability of state-of-the-art BNNs and Deep Ensemble models degrades significantly when the amount of training data is small.
no code implementations • 16 Jul 2020 • Shusen Liu, Bhavya Kailkhura, Jize Zhang, Anna M. Hiszpanski, Emily Robertson, Donald Loveland, T. Yong-Jin Han
The scientific community has been increasingly interested in harnessing the power of deep learning to solve various domain challenges.
1 code implementation • ICML 2020 • Boyuan Pan, Yazheng Yang, Kaizhao Liang, Bhavya Kailkhura, Zhongming Jin, Xian-Sheng Hua, Deng Cai, Bo Li
Recent advances in maximizing mutual information (MI) between the source and target have demonstrated its effectiveness in text generation.
no code implementations • 11 Jun 2020 • Sijia Liu, Pin-Yu Chen, Bhavya Kailkhura, Gaoyuan Zhang, Alfred Hero, Pramod K. Varshney
Zeroth-order (ZO) optimization is a subset of gradient-free optimization that emerges in many signal processing and machine learning applications.
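The basic building block of ZO optimization is a gradient estimate built from function evaluations only. A minimal sketch of the standard two-point estimator with random Gaussian directions (the survey covers this and several variance-reduced variants):

```python
import numpy as np

def zo_gradient(f, x, mu=1e-3, n_samples=20, rng=None):
    """Two-point zeroth-order gradient estimate: average directional
    finite differences of f along random Gaussian directions."""
    rng = np.random.default_rng(rng)
    g = np.zeros(len(x))
    for _ in range(n_samples):
        u = rng.standard_normal(len(x))
        g += (f(x + mu * u) - f(x)) / mu * u  # directional difference
    return g / n_samples
```

Plugging this estimate into gradient descent yields a gradient-free optimizer; smaller `mu` reduces bias while more samples reduce variance.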
1 code implementation • 16 Mar 2020 • Jize Zhang, Bhavya Kailkhura, T. Yong-Jin Han
We show that none of the existing methods satisfy all three requirements, and demonstrate how Mix-n-Match calibration strategies (i.e., ensemble and composition) can help achieve remarkably better data-efficiency and expressive power while provably maintaining the classification accuracy of the original classifier.
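The accuracy-preserving ensemble idea can be sketched with temperature scaling as the base calibrator; the temperatures and mixing weights below are illustrative (in practice they are fit on a held-out set), and this is a simplified reading of the Mix-n-Match strategy, not its full form:

```python
import numpy as np

def temperature_scale(logits, T):
    """Temperature scaling: soften/sharpen logits without changing the
    argmax, so classification accuracy is preserved."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    p = np.exp(z)
    return p / p.sum(axis=-1, keepdims=True)

def mix_calibrators(logits, temps, weights):
    """'Mix' step: convex combination of temperature-scaled calibrators,
    which again leaves the predicted class unchanged."""
    return sum(w * temperature_scale(logits, T) for T, w in zip(temps, weights))
```

Because every component shares the same argmax as the raw logits, the mixture calibrates confidences while provably keeping the original predictions.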
no code implementations • 16 Mar 2020 • Saikiran Bulusu, Bhavya Kailkhura, Bo Li, Pramod K. Varshney, Dawn Song
This survey tries to provide a structured and comprehensive overview of the research on anomaly detection for DL based applications.
5 code implementations • NeurIPS 2020 • Kaidi Xu, Zhouxing Shi, huan zhang, Yihan Wang, Kai-Wei Chang, Minlie Huang, Bhavya Kailkhura, Xue Lin, Cho-Jui Hsieh
Linear relaxation based perturbation analysis (LiRPA) for neural networks, which computes provable linear bounds of output neurons given a certain amount of input perturbation, has become a core component in robustness verification and certified defense.
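The simplest member of this perturbation-analysis family is interval bound propagation through a linear layer; LiRPA itself maintains full linear (not merely interval) bounds, so the sketch below only illustrates the flavor of sound output bounds under input perturbation:

```python
import numpy as np

def interval_bounds(W, b, lo, hi):
    """Sound interval bounds on y = W x + b when each input coordinate
    lies in [lo, hi]: propagate the box's center and radius."""
    c, r = (lo + hi) / 2, (hi - lo) / 2  # center and radius of input box
    yc = W @ c + b
    yr = np.abs(W) @ r                   # worst-case deviation per output
    return yc - yr, yc + yr
```

Chaining such bounds through a network (with relaxations at nonlinearities) is the core of robustness verification and certified defense.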
1 code implementation • 27 Feb 2020 • Linyi Li, Maurice Weber, Xiaojun Xu, Luka Rimanic, Bhavya Kailkhura, Tao Xie, Ce Zhang, Bo Li
Moreover, to the best of our knowledge, TSS is the first approach that achieves nontrivial certified robustness on the large-scale ImageNet dataset.
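The randomized-smoothing primitive underlying such certificates can be sketched as a majority vote of the base classifier under Gaussian input noise; TSS generalizes this building block to semantic transformations, which is not shown here:

```python
import numpy as np

def smoothed_predict(classifier, x, sigma=0.25, n=100, rng=None):
    """Prediction of the smoothed classifier: majority vote of the base
    classifier over n Gaussian-perturbed copies of the input."""
    rng = np.random.default_rng(rng)
    votes = {}
    for _ in range(n):
        y = classifier(x + sigma * rng.standard_normal(x.shape))
        votes[y] = votes.get(y, 0) + 1
    return max(votes, key=votes.get)
```

The margin of the vote then translates into a certified radius around `x` within which the smoothed prediction cannot change.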
no code implementations • 25 Feb 2020 • Kaidi Xu, Sijia Liu, Pin-Yu Chen, Mengshu Sun, Caiwen Ding, Bhavya Kailkhura, Xue Lin
To overcome these limitations, we propose a general framework which leverages the greedy search algorithms and zeroth-order methods to obtain robust GNNs in a generic and an efficient manner.
no code implementations • 16 Dec 2019 • Rushil Anirudh, Jayaraman J. Thiagarajan, Bhavya Kailkhura, Timo Bremer
However, PGD is a brittle optimization technique that fails to identify the right projection (or latent vector) when the observation is corrupted or perturbed, even by a small amount.
no code implementations • 5 Dec 2019 • J. Luc Peterson, Ben Bay, Joe Koning, Peter Robinson, Jessica Semler, Jeremy White, Rushil Anirudh, Kevin Athey, Peer-Timo Bremer, Francesco Di Natale, David Fox, Jim A. Gaffney, Sam A. Jacobs, Bhavya Kailkhura, Bogdan Kustowski, Steven Langer, Brian Spears, Jayaraman Thiagarajan, Brian Van Essen, Jae-Seung Yeom
With the growing complexity of computational and experimental facilities, many scientific researchers are turning to machine learning (ML) techniques to analyze large scale ensemble data.
1 code implementation • CVPR 2021 • Ruoxi Jia, Fan Wu, Xuehui Sun, Jiacen Xu, David Dao, Bhavya Kailkhura, Ce Zhang, Bo Li, Dawn Song
Quantifying the importance of each training point to a learning task is a fundamental problem in machine learning, and the estimated importance scores have been leveraged to guide a range of data workflows such as data summarization and domain adaptation.
1 code implementation • 13 Oct 2019 • Ankur Mallick, Chaitanya Dwivedi, Bhavya Kailkhura, Gauri Joshi, T. Yong-Jin Han
Experiments on a variety of datasets show that our approach outperforms the state-of-the-art in GP kernel learning in both supervised and semi-supervised settings.
1 code implementation • ICCV 2019 • Pu Zhao, Sijia Liu, Pin-Yu Chen, Nghia Hoang, Kaidi Xu, Bhavya Kailkhura, Xue Lin
Robust machine learning is currently one of the most prominent topics, with the potential to shape a future of advanced AI platforms that perform well not only in average cases but also in worst-case or adverse situations.
no code implementations • 6 Jul 2019 • Shusen Liu, Bhavya Kailkhura, Donald Loveland, Yong Han
In this work, we propose an introspection technique for deep neural networks that relies on a generative model to instigate salient editing of the input image for model interpretation.
2 code implementations • NeurIPS 2021 • Yunhui Long, Boxin Wang, Zhuolin Yang, Bhavya Kailkhura, Aston Zhang, Carl A. Gunter, Bo Li
In particular, we train a student data generator with an ensemble of teacher discriminators and propose a novel private gradient aggregation mechanism to ensure differential privacy on all information that flows from teacher discriminators to the student generator.
no code implementations • 6 Jun 2019 • Bhavya Kailkhura, Jayaraman J. Thiagarajan, Qunwei Li, Peer-Timo Bremer
This paper provides a general framework to study the effect of sampling properties of training data on the generalization error of the learned machine learning (ML) models.
no code implementations • 5 Jan 2019 • Bhavya Kailkhura, Brian Gallagher, Sookyung Kim, Anna Hiszpanski, T. Yong-Jin Han
We also propose a transfer learning technique and show that the performance loss due to models' simplicity can be overcome by exploiting correlations among different material properties.
no code implementations • 22 Nov 2018 • Qunwei Li, Bhavya Kailkhura, Rushil Anirudh, Yi Zhou, Yingbin Liang, Pramod Varshney
Despite the growing interest in generative adversarial networks (GANs), training GANs remains a challenging problem, both from a theoretical and a practical standpoint.
no code implementations • 20 Nov 2018 • Rushil Anirudh, Jayaraman J. Thiagarajan, Bhavya Kailkhura, Timo Bremer
Solving inverse problems continues to be a central challenge in computer vision.
no code implementations • 9 Nov 2018 • Thomas A. Hogan, Bhavya Kailkhura
We study the problem of finding a universal (image-agnostic) perturbation to fool machine learning (ML) classifiers (e.g., neural nets, decision trees) in the hard-label black-box setting.
1 code implementation • 5 Sep 2018 • Gowtham Muniraju, Bhavya Kailkhura, Jayaraman J. Thiagarajan, Peer-Timo Bremer, Cihan Tepedelenlioglu, Andreas Spanias
Sampling one or more effective solutions from large search spaces is a recurring idea in machine learning, and sequential optimization has become a popular solution.
1 code implementation • NeurIPS 2018 • Sijia Liu, Bhavya Kailkhura, Pin-Yu Chen, Pai-Shun Ting, Shiyu Chang, Lisa Amini
As application demands for zeroth-order (gradient-free) optimization accelerate, the need for variance reduced and faster converging approaches is also intensifying.
no code implementations • 18 May 2018 • Rushil Anirudh, Jayaraman J. Thiagarajan, Bhavya Kailkhura, Timo Bremer
We solve this by making successive estimates on the model and the solution in an iterative fashion.
no code implementations • 29 Jan 2018 • Aditya Vempaty, Bhavya Kailkhura, Pramod K. Varshney
The emerging paradigm of Human-Machine Inference Networks (HuMaINs) combines complementary cognitive strengths of humans and machines in an intelligent manner to tackle various inference tasks and achieves higher performance than either humans or machines by themselves.
no code implementations • 16 Dec 2017 • Bhavya Kailkhura, Jayaraman J. Thiagarajan, Charvi Rastogi, Pramod K. Varshney, Peer-Timo Bremer
Third, we propose an efficient estimator to evaluate the space-filling properties of sample designs in arbitrary dimensions and use it to develop an optimization framework to generate high quality space-filling designs.
no code implementations • 14 Oct 2017 • Qunwei Li, Bhavya Kailkhura, Ryan Goldhahn, Priyadip Ray, Pramod K. Varshney
We also provide conditions on the erroneous updates for exact convergence to the optimal solution.
no code implementations • 14 Dec 2016 • Jayaraman J. Thiagarajan, Prasanna Sattigeri, Karthikeyan Natesan Ramamurthy, Bhavya Kailkhura
In this paper, we propose the use of quantile analysis to obtain local scale estimates for neighborhood graph construction.
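The quantile idea can be sketched as estimating a per-point scale from the quantile of its k-nearest-neighbor distances, which can then set per-point bandwidths when building the neighborhood graph. A simplified illustration, not the paper's full procedure:

```python
import numpy as np

def local_scales(X, k=3, q=0.5):
    """Per-point local scale: the q-quantile of each point's k smallest
    pairwise distances (self-distances excluded)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)          # exclude self-distance
    knn = np.sort(d, axis=1)[:, :k]      # k nearest-neighbor distances
    return np.quantile(knn, q, axis=1)
```

Points in dense regions get small scales and points in sparse regions get large ones, making the resulting graph adaptive to local density.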
no code implementations • 30 Nov 2016 • Qunwei Li, Bhavya Kailkhura, Jayaraman J. Thiagarajan, Zhenliang Zhang, Pramod K. Varshney
Influential node detection is a central research topic in social network analysis.
no code implementations • 22 Nov 2016 • Jayaraman J. Thiagarajan, Bhavya Kailkhura, Prasanna Sattigeri, Karthikeyan Natesan Ramamurthy
In this paper, we take a step in the direction of tackling the problem of interpretability without compromising the model accuracy.
no code implementations • 22 Jan 2016 • Prashant Khanduri, Bhavya Kailkhura, Jayaraman J. Thiagarajan, Pramod K. Varshney
This paper considers the problem of high dimensional signal detection in a large distributed network whose nodes can collaborate with their one-hop neighboring nodes (spatial collaboration).
no code implementations • 14 Apr 2015 • Bhavya Kailkhura, Swastik Brahma, Pramod K. Varshney
This paper considers the problem of detection in distributed networks in the presence of data falsification (Byzantine) attacks.