no code implementations • 2 Nov 2024 • Davin Hill, Josh Bone, Aria Masoomi, Max Torop, Jennifer Dy
Explainability methods are often challenging to evaluate and compare.
no code implementations • 19 Mar 2024 • Masih Eskandar, Tooba Imtiaz, Zifeng Wang, Jennifer Dy
The performance of deep models, including Vision Transformers, is known to be vulnerable to adversarial attacks.
no code implementations • 10 May 2023 • Batool Salehi, Utku Demir, Debashri Roy, Suyash Pradhan, Jennifer Dy, Stratis Ioannidis, Kaushik Chowdhury
To achieve this, we go beyond instantiating a single twin and propose the 'Multiverse' paradigm, with several possible digital twins attempting to capture the real world at different levels of fidelity.
no code implementations • 30 Apr 2023 • Zifeng Wang, Zheng Zhan, Yifan Gong, Yucai Shao, Stratis Ioannidis, Yanzhi Wang, Jennifer Dy
Rehearsal-based approaches are a mainstay of continual learning (CL).
1 code implementation • ICLR 2022 • Aria Masoomi, Davin Hill, Zhonghui Xu, Craig P Hersh, Edwin K. Silverman, Peter J. Castaldi, Stratis Ioannidis, Jennifer Dy
As machine learning algorithms are deployed ubiquitously to a variety of domains, it is imperative to make these often black-box models transparent.
no code implementations • 9 Feb 2023 • Sandesh Ghimire, Jinyang Liu, Armand Comas, Davin Hill, Aria Masoomi, Octavia Camps, Jennifer Dy
We demonstrate that looking from a geometric perspective enables us to answer many of these questions and provide new interpretations of some known results.
no code implementations • 5 Feb 2023 • Sandesh Ghimire, Armand Comas, Davin Hill, Aria Masoomi, Octavia Camps, Jennifer Dy
Toward greater control over image manipulation and conditional generation, we propose to learn image components in an unsupervised manner so that we can compose those components to generate and manipulate images in an informed manner.
1 code implementation • 2 Feb 2023 • Fady Bishara, Ayan Paul, Jennifer Dy
Since the necessary number of data points per simulation is on the order of $10^9$ to $10^{12}$, machine learning regressors can be used in place of physics simulators to significantly reduce this computational burden.
no code implementations • 14 Dec 2022 • Tooba Imtiaz, Morgan Kohler, Jared Miller, Zifeng Wang, Mario Sznaier, Octavia Camps, Jennifer Dy
Adversarial attacks hamper the decision-making ability of neural networks by perturbing the input signal.
no code implementations • 14 Nov 2022 • Zifeng Wang, Zizhao Zhang, Jacob Devlin, Chen-Yu Lee, Guolong Su, Hao Zhang, Jennifer Dy, Vincent Perot, Tomas Pfister
Zero-shot transfer learning for document understanding is a crucial yet under-investigated scenario to help reduce the high cost involved in annotating document entities.
1 code implementation • 9 Oct 2022 • Tong Jian, Zifeng Wang, Yanzhi Wang, Jennifer Dy, Stratis Ioannidis
Adversarial pruning compresses models while preserving robustness.
1 code implementation • 5 Oct 2022 • Davin Hill, Aria Masoomi, Max Torop, Sandesh Ghimire, Jennifer Dy
In this work we propose the Gaussian Process Explanation UnCertainty (GPEC) framework, which generates a unified uncertainty estimate combining decision boundary-aware uncertainty with explanation function approximation uncertainty.
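The following is a minimal sketch of only one ingredient of this idea: fitting a Gaussian process to the outputs of an explanation function and reading its predictive standard deviation as approximation uncertainty. The decision-boundary-aware kernel that GPEC combines with this is not reproduced here, and all names and values are illustrative assumptions.

```python
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
import numpy as np

# Stand-in attribution values at a handful of explained points (hypothetical data).
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(40, 1))                              # points where explanations were computed
attributions = np.sin(X).ravel() + 0.05 * rng.normal(size=40)     # made-up per-feature attributions

# Fit a GP to the explanation function and query its predictive uncertainty.
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-3).fit(X, attributions)
X_query = np.linspace(-5, 5, 200)[:, None]
mean, std = gp.predict(X_query, return_std=True)   # std: uncertainty about the explanation itself
```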
1 code implementation • 20 Sep 2022 • Zifeng Wang, Zheng Zhan, Yifan Gong, Geng Yuan, Wei Niu, Tong Jian, Bin Ren, Stratis Ioannidis, Yanzhi Wang, Jennifer Dy
SparCL achieves both training acceleration and accuracy preservation through the synergy of three aspects: weight sparsity, data efficiency, and gradient sparsity.
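A toy sketch of the weight- and gradient-sparsity ingredients follows; SparCL's actual mask-update and data-selection policies are not reproduced, and the mask density, model, and hyperparameters are illustrative.

```python
import torch

# Keep a fixed binary mask that zeroes out most weights, and sparsify the
# gradient with the same mask before each update.
model = torch.nn.Linear(100, 10)
mask = (torch.rand_like(model.weight) < 0.2).float()    # keep ~20% of weights
opt = torch.optim.SGD(model.parameters(), lr=0.1)

x, y = torch.randn(32, 100), torch.randint(0, 10, (32,))
with torch.no_grad():
    model.weight *= mask                                 # start from a sparse network

for _ in range(10):
    opt.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    model.weight.grad *= mask                            # gradient sparsity
    opt.step()
    with torch.no_grad():
        model.weight *= mask                             # keep weights sparse after the step
```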
no code implementations • 24 Jun 2022 • Zulqarnain Khan, Davin Hill, Aria Masoomi, Joshua Bone, Jennifer Dy
We provide lower bound guarantees on the astuteness of a variety of explainers (e.g., SHAP, RISE, CXPlain) given the Lipschitzness of the prediction function.
3 code implementations • 10 Apr 2022 • Zifeng Wang, Zizhao Zhang, Sayna Ebrahimi, Ruoxi Sun, Han Zhang, Chen-Yu Lee, Xiaoqi Ren, Guolong Su, Vincent Perot, Jennifer Dy, Tomas Pfister
Continual learning aims to enable a single model to learn a sequence of tasks without catastrophic forgetting.
no code implementations • 1 Feb 2022 • Chieh Wu, Aria Masoomi, Arthur Gretton, Jennifer Dy
There is currently a debate within the neuroscience community over the likelihood of the brain performing backpropagation (BP).
1 code implementation • 12 Jan 2022 • Batool Salehi, Guillem Reus-Muns, Debashri Roy, Zifeng Wang, Tong Jian, Jennifer Dy, Stratis Ioannidis, Kaushik Chowdhury
Beam selection for millimeter-wave links in a vehicular scenario is a challenging problem, as an exhaustive search among all candidate beam pairs cannot be assuredly completed within short contact times.
5 code implementations • CVPR 2022 • Zifeng Wang, Zizhao Zhang, Chen-Yu Lee, Han Zhang, Ruoxi Sun, Xiaoqi Ren, Guolong Su, Vincent Perot, Jennifer Dy, Tomas Pfister
The mainstream paradigm behind continual learning has been to adapt the model parameters to non-stationary data distributions, where catastrophic forgetting is the central challenge.
no code implementations • 8 Nov 2021 • Max Torop, Sandesh Ghimire, Wenqian Liu, Dana H. Brooks, Octavia Camps, Milind Rajadhyaksha, Jennifer Dy, Kivanc Kose
Few existing works demonstrate the efficacy of unsupervised Out-of-Distribution (OOD) methods on complex medical data.
no code implementations • NeurIPS 2021 • Sandesh Ghimire, Aria Masoomi, Jennifer Dy
To achieve this objective, we 1) present a novel construction of the discriminator in the Reproducing Kernel Hilbert Space (RKHS), 2) theoretically relate the error probability bound of the KL estimates to the complexity of the discriminator in the RKHS, 3) present a scalable way to control the complexity (RKHS norm) of the discriminator for a reliable estimation of KL divergence, and 4) prove the consistency of the proposed estimator.
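A hedged sketch of the general recipe, not the paper's exact construction: estimate KL via the Donsker-Varadhan bound with a discriminator expanded over kernel landmarks, penalizing its RKHS norm to control complexity. The landmarks, bandwidth, and penalty weight below are illustrative assumptions.

```python
import numpy as np

# Discriminator f(x) = sum_j alpha_j k(x, z_j); complexity = alpha^T K_zz alpha.
rng = np.random.default_rng(0)
xp = rng.normal(1.0, 1.0, size=(1000, 1))   # samples from P = N(1, 1)
xq = rng.normal(0.0, 1.0, size=(1000, 1))   # samples from Q = N(0, 1)
z = rng.normal(0.5, 1.5, size=(50, 1))      # kernel landmarks

def k(a, b, sigma=1.0):
    return np.exp(-(a - b.T) ** 2 / (2 * sigma ** 2))

Kp, Kq, Kzz = k(xp, z), k(xq, z), k(z, z)
alpha, lam, lr = np.zeros(50), 0.1, 0.05
for _ in range(300):
    fq = Kq @ alpha
    w = np.exp(fq - fq.max()); w /= w.sum()                   # softmax weights for the log-mean-exp gradient
    grad = Kp.mean(axis=0) - Kq.T @ w - 2 * lam * (Kzz @ alpha)
    alpha += lr * grad                                        # ascend the penalized DV bound

dv = (Kp @ alpha).mean() - np.log(np.mean(np.exp(Kq @ alpha)))
print(dv)   # regularized (hence slightly biased) estimate; true KL(N(1,1) || N(0,1)) = 0.5
```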
1 code implementation • 13 Jun 2021 • Tingting Zhao, Zifeng Wang, Aria Masoomi, Jennifer Dy
We develop a fully Bayesian inference framework for ULL with a novel end-to-end Deep Bayesian Unsupervised Lifelong Learning (DBULL) algorithm, which can progressively discover new clusters from unlabelled data without forgetting the past, while learning latent representations.
1 code implementation • NeurIPS 2021 • Zifeng Wang, Tong Jian, Aria Masoomi, Stratis Ioannidis, Jennifer Dy
We investigate the HSIC (Hilbert-Schmidt independence criterion) bottleneck as a regularizer for learning an adversarially robust deep neural network classifier.
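As a rough illustration of the idea (not the paper's implementation), the sketch below adds an HSIC penalty between hidden activations and inputs, and an HSIC reward between hidden activations and labels, to a standard cross-entropy loss; the coefficients and RBF bandwidth are illustrative assumptions.

```python
import torch

def rbf(a, sigma=1.0):
    # RBF kernel matrix from pairwise distances.
    d2 = torch.cdist(a, a) ** 2
    return torch.exp(-d2 / (2 * sigma ** 2))

def hsic(a, b, sigma=1.0):
    # Biased empirical HSIC estimator between two batches of vectors.
    n = a.shape[0]
    h = torch.eye(n) - torch.ones(n, n) / n
    return torch.trace(rbf(a, sigma) @ h @ rbf(b, sigma) @ h) / (n - 1) ** 2

def hsic_bottleneck_loss(logits, z, x, y_onehot, lambda_x=1e-3, lambda_y=1e-2):
    # Cross-entropy plus: penalize dependence of the hidden code z on the raw
    # input, reward dependence of z on the labels.
    ce = torch.nn.functional.cross_entropy(logits, y_onehot.argmax(dim=1))
    return ce + lambda_x * hsic(z, x.flatten(1)) - lambda_y * hsic(z, y_onehot)

# Shape check with random tensors standing in for a real model's activations.
x = torch.randn(64, 3, 8, 8)
y_onehot = torch.nn.functional.one_hot(torch.randint(0, 10, (64,)), 10).float()
z, logits = torch.randn(64, 32), torch.randn(64, 10)
print(hsic_bottleneck_loss(logits, z, x, y_onehot))
```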
no code implementations • 4 May 2021 • Berkan Kadioglu, Peng Tian, Jennifer Dy, Deniz Erdogmus, Stratis Ioannidis
We consider a rank regression setting, in which a dataset of $N$ samples with features in $\mathbb{R}^d$ is ranked by an oracle via $M$ pairwise comparisons.
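A minimal sketch of this setting under a Bradley-Terry-style model (assumed here, not necessarily the paper's estimator): learn a linear score so that the item the oracle prefers in each pairwise comparison scores higher. All sizes and names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, d, M = 500, 10, 2000
X = rng.normal(size=(N, d))
w_true = rng.normal(size=d)
scores = X @ w_true

# M random pairs (i, j); label = 1 if item i is ranked above item j.
pairs = rng.integers(0, N, size=(M, 2))
y = (scores[pairs[:, 0]] > scores[pairs[:, 1]]).astype(float)

w, lr = np.zeros(d), 0.1
for _ in range(500):
    diff = X[pairs[:, 0]] - X[pairs[:, 1]]        # feature difference per pair
    p = 1.0 / (1.0 + np.exp(-diff @ w))           # P(i preferred over j)
    grad = diff.T @ (p - y) / M                   # logistic-loss gradient
    w -= lr * grad

print(np.corrcoef(X @ w, scores)[0, 1])           # how well the learned ranking matches the oracle
```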
no code implementations • 15 Feb 2021 • Batool Salehi, Mauro Belgiovine, Sara Garcia Sanchez, Jennifer Dy, Stratis Ioannidis, Kaushik Chowdhury
Perfect alignment in chosen beam sectors at both transmit and receive nodes is required for beamforming in mmWave bands.
1 code implementation • 13 Dec 2020 • Zifeng Wang, Tong Jian, Kaushik Chowdhury, Yanzhi Wang, Jennifer Dy, Stratis Ioannidis
In lifelong learning, we wish to maintain and update a model (e.g., a neural network classifier) in the presence of new classification tasks that arrive sequentially.
1 code implementation • 13 Dec 2020 • Zifeng Wang, Batool Salehi, Andrey Gritsenko, Kaushik Chowdhury, Stratis Ioannidis, Jennifer Dy
We study an Open-World Class Discovery problem in which, given labeled training samples from old classes, we need to discover new classes from unlabeled test samples.
no code implementations • NeurIPS 2020 • Aria Masoomi, Chieh Wu, Tingting Zhao, Zifeng Wang, Peter Castaldi, Jennifer Dy
Moreover, the features that belong to each group, as well as which feature groups are important, may vary per sample.
no code implementations • 4 Nov 2020 • Chieh Wu, Aria Masoomi, Arthur Gretton, Jennifer Dy
We propose a greedy strategy to spectrally train a deep network for multi-class classification.
no code implementations • 15 Jun 2020 • Chieh Wu, Aria Masoomi, Arthur Gretton, Jennifer Dy
There is currently a debate within the neuroscience community over the likelihood of the brain performing backpropagation (BP).
1 code implementation • 22 Mar 2020 • Amirreza Farnoosh, Behnaz Rezaei, Eli Zachary Sennesh, Zulqarnain Khan, Jennifer Dy, Ajay Satpute, J. Benjamin Hutchinson, Jan-Willem van de Meent, Sarah Ostadabbas
This results in a flexible family of hierarchical deep generative factor analysis models that can be extended to perform time series clustering or factor analysis in the presence of a control signal.
no code implementations • 23 Feb 2020 • Setareh Ariafar, Zelda Mariet, Ehsan Elhamifar, Dana Brooks, Jennifer Dy, Jasper Snoek
Casting hyperparameter search as a multi-task Bayesian optimization problem over both hyperparameters and importance sampling design achieves the best of both worlds: by learning a parameterization of IS that trades off evaluation complexity and quality, we improve upon the state-of-the-art Bayesian optimization runtime and final validation error across a variety of datasets and complex neural architectures.
no code implementations • 3 Jan 2020 • Kivanc Kose, Alican Bozkurt, Christi Alessi-Fox, Melissa Gill, Caterina Longo, Giovanni Pellacani, Jennifer Dy, Dana H. Brooks, Milind Rajadhyaksha
We trained and tested our model on non-overlapping partitions of 117 reflectance confocal microscopy (RCM) mosaics of melanocytic lesions, an extensive dataset for this application, collected at four clinics in the US and two in Italy.
no code implementations • NeurIPS 2019 • Chieh Wu, Jared Miller, Yale Chang, Mario Sznaier, Jennifer Dy
While KDR methods can be easily solved by keeping the most dominant eigenvectors of the kernel matrix, their features are no longer easy to interpret.
no code implementations • 6 Sep 2019 • Chieh Wu, Jared Miller, Yale Chang, Mario Sznaier, Jennifer Dy
While KDR methods can be easily solved by keeping the most dominant eigenvectors of the kernel matrix, their features are no longer easy to interpret.
no code implementations • 6 Sep 2019 • Chieh Wu, Jared Miller, Yale Chang, Mario Sznaier, Jennifer Dy
The Hilbert Schmidt Independence Criterion (HSIC) is a kernel dependence measure that has applications in various aspects of machine learning.
no code implementations • 9 Aug 2019 • Chieh Wu, Zulqarnain Khan, Yale Chang, Stratis Ioannidis, Jennifer Dy
We propose a deep learning approach for discovering kernels tailored to identifying clusters over sample data.
no code implementations • NeurIPS 2020 • Eli Sennesh, Zulqarnain Khan, Yiyu Wang, Jennifer Dy, Ajay B. Satpute, J. Benjamin Hutchinson, Jan-Willem van de Meent
Neuroimaging studies produce gigabytes of spatio-temporal data for a small number of participants and stimuli.
1 code implementation • 18 Jan 2019 • Yuan Guo, Jennifer Dy, Deniz Erdogmus, Jayashree Kalpathy-Cramer, Susan Ostmo, J. Peter Campbell, Michael F. Chiang, Stratis Ioannidis
Pairwise comparison labels are more informative and less variable than class labels, but generating them poses a challenge: their number grows quadratically in the dataset size.
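To make the quadratic growth concrete: $N$ items admit $\binom{N}{2} = N(N-1)/2$ distinct pairs, so $N = 10{,}000$ samples already yield roughly $5 \times 10^{7}$ candidate comparisons.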
no code implementations • 6 Nov 2018 • Szu-Yeu Hu, Andrew Beers, Ken Chang, Kathi Höbel, J. Peter Campbell, Deniz Erdogumus, Stratis Ioannidis, Jennifer Dy, Michael F. Chiang, Jayashree Kalpathy-Cramer, James M. Brown
In this paper, we propose a new pre-training scheme for U-net based image segmentation.
no code implementations • 6 Apr 2018 • Babak Esmaeili, Hao Wu, Sarthak Jain, Alican Bozkurt, N. Siddharth, Brooks Paige, Dana H. Brooks, Jennifer Dy, Jan-Willem van de Meent
Deep latent-variable models learn representations of high-dimensional data in an unsupervised manner.
1 code implementation • 13 Feb 2018 • Thomas Vandal, Evan Kodra, Jennifer Dy, Sangram Ganguly, Ramakrishna Nemani, Auroop R. Ganguly
Furthermore, we find that the lognormal distribution, which can handle skewed distributions, produces quality uncertainty estimates at the extremes.
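As a small illustration of why a lognormal suits skewed, non-negative targets such as precipitation (a toy example, not the paper's downscaling model): fit a Gaussian to the log of the data and read tail quantiles from the implied lognormal.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.lognormal(mean=1.0, sigma=0.8, size=5000)   # synthetic skewed, non-negative data

# Estimate the underlying normal parameters from log-data, then query quantiles.
mu, sigma = np.log(x).mean(), np.log(x).std()
dist = stats.lognorm(s=sigma, scale=np.exp(mu))
print(dist.ppf([0.05, 0.5, 0.95]))                  # predictive interval, including the heavy upper tail
```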
no code implementations • 30 May 2016 • Ramanathan Subramanian, Romer Rosales, Glenn Fung, Jennifer Dy
Given a supervised/semi-supervised learning scenario where multiple annotators are available, we consider the problem of identification of adversarial or unreliable annotators.