Search Results for author: Jennifer Dy

Found 41 papers, 15 papers with code

ADAPT to Robustify Prompt Tuning Vision Transformers

no code implementations · 19 Mar 2024 · Masih Eskandar, Tooba Imtiaz, Zifeng Wang, Jennifer Dy

The performance of deep models, including Vision Transformers, is known to be vulnerable to adversarial attacks.

Adversarial Defense

Multiverse at the Edge: Interacting Real World and Digital Twins for Wireless Beamforming

no code implementations · 10 May 2023 · Batool Salehi, Utku Demir, Debashri Roy, Suyash Pradhan, Jennifer Dy, Stratis Ioannidis, Kaushik Chowdhury

To achieve this, we go beyond instantiating a single twin and propose the 'Multiverse' paradigm, with several possible digital twins attempting to capture the real world at different levels of fidelity.

Decision Making · Self-Learning

Explanations of Black-Box Models based on Directional Feature Interactions

1 code implementation · ICLR 2022 · Aria Masoomi, Davin Hill, Zhonghui Xu, Craig P Hersh, Edwin K. Silverman, Peter J. Castaldi, Stratis Ioannidis, Jennifer Dy

As machine learning algorithms are deployed ubiquitously to a variety of domains, it is imperative to make these often black-box models transparent.

Geometry of Score Based Generative Models

no code implementations · 9 Feb 2023 · Sandesh Ghimire, Jinyang Liu, Armand Comas, Davin Hill, Aria Masoomi, Octavia Camps, Jennifer Dy

We demonstrate that looking from a geometric perspective enables us to answer many of these questions and provide new interpretations of some known results.

Bayesian Inference

Divide and Compose with Score Based Generative Models

no code implementations · 5 Feb 2023 · Sandesh Ghimire, Armand Comas, Davin Hill, Aria Masoomi, Octavia Camps, Jennifer Dy

Toward having more control over image manipulation and conditional generation, we propose to learn image components in an unsupervised manner so that we can compose those components to generate and manipulate images in an informed manner.

Disentanglement · Image Generation +1

High-precision regressors for particle physics

1 code implementation · 2 Feb 2023 · Fady Bishara, Ayan Paul, Jennifer Dy

Since the necessary number of data points per simulation is on the order of $10^9$ - $10^{12}$, machine learning regressors can be used in place of physics simulators to significantly reduce this computational burden.

Vocal Bursts Intensity Prediction
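
To illustrate the surrogate-modeling idea in the excerpt above, here is a minimal Python sketch (not taken from the paper): a regressor is fitted on a modest number of simulator evaluations and then queried in place of the expensive simulator. `expensive_simulator` is a hypothetical stand-in function, and the regressor choice is my own assumption.

```python
# Minimal sketch (illustrative, not the paper's regressors): fit a fast surrogate
# on simulator evaluations, then query the surrogate instead of the simulator.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def expensive_simulator(theta):               # placeholder for a physics simulator
    return np.sin(theta[:, 0]) * np.exp(-theta[:, 1] ** 2)

rng = np.random.default_rng(0)
theta_train = rng.uniform(-2, 2, size=(5000, 2))
y_train = expensive_simulator(theta_train)

surrogate = GradientBoostingRegressor().fit(theta_train, y_train)
theta_query = rng.uniform(-2, 2, size=(10, 2))
print(surrogate.predict(theta_query))         # cheap stand-in for new simulator runs
```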

QueryForm: A Simple Zero-shot Form Entity Query Framework

no code implementations · 14 Nov 2022 · Zifeng Wang, Zizhao Zhang, Jacob Devlin, Chen-Yu Lee, Guolong Su, Hao Zhang, Jennifer Dy, Vincent Perot, Tomas Pfister

Zero-shot transfer learning for document understanding is a crucial yet under-investigated scenario to help reduce the high cost involved in annotating document entities.

Document Understanding · Transfer Learning

Boundary-Aware Uncertainty for Feature Attribution Explainers

1 code implementation · 5 Oct 2022 · Davin Hill, Aria Masoomi, Max Torop, Sandesh Ghimire, Jennifer Dy

In this work we propose the Gaussian Process Explanation UnCertainty (GPEC) framework, which generates a unified uncertainty estimate combining decision boundary-aware uncertainty with explanation function approximation uncertainty.
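
As a rough illustration of one of the two components named above, explanation function approximation uncertainty, the sketch below fits a Gaussian process to toy attribution values over local perturbations and reads off the predictive standard deviation. This is my own assumption-laden example; it does not model GPEC's decision-boundary-aware kernel.

```python
# Minimal sketch (my illustration, not the GPEC implementation): GP predictive
# std over perturbed inputs as a proxy for explanation-approximation uncertainty.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
x0 = np.zeros(3)
X_local = x0 + 0.1 * rng.normal(size=(50, 3))               # perturbations around x0
attr = X_local[:, 0] ** 2 + 0.05 * rng.normal(size=50)      # toy attribution values

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2), alpha=1e-3).fit(X_local, attr)
mean, std = gp.predict(x0.reshape(1, -1), return_std=True)
print("attribution estimate:", mean[0], "uncertainty:", std[0])
```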

SparCL: Sparse Continual Learning on the Edge

1 code implementation · 20 Sep 2022 · Zifeng Wang, Zheng Zhan, Yifan Gong, Geng Yuan, Wei Niu, Tong Jian, Bin Ren, Stratis Ioannidis, Yanzhi Wang, Jennifer Dy

SparCL achieves both training acceleration and accuracy preservation through the synergy of three aspects: weight sparsity, data efficiency, and gradient sparsity.

Continual Learning
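
The sketch below is a minimal PyTorch-style illustration of two of the three aspects named above, weight sparsity and gradient sparsity, via a magnitude-based mask. It is not the SparCL implementation; the data-efficiency component and dynamic mask updates are omitted, and the sparsity level is an arbitrary example.

```python
# Minimal sketch (my illustration): prune a layer by weight magnitude, then keep
# gradients sparse by masking them with the same pattern during training.
import torch

layer = torch.nn.Linear(128, 64)
sparsity = 0.8                                        # keep the top 20% of weights

with torch.no_grad():
    w = layer.weight.abs().flatten()
    k = int(sparsity * w.numel())
    threshold = torch.kthvalue(w, k).values
    mask = (layer.weight.abs() > threshold).float()
    layer.weight.mul_(mask)                           # weight sparsity

def training_step(x, y, optimizer, loss_fn):
    optimizer.zero_grad()
    loss = loss_fn(layer(x), y)
    loss.backward()
    layer.weight.grad.mul_(mask)                      # gradient sparsity
    optimizer.step()
    return loss.item()
```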

Analyzing Explainer Robustness via Lipschitzness of Prediction Functions

no code implementations · 24 Jun 2022 · Zulqarnain Khan, Davin Hill, Aria Masoomi, Joshua Bone, Jennifer Dy

We provide lower bound guarantees on the astuteness of a variety of explainers (e.g., SHAP, RISE, CXPlain) given the Lipschitzness of the prediction function.
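
Since the guarantee hinges on the Lipschitzness of the prediction function, the following sketch (my own illustration, not the paper's procedure) empirically probes a local Lipschitz constant by sampling perturbations around a point; the toy prediction function is an assumption.

```python
# Minimal sketch: empirical local Lipschitz estimate of a prediction function f
# around a point x, via the largest observed |f(x') - f(x)| / ||x' - x||.
import numpy as np

def local_lipschitz_estimate(f, x, radius=0.1, n_samples=1000, seed=0):
    rng = np.random.default_rng(seed)
    fx, best = f(x), 0.0
    for _ in range(n_samples):
        delta = rng.normal(size=x.shape)
        delta *= radius * rng.random() / np.linalg.norm(delta)   # random point in the ball
        best = max(best, np.abs(f(x + delta) - fx) / np.linalg.norm(delta))
    return best

f = lambda x: np.tanh(x @ np.array([1.0, -2.0, 0.5]))            # toy prediction function
print(local_lipschitz_estimate(f, np.zeros(3)))
```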

Deep Layer-wise Networks Have Closed-Form Weights

no code implementations · 1 Feb 2022 · Chieh Wu, Aria Masoomi, Arthur Gretton, Jennifer Dy

There is currently a debate within the neuroscience community over the likelihood of the brain performing backpropagation (BP).

Deep Learning on Multimodal Sensor Data at the Wireless Edge for Vehicular Network

1 code implementation · 12 Jan 2022 · Batool Salehi, Guillem Reus-Muns, Debashri Roy, Zifeng Wang, Tong Jian, Jennifer Dy, Stratis Ioannidis, Kaushik Chowdhury

Beam selection for millimeter-wave links in a vehicular scenario is a challenging problem, as an exhaustive search among all candidate beam pairs cannot be assuredly completed within short contact times.

Edge-computing

Learning to Prompt for Continual Learning

4 code implementations · CVPR 2022 · Zifeng Wang, Zizhao Zhang, Chen-Yu Lee, Han Zhang, Ruoxi Sun, Xiaoqi Ren, Guolong Su, Vincent Perot, Jennifer Dy, Tomas Pfister

The mainstream paradigm behind continual learning has been to adapt the model parameters to non-stationary data distributions, where catastrophic forgetting is the central challenge.

Class Incremental Learning · Image Classification

Reliable Estimation of KL Divergence using a Discriminator in Reproducing Kernel Hilbert Space

no code implementations · NeurIPS 2021 · Sandesh Ghimire, Aria Masoomi, Jennifer Dy

To achieve this objective, we 1) present a novel construction of the discriminator in the Reproducing Kernel Hilbert Space (RKHS), 2) theoretically relate the error probability bound of the KL estimates to the complexity of the discriminator in the RKHS space, 3) present a scalable way to control the complexity (RKHS norm) of the discriminator for a reliable estimation of KL divergence, and 4) prove the consistency of the proposed estimator.

Learning Theory
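
A hedged sketch of the general recipe: a Donsker-Varadhan KL estimate with a discriminator built from random Fourier features (an RKHS approximation) and an explicit norm penalty as a stand-in for the RKHS-norm control described above. The construction and guarantees in the paper differ; distributions, feature count, and penalty weight here are illustrative assumptions.

```python
# Minimal sketch: KL(P||Q) via the Donsker-Varadhan bound with f(x) = w^T phi(x),
# phi = random Fourier features of a Gaussian kernel, plus a ||w||^2 penalty.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
xp = rng.normal(1.0, 1.0, size=(2000, 1))        # samples from P = N(1, 1)
xq = rng.normal(0.0, 1.0, size=(2000, 1))        # samples from Q = N(0, 1)

D, sigma = 100, 1.0
W = rng.normal(0, 1.0 / sigma, size=(1, D))
b = rng.uniform(0, 2 * np.pi, size=D)
phi = lambda x: np.sqrt(2.0 / D) * np.cos(x @ W + b)

lam = 1e-3                                       # complexity (norm) penalty
def neg_dv(w):
    fp, fq = phi(xp) @ w, phi(xq) @ w
    return -(fp.mean() - np.log(np.mean(np.exp(fq)))) + lam * w @ w

w_hat = minimize(neg_dv, np.zeros(D), method="L-BFGS-B").x
print("KL estimate:", -neg_dv(w_hat) + lam * w_hat @ w_hat)   # true KL(P||Q) = 0.5
```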

Deep Bayesian Unsupervised Lifelong Learning

1 code implementation · 13 Jun 2021 · Tingting Zhao, Zifeng Wang, Aria Masoomi, Jennifer Dy

We develop a fully Bayesian inference framework for ULL with a novel end-to-end Deep Bayesian Unsupervised Lifelong Learning (DBULL) algorithm, which can progressively discover new clusters from unlabelled data without forgetting the past while learning latent representations.

Bayesian Inference

Revisiting Hilbert-Schmidt Information Bottleneck for Adversarial Robustness

1 code implementation · NeurIPS 2021 · Zifeng Wang, Tong Jian, Aria Masoomi, Stratis Ioannidis, Jennifer Dy

We investigate the HSIC (Hilbert-Schmidt independence criterion) bottleneck as a regularizer for learning an adversarially robust deep neural network classifier.

Adversarial Robustness
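
For context, a minimal NumPy sketch of the (biased) empirical HSIC estimator that this line of work builds on, followed by a comment indicating how it might enter an HSIC-bottleneck-style objective. The Gaussian kernels and weights are illustrative assumptions, not the paper's settings.

```python
# Minimal sketch (not the paper's code): biased empirical HSIC estimator,
# HSIC(X, Y) ~= tr(K H L H) / (n - 1)^2, with Gaussian kernels.
import numpy as np

def gaussian_kernel(A, sigma=1.0):
    sq = np.sum(A ** 2, axis=1, keepdims=True)
    d2 = sq + sq.T - 2.0 * A @ A.T            # pairwise squared distances
    return np.exp(-d2 / (2.0 * sigma ** 2))

def hsic(X, Y, sigma=1.0):
    n = X.shape[0]
    K, L = gaussian_kernel(X, sigma), gaussian_kernel(Y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

# HSIC-bottleneck-style objective (illustrative weights lambda_x, lambda_y):
#   total_loss = task_loss + lambda_x * HSIC(Z, X) - lambda_y * HSIC(Z, Y)
# where Z are hidden activations, X the inputs, and Y the labels.
```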

On the Sample Complexity of Rank Regression from Pairwise Comparisons

no code implementations · 4 May 2021 · Berkan Kadioglu, Peng Tian, Jennifer Dy, Deniz Erdogmus, Stratis Ioannidis

We consider a rank regression setting, in which a dataset of $N$ samples with features in $\mathbb{R}^d$ is ranked by an oracle via $M$ pairwise comparisons.

Regression
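
A toy sketch of the setting above, using a standard Bradley-Terry-style logistic model on feature differences. This is an illustrative baseline under my own assumptions (linear scores, logistic oracle), not necessarily the estimator whose sample complexity the paper analyzes.

```python
# Minimal sketch: learn a score w^T x from M pairwise comparisons over
# N samples with d-dimensional features, via logistic regression on x_i - x_j.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
N, d, M = 200, 5, 1000
X = rng.normal(size=(N, d))
w_true = rng.normal(size=d)

i, j = rng.integers(0, N, M), rng.integers(0, N, M)
keep = i != j
i, j = i[keep], j[keep]
# Oracle says "i beats j" with probability sigmoid(w^T (x_i - x_j)).
p = 1 / (1 + np.exp(-(X[i] - X[j]) @ w_true))
y = (rng.random(p.shape) < p).astype(int)

clf = LogisticRegression(fit_intercept=False).fit(X[i] - X[j], y)
print("cosine(w_hat, w_true) =",
      clf.coef_.ravel() @ w_true /
      (np.linalg.norm(clf.coef_) * np.linalg.norm(w_true)))
```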

Machine Learning on Camera Images for Fast mmWave Beamforming

no code implementations · 15 Feb 2021 · Batool Salehi, Mauro Belgiovine, Sara Garcia Sanchez, Jennifer Dy, Stratis Ioannidis, Kaushik Chowdhury

Perfect alignment in chosen beam sectors at both transmit- and receive-nodes is required for beamforming in mmWave bands.

BIG-bench Machine Learning

Learn-Prune-Share for Lifelong Learning

1 code implementation · 13 Dec 2020 · Zifeng Wang, Tong Jian, Kaushik Chowdhury, Yanzhi Wang, Jennifer Dy, Stratis Ioannidis

In lifelong learning, we wish to maintain and update a model (e.g., a neural network classifier) in the presence of new classification tasks that arrive sequentially.

Open-World Class Discovery with Kernel Networks

1 code implementation · 13 Dec 2020 · Zifeng Wang, Batool Salehi, Andrey Gritsenko, Kaushik Chowdhury, Stratis Ioannidis, Jennifer Dy

We study an Open-World Class Discovery problem in which, given labeled training samples from old classes, we need to discover new classes from unlabeled test samples.

Instance-wise Feature Grouping

no code implementations · NeurIPS 2020 · Aria Masoomi, Chieh Wu, Tingting Zhao, Zifeng Wang, Peter Castaldi, Jennifer Dy

Moreover, the features that belong to each group, and the important feature groups, may vary per sample.

General Classification

Kernel Dependence Network

no code implementations · 4 Nov 2020 · Chieh Wu, Aria Masoomi, Arthur Gretton, Jennifer Dy

We propose a greedy strategy to spectrally train a deep network for multi-class classification.

Multi-class Classification

Deep Layer-wise Networks Have Closed-Form Weights

no code implementations · 15 Jun 2020 · Chieh Wu, Aria Masoomi, Arthur Gretton, Jennifer Dy

There is currently a debate within the neuroscience community over the likelihood of the brain performing backpropagation (BP).

Multi-class Classification

Deep Markov Spatio-Temporal Factorization

1 code implementation · 22 Mar 2020 · Amirreza Farnoosh, Behnaz Rezaei, Eli Zachary Sennesh, Zulqarnain Khan, Jennifer Dy, Ajay Satpute, J. Benjamin Hutchinson, Jan-Willem van de Meent, Sarah Ostadabbas

This results in a flexible family of hierarchical deep generative factor analysis models that can be extended to perform time series clustering or perform factor analysis in the presence of a control signal.

Clustering · Time Series +3

Weighting Is Worth the Wait: Bayesian Optimization with Importance Sampling

no code implementations · 23 Feb 2020 · Setareh Ariafar, Zelda Mariet, Ehsan Elhamifar, Dana Brooks, Jennifer Dy, Jasper Snoek

Casting hyperparameter search as a multi-task Bayesian optimization problem over both hyperparameters and importance sampling design achieves the best of both worlds: by learning a parameterization of IS that trades-off evaluation complexity and quality, we improve upon Bayesian optimization state-of-the-art runtime and final validation error across a variety of datasets and complex neural architectures.

Bayesian Optimization

Segmentation of Cellular Patterns in Confocal Images of Melanocytic Lesions in vivo via a Multiscale Encoder-Decoder Network (MED-Net)

no code implementations · 3 Jan 2020 · Kivanc Kose, Alican Bozkurt, Christi Alessi-Fox, Melissa Gill, Caterina Longo, Giovanni Pellacani, Jennifer Dy, Dana H. Brooks, Milind Rajadhyaksha

We trained and tested our model on non-overlapping partitions of 117 reflectance confocal microscopy (RCM) mosaics of melanocytic lesions, an extensive dataset for this application, collected at four clinics in the US, and two in Italy.

Segmentation · Semantic Segmentation +1

Solving Interpretable Kernel Dimensionality Reduction

no code implementations · NeurIPS 2019 · Chieh Wu, Jared Miller, Yale Chang, Mario Sznaier, Jennifer Dy

While KDR methods can be easily solved by keeping the most dominant eigenvectors of the kernel matrix, the resulting features are no longer easy to interpret.

Clustering · Dimensionality Reduction
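
A minimal sketch of the spectral solution mentioned in the excerpt above (essentially kernel PCA; the Gaussian kernel is my choice): center the kernel matrix and keep its top eigenvectors.

```python
# Minimal sketch: kernel dimensionality reduction via the top eigenvectors
# of a centered Gaussian kernel matrix (kernel-PCA-style embedding).
import numpy as np

def kernel_dim_reduction(X, q=2, sigma=1.0):
    n = X.shape[0]
    sq = np.sum(X ** 2, axis=1, keepdims=True)
    K = np.exp(-(sq + sq.T - 2.0 * X @ X.T) / (2.0 * sigma ** 2))
    H = np.eye(n) - np.ones((n, n)) / n
    Kc = H @ K @ H                                   # center the kernel matrix
    vals, vecs = np.linalg.eigh(Kc)                  # eigenvalues in ascending order
    return vecs[:, -q:] * np.sqrt(np.maximum(vals[-q:], 0.0))   # n x q embedding

X = np.random.default_rng(0).normal(size=(100, 10))
Z = kernel_dim_reduction(X, q=2)
print(Z.shape)  # (100, 2)
```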

Spectral Non-Convex Optimization for Dimension Reduction with Hilbert-Schmidt Independence Criterion

no code implementations · 6 Sep 2019 · Chieh Wu, Jared Miller, Yale Chang, Mario Sznaier, Jennifer Dy

The Hilbert-Schmidt Independence Criterion (HSIC) is a kernel dependence measure that has applications in various aspects of machine learning.

Clustering · Dimensionality Reduction
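
For reference, the standard (biased) empirical estimator of HSIC used throughout this literature, for kernel matrices $K$ and $L$ computed on $n$ paired samples (a textbook formula, not specific to this paper):

$$\widehat{\mathrm{HSIC}}(X,Y) \;=\; \frac{1}{(n-1)^2}\,\operatorname{tr}\!\left(K H L H\right), \qquad H \;=\; I_n - \tfrac{1}{n}\mathbf{1}\mathbf{1}^{\top}.$$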

Solving Interpretable Kernel Dimension Reduction

no code implementations · 6 Sep 2019 · Chieh Wu, Jared Miller, Yale Chang, Mario Sznaier, Jennifer Dy

While KDR methods can be easily solved by keeping the most dominant eigenvectors of the kernel matrix, the resulting features are no longer easy to interpret.

Clustering · Dimensionality Reduction

Deep Kernel Learning for Clustering

no code implementations · 9 Aug 2019 · Chieh Wu, Zulqarnain Khan, Yale Chang, Stratis Ioannidis, Jennifer Dy

We propose a deep learning approach for discovering kernels tailored to identifying clusters over sample data.

Clustering · Deep Clustering

Accelerated Experimental Design for Pairwise Comparisons

1 code implementation · 18 Jan 2019 · Yuan Guo, Jennifer Dy, Deniz Erdogmus, Jayashree Kalpathy-Cramer, Susan Ostmo, J. Peter Campbell, Michael F. Chiang, Stratis Ioannidis

Pairwise comparison labels are more informative and less variable than class labels, but generating them poses a challenge: their number grows quadratically in the dataset size.

Experimental Design
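
To make the quadratic growth mentioned in the excerpt concrete (illustrative arithmetic, not a figure from the paper):

$$\#\{\text{pairs}\} = \binom{N}{2} = \frac{N(N-1)}{2}, \qquad N = 10^{4} \;\Rightarrow\; \binom{N}{2} \approx 5\times 10^{7} \text{ comparisons}.$$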

Quantifying Uncertainty in Discrete-Continuous and Skewed Data with Bayesian Deep Learning

1 code implementation · 13 Feb 2018 · Thomas Vandal, Evan Kodra, Jennifer Dy, Sangram Ganguly, Ramakrishna Nemani, Auroop R. Ganguly

Furthermore, we find that the lognormal distribution, which can handle skewed distributions, produces quality uncertainty estimates at the extremes.

Management · Super-Resolution +1
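
A minimal PyTorch-style sketch of the lognormal idea in the excerpt, under my own assumptions about the model head: predict $(\mu, \log\sigma)$ for a strictly positive, skewed target and minimize the lognormal negative log-likelihood (up to an additive constant). It is not the paper's Bayesian deep learning model.

```python
# Minimal sketch: a head predicting lognormal parameters, trained with the
# lognormal NLL: log y + log sigma + (log y - mu)^2 / (2 sigma^2) + const.
import torch

head = torch.nn.Linear(16, 2)                     # features -> (mu, log_sigma)

def lognormal_nll(params, y, eps=1e-6):
    mu, log_sigma = params[:, 0], params[:, 1]
    sigma = torch.exp(log_sigma)
    log_y = torch.log(y + eps)
    return (log_y + log_sigma + (log_y - mu) ** 2 / (2 * sigma ** 2)).mean()

features = torch.randn(32, 16)
y = torch.exp(torch.randn(32))                    # skewed, positive targets
loss = lognormal_nll(head(features), y)
loss.backward()
```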

Evaluating Crowdsourcing Participants in the Absence of Ground-Truth

no code implementations · 30 May 2016 · Ramanathan Subramanian, Romer Rosales, Glenn Fung, Jennifer Dy

Given a supervised/semi-supervised learning scenario where multiple annotators are available, we consider the problem of identification of adversarial or unreliable annotators.
