Search Results for author: Jesse C. Cresswell

Found 17 papers, 13 papers with code

Deep Generative Models through the Lens of the Manifold Hypothesis: A Survey and New Connections

no code implementations • 3 Apr 2024 • Gabriel Loaiza-Ganem, Brendan Leigh Ross, Rasa Hosseinzadeh, Anthony L. Caterini, Jesse C. Cresswell

This manifold lens provides both clarity as to why some DGMs (e.g. diffusion models and some generative adversarial networks) empirically surpass others (e.g. likelihood-based models such as variational autoencoders, normalizing flows, or energy-based models) at sample generation, and guidance for devising more performant DGMs.

A Geometric Explanation of the Likelihood OOD Detection Paradox

1 code implementation • 27 Mar 2024 • Hamidreza Kamkari, Brendan Leigh Ross, Jesse C. Cresswell, Anthony L. Caterini, Rahul G. Krishnan, Gabriel Loaiza-Ganem

We also show that this scenario can be identified through local intrinsic dimension (LID) estimation, and propose a method for OOD detection which pairs the likelihoods and LID estimates obtained from a pre-trained DGM.
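
A minimal sketch of how a likelihood and a local intrinsic dimension (LID) estimate might be paired into an OOD decision. The callables `log_likelihood` and `estimate_lid`, and the dual-threshold rule, are illustrative stand-ins rather than the paper's exact procedure.

```python
def paired_ood_signals(x, log_likelihood, estimate_lid):
    """Return the two signals that are paired for OOD detection.

    log_likelihood(x): log p(x) under a pre-trained deep generative model.
    estimate_lid(x):   estimated local intrinsic dimension around x.
    Both callables are hypothetical stand-ins for trained components.
    """
    return log_likelihood(x), estimate_lid(x)


def flag_ood(ll, lid, ll_threshold, lid_threshold):
    # Illustrative rule: treat a point as in-distribution only when both its
    # likelihood and its local intrinsic dimension are sufficiently high, so
    # that high-likelihood regions of unusually low dimension are still caught.
    return ll < ll_threshold or lid < lid_threshold
```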

Conformal Prediction Sets Improve Human Decision Making

1 code implementation • 24 Jan 2024 • Jesse C. Cresswell, Yi Sui, Bhargava Kumar, Noël Vouitsis

In response to everyday queries, humans explicitly signal uncertainty and offer alternative answers when they are unsure.

Conformal Prediction • Decision Making
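
As background for the title, a minimal sketch of split conformal prediction for classification, the standard recipe for producing prediction sets with a marginal coverage guarantee; the specific nonconformity score and the decision-making protocol studied in the paper are not reproduced here.

```python
import numpy as np

def conformal_prediction_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction for classification.

    cal_probs:  (n, K) predicted class probabilities on a calibration set
    cal_labels: (n,)   true labels for the calibration set
    test_probs: (m, K) predicted class probabilities for test points
    Returns one prediction set per test point with ~(1 - alpha) coverage.
    """
    n = len(cal_labels)
    # Nonconformity score: one minus the probability of the true class.
    cal_scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile of the calibration scores.
    q_level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    qhat = np.quantile(cal_scores, q_level, method="higher")
    # Include every class whose score falls below the calibrated threshold.
    return [np.where(1.0 - p <= qhat)[0] for p in test_probs]
```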

Self-supervised Representation Learning From Random Data Projectors

1 code implementation • 11 Oct 2023 • Yi Sui, Tongzi Wu, Jesse C. Cresswell, Ga Wu, George Stein, Xiao Shi Huang, Xiaochen Zhang, Maksims Volkovs

Self-supervised representation learning (SSRL) has advanced considerably by exploiting the transformation invariance assumption under artificially designed data augmentations.

Data Augmentation • Representation Learning
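
A toy sketch of the general idea suggested by the title: learning a shared representation by regressing several fixed random projections of the raw input, with no handcrafted augmentations. The architecture, loss, and training step below are illustrative assumptions, not the paper's method.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
d_in, d_proj, d_rep, n_heads = 32, 8, 16, 4

# Fixed random projectors applied to the raw data (no handcrafted augmentations).
projectors = [torch.randn(d_in, d_proj) for _ in range(n_heads)]

encoder = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(), nn.Linear(64, d_rep))
heads = nn.ModuleList(nn.Linear(d_rep, d_proj) for _ in range(n_heads))
opt = torch.optim.Adam(list(encoder.parameters()) + list(heads.parameters()), lr=1e-3)

x = torch.randn(256, d_in)          # stand-in batch of raw data
z = encoder(x)                      # shared representation
# Each head tries to reproduce one random projection of the input from z.
loss = sum(((head(z) - x @ P) ** 2).mean() for head, P in zip(heads, projectors))
opt.zero_grad(); loss.backward(); opt.step()
```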

Augment then Smooth: Reconciling Differential Privacy with Certified Robustness

no code implementations • 14 Jun 2023 • Jiapeng Wu, Atiyeh Ashari Ghomi, David Glukhov, Jesse C. Cresswell, Franziska Boenisch, Nicolas Papernot

Differential privacy and randomized smoothing are effective defenses that provide certifiable guarantees for each of these threats; however, it is not well understood how implementing either defense impacts the other.
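
For context on one of the two defenses named above, a minimal sketch of randomized smoothing at prediction time (Gaussian noise plus majority vote); the certification radius and any interaction with differential privacy are omitted, and `base_classifier` is a stand-in for any trained model.

```python
import numpy as np

def smoothed_predict(x, base_classifier, sigma=0.25, n_samples=100, seed=0):
    """Randomized smoothing: classify many noisy copies of x and return
    the majority-vote label. `base_classifier` maps a batch of inputs to
    integer class labels."""
    rng = np.random.default_rng(seed)
    noisy = x[None, :] + sigma * rng.standard_normal((n_samples, x.shape[0]))
    votes = np.bincount(base_classifier(noisy))
    return int(np.argmax(votes))
```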

Exposing flaws of generative model evaluation metrics and their unfair treatment of diffusion models

2 code implementations • NeurIPS 2023 • George Stein, Jesse C. Cresswell, Rasa Hosseinzadeh, Yi Sui, Brendan Leigh Ross, Valentin Villecroze, Zhaoyan Liu, Anthony L. Caterini, J. Eric T. Taylor, Gabriel Loaiza-Ganem

Comparing human evaluations against 17 modern metrics for assessing the overall performance, fidelity, diversity, rarity, and memorization of generative models, we find that the state-of-the-art perceptual realism of diffusion models, as judged by humans, is not reflected in commonly reported metrics such as FID.

Memorization
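
For reference, FID compares Gaussians fitted to feature embeddings (e.g. Inception-v3 activations) of real and generated samples via the Fréchet distance; a minimal sketch given pre-extracted features:

```python
import numpy as np
from scipy import linalg

def fid(real_feats, gen_feats):
    """Frechet Inception Distance between two sets of feature vectors,
    each modelled as a multivariate Gaussian."""
    mu_r, mu_g = real_feats.mean(axis=0), gen_feats.mean(axis=0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_g = np.cov(gen_feats, rowvar=False)
    covmean = linalg.sqrtm(cov_r @ cov_g)
    if np.iscomplexobj(covmean):     # discard tiny numerical imaginary parts
        covmean = covmean.real
    diff = mu_r - mu_g
    return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))
```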

Denoising Deep Generative Models

1 code implementation • 30 Nov 2022 • Gabriel Loaiza-Ganem, Brendan Leigh Ross, Luhuan Wu, John P. Cunningham, Jesse C. Cresswell, Anthony L. Caterini

Likelihood-based deep generative models have recently been shown to exhibit pathological behaviour under the manifold hypothesis as a consequence of using high-dimensional densities to model data with low-dimensional structure.

Denoising
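
A quick numerical illustration of the pathology described in the abstract, assuming a toy setting: fitting a full-dimensional Gaussian to data concentrated near a 1-D subspace of 2-D space collapses the off-manifold variance, so the fitted density on the data spikes toward infinity as the data approach the manifold.

```python
import numpy as np

rng = np.random.default_rng(0)
t = rng.standard_normal(10_000)
# Data lying (almost exactly) on a 1-D manifold embedded in 2-D.
X = np.stack([t, 1e-6 * rng.standard_normal(10_000)], axis=1)

cov = np.cov(X, rowvar=False)          # off-manifold variance ~ 1e-12
# Log-density of the maximum-likelihood Gaussian at its mean,
# -0.5 * log det(2*pi*cov), grows without bound as the
# off-manifold variance shrinks toward zero.
print(-0.5 * np.log(np.linalg.det(2 * np.pi * cov)))
```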

CaloMan: Fast generation of calorimeter showers with density estimation on learned manifolds

no code implementations • 23 Nov 2022 • Jesse C. Cresswell, Brendan Leigh Ross, Gabriel Loaiza-Ganem, Humberto Reyes-Gonzalez, Marco Letizia, Anthony L. Caterini

Precision measurements and new physics searches at the Large Hadron Collider require efficient simulations of particle propagation and interactions within the detectors.

Density Estimation

Find Your Friends: Personalized Federated Learning with the Right Collaborators

no code implementations • 12 Oct 2022 • Yi Sui, Junfeng Wen, Yenson Lau, Brendan Leigh Ross, Jesse C. Cresswell

In the traditional federated learning setting, a central server coordinates a network of clients to train one global model.

Personalized Federated Learning
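
For context, a minimal sketch of the traditional setup the abstract describes: federated averaging, where a central server aggregates client updates into a single global model. The personalization mechanism that is the paper's focus is not shown, and `local_update` is a hypothetical stand-in for a client's local training routine.

```python
def fedavg_round(global_weights, client_datasets, local_update):
    """One round of federated averaging over numpy weight arrays.

    local_update(weights, data) returns (new_weights, num_examples)
    after local training on one client; it is a hypothetical stand-in.
    """
    updates, sizes = [], []
    for data in client_datasets:
        w, n = local_update(global_weights, data)
        updates.append(w)
        sizes.append(float(n))
    total = sum(sizes)
    # Weight each client's model by its share of the total data.
    return sum(w * (n / total) for w, n in zip(updates, sizes))
```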

Verifying the Union of Manifolds Hypothesis for Image Data

1 code implementation • 6 Jul 2022 • Bradley C. A. Brown, Anthony L. Caterini, Brendan Leigh Ross, Jesse C. Cresswell, Gabriel Loaiza-Ganem

Assuming that data lies on a single manifold implies that the intrinsic dimension is identical across the entire data space, and does not allow for subregions of this space to have a different number of factors of variation.

Inductive Bias
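
One way to make "different intrinsic dimension in different subregions" concrete is a local estimator; below is a sketch of the Levina-Bickel maximum-likelihood estimator based on k-nearest-neighbour distances, one common choice rather than necessarily the estimator used in the paper.

```python
import numpy as np

def lid_mle(x, data, k=20):
    """Levina-Bickel MLE of the local intrinsic dimension at point x,
    using distances from x to its k nearest neighbours in `data`."""
    dists = np.sort(np.linalg.norm(data - x, axis=1))
    dists = dists[dists > 0][:k]        # drop x itself if it is in `data`
    # Inverse of the mean log-ratio between the k-th distance and the rest.
    return (k - 1) / np.sum(np.log(dists[-1] / dists[:-1]))
```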

Neural Implicit Manifold Learning for Topology-Aware Density Estimation

1 code implementation • 22 Jun 2022 • Brendan Leigh Ross, Gabriel Loaiza-Ganem, Anthony L. Caterini, Jesse C. Cresswell

We then learn the probability density within $\mathcal{M}$ with a constrained energy-based model, which employs a constrained variant of Langevin dynamics to train and sample from the learned manifold.

Density Estimation
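
For reference, plain (unconstrained) Langevin dynamics for sampling from an energy-based model; the paper's contribution is a constrained variant that keeps iterates on the learned manifold $\mathcal{M}$, which this sketch does not implement. `grad_energy` is a stand-in for the gradient of a learned energy function.

```python
import numpy as np

def langevin_sample(grad_energy, x0, step=1e-2, n_steps=1000, seed=0):
    """Unadjusted Langevin dynamics for an EBM with energy E(x):
        x_{t+1} = x_t - (step / 2) * grad E(x_t) + sqrt(step) * noise
    """
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    for _ in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x - 0.5 * step * grad_energy(x) + np.sqrt(step) * noise
    return x
```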

Disparate Impact in Differential Privacy from Gradient Misalignment

1 code implementation • 15 Jun 2022 • Maria S. Esipova, Atiyeh Ashari Ghomi, Yaqiao Luo, Jesse C. Cresswell

As machine learning becomes more widespread throughout society, aspects including data privacy and fairness must be carefully considered, and are crucial for deployment in highly regulated industries.

Fairness

Diagnosing and Fixing Manifold Overfitting in Deep Generative Models

2 code implementations • 14 Apr 2022 • Gabriel Loaiza-Ganem, Brendan Leigh Ross, Jesse C. Cresswell, Anthony L. Caterini

We propose a class of two-step procedures consisting of a dimensionality reduction step followed by maximum-likelihood density estimation, and prove that they recover the data-generating distribution in the nonparametric regime, thus avoiding manifold overfitting.

Density Estimation • Dimensionality Reduction
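
A toy sketch of the two-step recipe described in the abstract, using off-the-shelf stand-ins: a dimensionality-reduction step (PCA here, in place of a learned autoencoder) followed by maximum-likelihood density estimation on the low-dimensional representation (a Gaussian mixture here).

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Toy data lying near a 2-D subspace of a 10-D ambient space.
latent = rng.standard_normal((2000, 2))
X = latent @ rng.standard_normal((2, 10)) + 0.01 * rng.standard_normal((2000, 10))

# Step 1: dimensionality reduction onto (approximately) the data manifold.
reducer = PCA(n_components=2).fit(X)
Z = reducer.transform(X)

# Step 2: maximum-likelihood density estimation in the reduced space.
density = GaussianMixture(n_components=5, random_state=0).fit(Z)
print(density.score(Z))    # average log-likelihood of the low-dimensional codes
```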

Decentralized Federated Learning through Proxy Model Sharing

1 code implementation • 22 Nov 2021 • Shivam Kalra, Junfeng Wen, Jesse C. Cresswell, Maksims Volkovs, Hamid R. Tizhoosh

Institutions in highly regulated domains such as finance and healthcare often have restrictive rules around data sharing.

Federated Learning • whole slide images

Tractable Density Estimation on Learned Manifolds with Conformal Embedding Flows

1 code implementation • NeurIPS 2021 • Brendan Leigh Ross, Jesse C. Cresswell

Normalizing flows are generative models that provide tractable density estimation via an invertible transformation from a simple base distribution to a complex target distribution.

Density Estimation
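
The tractable density the abstract refers to comes from the change-of-variables formula, log p_X(x) = log p_Z(f(x)) + log |det Jf(x)|; a minimal sketch with a single elementwise affine map standing in for the invertible flow.

```python
import numpy as np
from scipy.stats import norm

def affine_flow_log_prob(x, scale, shift):
    """Log-density of x under a one-layer affine flow f(x) = (x - shift) / scale
    with a standard normal base distribution:
        log p_X(x) = log p_Z(f(x)) + log |det df/dx|
    """
    z = (x - shift) / scale
    log_det_jacobian = -np.sum(np.log(np.abs(scale)))   # diagonal Jacobian
    return norm.logpdf(z).sum() + log_det_jacobian
```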

C-Learning: Horizon-Aware Cumulative Accessibility Estimation

1 code implementation • ICLR 2021 • Panteha Naderian, Gabriel Loaiza-Ganem, Harry J. Braviner, Anthony L. Caterini, Jesse C. Cresswell, Tong Li, Animesh Garg

In order to address these limitations, we introduce the concept of cumulative accessibility functions, which measure the reachability of a goal from a given state within a specified horizon.

Continuous Control • Motion Planning
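
A hedged illustration of the quantity the abstract defines, in a small tabular setting assumed here for clarity: the best achievable probability of reaching a goal state within a given horizon, computed by dynamic programming. The paper learns a function of this kind with function approximation; the recursion below only illustrates the definition as stated in the abstract.

```python
import numpy as np

def reachability_within_horizon(P, goal, horizon):
    """P[a, s, s'] are transition probabilities of a small tabular MDP.

    Returns A with A[h, s] = best achievable probability of having
    reached `goal` within h steps when starting from state s
    (the goal is treated as absorbing)."""
    n_actions, n_states, _ = P.shape
    A = np.zeros((horizon + 1, n_states))
    A[0, goal] = 1.0
    for h in range(1, horizon + 1):
        # Per state, pick the action maximizing expected reachability one step later.
        A[h] = np.max(P @ A[h - 1], axis=0)
        A[h, goal] = 1.0
    return A
```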
