Search Results for author: Gabriel Loaiza-Ganem

Found 23 papers, 17 papers with code

Deep Generative Models through the Lens of the Manifold Hypothesis: A Survey and New Connections

no code implementations 3 Apr 2024 Gabriel Loaiza-Ganem, Brendan Leigh Ross, Rasa Hosseinzadeh, Anthony L. Caterini, Jesse C. Cresswell

This manifold lens provides both clarity as to why some DGMs (e.g. diffusion models and some generative adversarial networks) empirically surpass others (e.g. likelihood-based models such as variational autoencoders, normalizing flows, or energy-based models) at sample generation, and guidance for devising more performant DGMs.

A Geometric Explanation of the Likelihood OOD Detection Paradox

1 code implementation 27 Mar 2024 Hamidreza Kamkari, Brendan Leigh Ross, Jesse C. Cresswell, Anthony L. Caterini, Rahul G. Krishnan, Gabriel Loaiza-Ganem

We also show that this scenario can be identified through local intrinsic dimension (LID) estimation, and propose a method for OOD detection which pairs the likelihoods and LID estimates obtained from a pre-trained DGM.
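
A minimal sketch of how the two signals could be paired in practice, assuming in-distribution reference statistics are available; the combination rule below is hypothetical and meant only to convey the idea of using both quantities, not the paper's actual rule.

    import numpy as np

    def paired_ood_score(test_loglik, test_lid, ref_loglik, ref_lid):
        """Hypothetical pairing of log-likelihoods and LID estimates for OOD scoring.

        All inputs are 1-D arrays; ref_* come from held-out in-distribution data.
        Lower scores mean "more atypical" relative to the in-distribution references.
        """
        def typicality(values, reference):
            # Two-sided percentile rank of each value among the reference statistics.
            pct = np.mean(reference[None, :] <= values[:, None], axis=1)
            return np.minimum(pct, 1.0 - pct)

        return np.minimum(typicality(test_loglik, ref_loglik),
                          typicality(test_lid, ref_lid))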

Exposing flaws of generative model evaluation metrics and their unfair treatment of diffusion models

2 code implementations NeurIPS 2023 George Stein, Jesse C. Cresswell, Rasa Hosseinzadeh, Yi Sui, Brendan Leigh Ross, Valentin Villecroze, Zhaoyan Liu, Anthony L. Caterini, J. Eric T. Taylor, Gabriel Loaiza-Ganem

Comparing 17 modern metrics for evaluating the overall performance, fidelity, diversity, rarity, and memorization of generative models, we find that the state-of-the-art perceptual realism of diffusion models, as judged by humans, is not reflected in commonly reported metrics such as FID.

Memorization
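
Since the snippet above singles out FID, here is a short reference implementation of the standard Fréchet distance between two sets of pre-extracted features; the choice of feature extractor, one of the issues studied in the paper, is left outside this sketch.

    import numpy as np
    from scipy import linalg

    def frechet_distance(feats_real, feats_gen):
        """FID-style Frechet distance between two (n, d) feature arrays."""
        mu_r, mu_g = feats_real.mean(axis=0), feats_gen.mean(axis=0)
        cov_r = np.cov(feats_real, rowvar=False)
        cov_g = np.cov(feats_gen, rowvar=False)
        covmean = linalg.sqrtm(cov_r @ cov_g)      # matrix square root
        if np.iscomplexobj(covmean):               # drop tiny imaginary residue
            covmean = covmean.real
        diff = mu_r - mu_g
        return float(diff @ diff + np.trace(cov_r + cov_g - 2.0 * covmean))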

TR0N: Translator Networks for 0-Shot Plug-and-Play Conditional Generation

2 code implementations 26 Apr 2023 Zhaoyan Liu, Noel Vouitsis, Satya Krishna Gorti, Jimmy Ba, Gabriel Loaiza-Ganem

We propose TR0N, a highly general framework to turn pre-trained unconditional generative models, such as GANs and VAEs, into conditional models.

Text-to-Image Generation

Denoising Deep Generative Models

1 code implementation 30 Nov 2022 Gabriel Loaiza-Ganem, Brendan Leigh Ross, Luhuan Wu, John P. Cunningham, Jesse C. Cresswell, Anthony L. Caterini

Likelihood-based deep generative models have recently been shown to exhibit pathological behaviour under the manifold hypothesis as a consequence of using high-dimensional densities to model data with low-dimensional structure.

Denoising

Relating Regularization and Generalization through the Intrinsic Dimension of Activations

no code implementations 23 Nov 2022 Bradley C. A. Brown, Jordan Juravsky, Anthony L. Caterini, Gabriel Loaiza-Ganem

Given a pair of models with similar training set performance, it is natural to assume that the model that possesses simpler internal representations would exhibit better generalization.

Image Classification

CaloMan: Fast generation of calorimeter showers with density estimation on learned manifolds

no code implementations 23 Nov 2022 Jesse C. Cresswell, Brendan Leigh Ross, Gabriel Loaiza-Ganem, Humberto Reyes-Gonzalez, Marco Letizia, Anthony L. Caterini

Precision measurements and new physics searches at the Large Hadron Collider require efficient simulations of particle propagation and interactions within the detectors.

Density Estimation

Verifying the Union of Manifolds Hypothesis for Image Data

1 code implementation 6 Jul 2022 Bradley C. A. Brown, Anthony L. Caterini, Brendan Leigh Ross, Jesse C. Cresswell, Gabriel Loaiza-Ganem

Assuming that data lies on a single manifold implies intrinsic dimension is identical across the entire data space, and does not allow for subregions of this space to have a different number of factors of variation.

Inductive Bias
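
One standard way to check whether intrinsic dimension varies across a dataset is the Levina-Bickel maximum-likelihood estimator of local intrinsic dimension from k-nearest-neighbour distances; this generic estimator is shown for intuition and is not necessarily the one used in the paper.

    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    def lid_mle(data, k=20):
        """Levina-Bickel MLE of local intrinsic dimension at every point of `data`.

        data: (n, d) array with distinct rows; returns an (n,) array of LID estimates.
        """
        nn = NearestNeighbors(n_neighbors=k + 1).fit(data)
        dists, _ = nn.kneighbors(data)            # (n, k+1); first column is the self-distance
        dists = dists[:, 1:]                      # T_1, ..., T_k for each point
        log_ratios = np.log(dists[:, -1:] / dists[:, :-1])   # log(T_k / T_j), j < k
        return (k - 1) / log_ratios.sum(axis=1)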

Neural Implicit Manifold Learning for Topology-Aware Density Estimation

1 code implementation 22 Jun 2022 Brendan Leigh Ross, Gabriel Loaiza-Ganem, Anthony L. Caterini, Jesse C. Cresswell

We then learn the probability density within $\mathcal{M}$ with a constrained energy-based model, which employs a constrained variant of Langevin dynamics to train and sample from the learned manifold.

Density Estimation
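
For context, the unconstrained Langevin update that energy-based models commonly use for sampling is sketched below; the paper's constrained variant additionally keeps iterates on the learned manifold $\mathcal{M}$, which is not shown here.

    import numpy as np

    def langevin_sample(grad_log_density, x0, step=1e-2, n_steps=1000, rng=None):
        """Unadjusted Langevin dynamics: x <- x + (step/2) * grad log p(x) + sqrt(step) * noise."""
        rng = np.random.default_rng() if rng is None else rng
        x = np.array(x0, dtype=float)
        for _ in range(n_steps):
            x = x + 0.5 * step * grad_log_density(x) + np.sqrt(step) * rng.standard_normal(x.shape)
        return x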

On the Normalizing Constant of the Continuous Categorical Distribution

2 code implementations 28 Apr 2022 Elliott Gordon-Rodriguez, Gabriel Loaiza-Ganem, Andres Potapczynski, John P. Cunningham

This family enjoys remarkable mathematical simplicity; its density function resembles that of the Dirichlet distribution, but with a normalizing constant that can be written in closed form using elementary functions only.
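
Concretely, the resemblance mentioned above is that the continuous categorical swaps the roles of parameter and variable in the Dirichlet-style exponent; a sketch of the two densities on the simplex $\Delta^{K-1}$ follows (the closed-form normalizing constant $C(\lambda)$ is the subject of the paper and not reproduced here).

    \mathrm{CC}(x;\lambda) = C(\lambda)\prod_{i=1}^{K}\lambda_i^{x_i},
    \qquad
    \mathrm{Dir}(x;\alpha) \propto \prod_{i=1}^{K} x_i^{\alpha_i - 1},
    \qquad x \in \Delta^{K-1}.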

Diagnosing and Fixing Manifold Overfitting in Deep Generative Models

2 code implementations 14 Apr 2022 Gabriel Loaiza-Ganem, Brendan Leigh Ross, Jesse C. Cresswell, Anthony L. Caterini

We propose a class of two-step procedures consisting of a dimensionality reduction step followed by maximum-likelihood density estimation, and prove that they recover the data-generating distribution in the nonparametric regime, thus avoiding manifold overfitting.

Density Estimation, Dimensionality Reduction
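
A minimal sketch of the two-step recipe using off-the-shelf components, with PCA standing in for the dimensionality reduction step and a Gaussian mixture for the maximum-likelihood step; the paper instantiates both steps with deep models (e.g. an autoencoder followed by a flow or energy-based model).

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.mixture import GaussianMixture

    X = np.random.randn(1000, 50)                       # placeholder high-dimensional data
    reducer = PCA(n_components=5).fit(X)                # step 1: dimensionality reduction
    Z = reducer.transform(X)
    density = GaussianMixture(n_components=10).fit(Z)   # step 2: MLE density estimation on codes

    z_new, _ = density.sample(16)                       # sample codes, then decode to data space
    x_new = reducer.inverse_transform(z_new)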

Bayesian Nonparametrics for Offline Skill Discovery

1 code implementation 9 Feb 2022 Valentin Villecroze, Harry J. Braviner, Panteha Naderian, Chris J. Maddison, Gabriel Loaiza-Ganem

Skills or low-level policies in reinforcement learning are temporally extended actions that can speed up learning and enable complex behaviours.

Imitation Learning, Reinforcement Learning +2

Entropic Issues in Likelihood-Based OOD Detection

no code implementations NeurIPS Workshop ICBINB 2021 Anthony L. Caterini, Gabriel Loaiza-Ganem

This analysis provides further explanation for the success of OOD detection methods based on likelihood ratios, as the problematic entropy term cancels out in expectation.

Out of Distribution (OOD) Detection
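
The cancellation referred to above follows from the standard decomposition of an expected log-likelihood into entropy and KL terms (a sketch consistent with the snippet; the paper's exact analysis may differ):

    \mathbb{E}_{x\sim p}\big[\log p_\theta(x)\big] = -H(p) - \mathrm{KL}(p\,\|\,p_\theta)
    \quad\Longrightarrow\quad
    \mathbb{E}_{x\sim p}\big[\log p_\theta(x) - \log p_\phi(x)\big]
    = \mathrm{KL}(p\,\|\,p_\phi) - \mathrm{KL}(p\,\|\,p_\theta),

so the entropy of the data distribution, which can dominate raw likelihood values, drops out of the ratio in expectation.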

Rectangular Flows for Manifold Learning

1 code implementation NeurIPS 2021 Anthony L. Caterini, Gabriel Loaiza-Ganem, Geoff Pleiss, John P. Cunningham

Normalizing flows are invertible neural networks with tractable change-of-volume terms, which allow optimization of their parameters to be efficiently performed via maximum likelihood.

Density Estimation, Out-of-Distribution Detection
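
For reference, the change-of-volume term mentioned above and its injective ("rectangular") analogue for a map $g:\mathbb{R}^d\to\mathbb{R}^D$ with $d<D$, which is the quantity such flows must handle tractably (a sketch under the usual smoothness and invertibility assumptions):

    \log p_X(x) = \log p_Z\big(f(x)\big) + \log\big|\det J_f(x)\big|,
    \qquad
    \log p_X\big(g(z)\big) = \log p_Z(z) - \tfrac{1}{2}\log\det\big(J_g(z)^\top J_g(z)\big).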

C-Learning: Horizon-Aware Cumulative Accessibility Estimation

1 code implementation ICLR 2021 Panteha Naderian, Gabriel Loaiza-Ganem, Harry J. Braviner, Anthony L. Caterini, Jesse C. Cresswell, Tong Li, Animesh Garg

In order to address these limitations, we introduce the concept of cumulative accessibility functions, which measure the reachability of a goal from a given state within a specified horizon.

Continuous Control, Motion Planning

Uses and Abuses of the Cross-Entropy Loss: Case Studies in Modern Deep Learning

2 code implementations NeurIPS Workshop ICBINB 2020 Elliott Gordon-Rodriguez, Gabriel Loaiza-Ganem, Geoff Pleiss, John P. Cunningham

Modern deep learning is primarily an experimental science, in which empirical advances occasionally come at the expense of probabilistic rigor.

The continuous categorical: a novel simplex-valued exponential family

2 code implementations ICML 2020 Elliott Gordon-Rodriguez, Gabriel Loaiza-Ganem, John P. Cunningham

Simplex-valued data appear throughout statistics and machine learning, for example in the context of transfer learning and compression of deep networks.

Neural Network Compression, Transfer Learning

Invertible Gaussian Reparameterization: Revisiting the Gumbel-Softmax

1 code implementation NeurIPS 2020 Andres Potapczynski, Gabriel Loaiza-Ganem, John P. Cunningham

The Gumbel-Softmax is a continuous distribution over the simplex that is often used as a relaxation of discrete distributions.
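
For context, the standard Gumbel-Softmax sampler that this paper revisits is sketched below; the invertible Gaussian reparameterization proposed as a replacement is not shown.

    import numpy as np

    def gumbel_softmax_sample(logits, temperature=0.5, rng=None):
        """Draw one relaxed sample on the simplex from a categorical with given logits."""
        rng = np.random.default_rng() if rng is None else rng
        logits = np.asarray(logits, dtype=float)
        u = rng.uniform(1e-12, 1.0, size=logits.shape)
        gumbel = -np.log(-np.log(u))          # Gumbel(0, 1) noise
        y = (logits + gumbel) / temperature
        y = y - y.max()                       # numerical stability
        return np.exp(y) / np.exp(y).sum()    # softmax -> point on the simplex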

The continuous Bernoulli: fixing a pervasive error in variational autoencoders

2 code implementations NeurIPS 2019 Gabriel Loaiza-Ganem, John P. Cunningham

Variational autoencoders (VAE) have quickly become a central tool in machine learning, applicable to a broad range of data types and latent variable models.
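
The distribution the paper introduces has a simple closed form: for $x\in[0,1]$ and $\lambda\in(0,1)$,

    \mathcal{CB}(x\,|\,\lambda) = C(\lambda)\,\lambda^{x}(1-\lambda)^{1-x},
    \qquad
    C(\lambda) = \frac{2\tanh^{-1}(1-2\lambda)}{1-2\lambda}\ \ (\lambda\neq\tfrac12),
    \qquad C(\tfrac12) = 2,

i.e. the Bernoulli functional form properly normalized over the unit interval, which is the correction proposed for VAE decoders applied to $[0,1]$-valued data.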

Deep Random Splines for Point Process Intensity Estimation

no code implementations ICLR Workshop DeepGenStruct 2019 Gabriel Loaiza-Ganem, John P. Cunningham

Gaussian processes are the leading class of distributions on random functions, but they suffer from well known issues including difficulty scaling and inflexibility with respect to certain shape constraints (such as nonnegativity).

Gaussian Processes, Point Processes

Deep Random Splines for Point Process Intensity Estimation of Neural Population Data

2 code implementations NeurIPS 2019 Gabriel Loaiza-Ganem, Sean M. Perkins, Karen E. Schroeder, Mark M. Churchland, John P. Cunningham

Gaussian processes are the leading class of distributions on random functions, but they suffer from well known issues including difficulty scaling and inflexibility with respect to certain shape constraints (such as nonnegativity).

Dimensionality Reduction, Gaussian Processes +1

Maximum Entropy Flow Networks

no code implementations 12 Jan 2017 Gabriel Loaiza-Ganem, Yuanjun Gao, John P. Cunningham

Maximum entropy modeling is a flexible and popular framework for formulating statistical models given partial knowledge.

Stochastic Optimization
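
For reference, the generic maximum entropy problem being addressed, with constraints encoding the available partial knowledge (the flow-network parameterization the paper introduces is not shown):

    \max_{p}\ H(p) = -\int p(x)\log p(x)\,dx
    \quad\text{s.t.}\quad
    \mathbb{E}_{p}\big[T_i(x)\big] = c_i,\ \ i=1,\dots,m,

whose solution takes the exponential-family form $p^*(x)\propto\exp\big(\sum_i \eta_i T_i(x)\big)$.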
