Search Results for author: Paul Hand

Found 23 papers, 5 papers with code

Theoretical Perspectives on Deep Learning Methods in Inverse Problems

no code implementations 29 Jun 2022 Jonathan Scarlett, Reinhard Heckel, Miguel R. D. Rodrigues, Paul Hand, Yonina C. Eldar

In recent years, there have been significant advances in the use of deep learning methods in inverse problems such as denoising, compressive sensing, inpainting, and super-resolution.

Compressive Sensing, Deep Learning, +2

Analysis of Catastrophic Forgetting for Random Orthogonal Transformation Tasks in the Overparameterized Regime

no code implementations 1 Jun 2022 Daniel Goldfarb, Paul Hand

We show experimentally that in permuted MNIST image classification tasks, the generalization performance of multilayer perceptrons trained by vanilla stochastic gradient descent can be improved by overparameterization, and the extent of the performance increase achieved by overparameterization is comparable to that of state-of-the-art continual learning algorithms.

Continual Learning, Image Classification, +1
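
The permuted-MNIST benchmark referenced above is easy to reproduce; here is a minimal sketch, where the `images` array and the helper name are our own illustration rather than the paper's code.

```python
import numpy as np

def make_permuted_tasks(images, n_tasks, seed=0):
    """Build a sequence of continual-learning tasks, each applying one fixed
    random pixel permutation to every image (task 0 keeps the identity)."""
    rng = np.random.default_rng(seed)
    flat = images.reshape(len(images), -1)      # e.g. (N, 784) for MNIST
    tasks = [flat]                              # task 0: original pixels
    for _ in range(n_tasks - 1):
        perm = rng.permutation(flat.shape[1])   # a new permutation per task
        tasks.append(flat[:, perm])
    return tasks

# A model is trained on tasks[0], tasks[1], ... in sequence; catastrophic
# forgetting is the accuracy drop on earlier tasks after training later ones.
```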

Regularized Training of Intermediate Layers for Generative Models for Inverse Problems

1 code implementation 8 Mar 2022 Sean Gunn, Jorio Cocola, Paul Hand

For both of these inversion algorithms, we introduce a new regularized GAN training algorithm and demonstrate that the learned generative model results in lower reconstruction errors across a wide range of undersampling ratios when solving compressed sensing, inpainting, and super-resolution problems.

Super-Resolution

Score-based Generative Neural Networks for Large-Scale Optimal Transport

1 code implementation NeurIPS 2021 Max Daniels, Tyler Maunu, Paul Hand

We consider the fundamental problem of sampling the optimal transport coupling between given source and target distributions.
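
The coupling in question can be made concrete. A standard entropically regularized formulation (the textbook objective, stated here for orientation rather than as the paper's exact loss) is

$$\pi^\star \;=\; \operatorname*{arg\,min}_{\pi \in \Pi(\mu,\nu)} \int c(x,y)\, d\pi(x,y) \;+\; \varepsilon\, \mathrm{KL}\left(\pi \,\|\, \mu \otimes \nu\right),$$

where $\Pi(\mu,\nu)$ is the set of couplings with marginals $\mu$ and $\nu$; "sampling the coupling" means drawing pairs $(x, y) \sim \pi^\star$.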

Generator Surgery for Compressed Sensing

no code implementations22 Feb 2021 Niklas Smedemark-Margulies, Jung Yeon Park, Max Daniels, Rose Yu, Jan-Willem van de Meent, Paul Hand

We introduce a method for achieving low representation error using generators as signal priors.
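
As we read the abstract, the "surgery" removes the first blocks of a trained generator and optimizes over the exposed intermediate activation rather than the original latent code. A hypothetical PyTorch sketch (all names are ours, not the paper's API):

```python
import torch

def recover(generator_tail, A, y, h_shape, steps=500, lr=1e-2):
    """Fit linear measurements y ~ A x by optimizing the input of a truncated
    generator: the first blocks were cut away, so the optimization variable h
    is an intermediate activation with a much richer range than the latent z."""
    h = torch.randn(h_shape, requires_grad=True)
    opt = torch.optim.Adam([h], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        x_hat = generator_tail(h)               # remaining generator layers
        loss = ((A @ x_hat.flatten() - y) ** 2).sum()
        loss.backward()
        opt.step()
    return generator_tail(h).detach()
```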

Optimal Sample Complexity of Subgradient Descent for Amplitude Flow via Non-Lipschitz Matrix Concentration

no code implementations 31 Oct 2020 Paul Hand, Oscar Leong, Vladislav Voroninski

We establish local convergence of subgradient descent with optimal sample complexity based on the uniform concentration of a random, discontinuous matrix-valued operator arising from the objective's gradient dynamics.
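
For orientation, the amplitude-flow objective and a subgradient step look roughly as follows; this is a sketch of the standard formulation with measurements $b_i = |\langle a_i, x_\star \rangle|$, not the paper's code.

```python
import numpy as np

def amplitude_flow(A, b, x0, steps=1000, lr=0.1):
    """Subgradient descent on f(x) = (1/2m) sum_i (|<a_i, x>| - b_i)^2.
    |<a_i, x>| is non-differentiable at 0; sign(0) = 0 picks a valid
    subgradient, which is why plain gradient arguments do not apply."""
    m = len(b)
    x = x0.copy()
    for _ in range(steps):
        z = A @ x
        x = x - lr * (A.T @ ((np.abs(z) - b) * np.sign(z))) / m
    return x
```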

Compressive Phase Retrieval: Optimal Sample Complexity with Deep Generative Priors

no code implementations 24 Aug 2020 Paul Hand, Oscar Leong, Vladislav Voroninski

Advances in compressive sensing have provided reconstruction algorithms for sparse signals from linear measurements with optimal sample complexity, but natural extensions of this methodology to nonlinear inverse problems have been met with potentially fundamental sample-complexity bottlenecks.

Compressive Sensing, Retrieval

Nonasymptotic Guarantees for Spiked Matrix Recovery with Generative Priors

no code implementations NeurIPS 2020 Jorio Cocola, Paul Hand, Vladislav Voroninski

Many problems in statistics and machine learning require the reconstruction of a rank-one signal matrix from noisy data.
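
Concretely, a representative model from this family is a spiked (rank-one plus noise) observation with the spike constrained to a generative prior; our paraphrase of the setting:

$$Y \;=\; u\,u^{\top} + \nu W, \qquad u = G(z_\star) \in \mathbb{R}^n,$$

where $W$ is a symmetric noise matrix and $\nu$ the noise level, and recovery minimizes a fit such as $\min_z \| Y - G(z)G(z)^{\top} \|_F^2$ over the latent space.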

Global Convergence of Sobolev Training for Overparameterized Neural Networks

no code implementations 14 Jun 2020 Jorio Cocola, Paul Hand

Sobolev loss is used when training a network to approximate the values and derivatives of a target function at a prescribed set of input points.
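
A minimal PyTorch sketch of such a Sobolev loss; the scalar-output `model` and the target-gradient tensor `dy` are placeholders we introduce for illustration.

```python
import torch

def sobolev_loss(model, x, y, dy):
    """Penalize error in both the values and the input-gradients of the net.
    x: (N, d) inputs, y: (N, 1) target values, dy: (N, d) target gradients."""
    x = x.clone().requires_grad_(True)
    pred = model(x)                                        # (N, 1)
    # d(pred)/dx via autograd; create_graph=True keeps the loss differentiable
    grads = torch.autograd.grad(pred.sum(), x, create_graph=True)[0]
    return ((pred - y) ** 2).mean() + ((grads - dy) ** 2).mean()
```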

Reducing the Representation Error of GAN Image Priors Using the Deep Decoder

no code implementations 23 Jan 2020 Max Daniels, Paul Hand, Reinhard Heckel

In this paper, we demonstrate a method for reducing the representation error of GAN priors by modeling images as the linear combination of a GAN prior with a Deep Decoder.

Compressive Sensing, Decoder, +1
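
A sketch of the hybrid prior described above: the image is modeled as $G(z) + \mathrm{DD}(w)$ and both latent variables are fit to the measurements. The function names and the joint Adam loop are our own illustration.

```python
import torch

def hybrid_recover(gan, deep_decoder, A, y, z_dim, w_shape, steps=1000):
    """Model the image as G(z) + DD(w): the trained GAN supplies a learned
    prior, while the untrained Deep Decoder absorbs the residual the GAN
    cannot represent (its representation error)."""
    z = torch.randn(z_dim, requires_grad=True)
    w = torch.randn(w_shape, requires_grad=True)
    opt = torch.optim.Adam([z, w], lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        x_hat = gan(z) + deep_decoder(w)        # linear combination of priors
        loss = ((A @ x_hat.flatten() - y) ** 2).sum()
        loss.backward()
        opt.step()
    return (gan(z) + deep_decoder(w)).detach()
```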

Removing the Representation Error of GAN Image Priors Using the Deep Decoder

no code implementations 25 Sep 2019 Max Daniels, Reinhard Heckel, Paul Hand

In this paper, we demonstrate a method for removing the representation error of a GAN when used as a prior in inverse problems by modeling images as the linear combination of a GAN with a Deep Decoder.

Compressive Sensing, Decoder, +1

Global Guarantees for Blind Demodulation with Generative Priors

1 code implementation NeurIPS 2019 Paul Hand, Babhru Joshi

That is, the objective function has a descent direction at every point outside of a small neighborhood around four hyperbolic curves.
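
The hyperbolic curves reflect the scaling ambiguity inherent to demodulation: an entrywise product is unchanged when one factor is scaled up and the other down. For positively homogeneous generators (e.g., bias-free ReLU networks), this reads, in our gloss:

$$G_1(c\,h)\odot G_2(m/c) \;=\; G_1(h)\odot G_2(m), \qquad c > 0,$$

so the solutions form curves rather than isolated points, and any landscape guarantee must exclude neighborhoods of those curves.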

Invertible generative models for inverse problems: mitigating representation error and dataset bias

1 code implementation 28 May 2019 Muhammad Asim, Max Daniels, Oscar Leong, Ali Ahmed, Paul Hand

For compressive sensing, invertible priors can yield higher accuracy than sparsity priors across almost all undersampling ratios, and due to their lack of representation error, invertible priors can yield better reconstructions than GAN priors for images that have rare features of variation within the biased training set, including out-of-distribution natural images.

Compressive Sensing, Decoder, +2
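
A sketch of compressive-sensing recovery with an invertible (flow) prior: because $G$ is a bijection, every image is exactly representable, and one simply optimizes the latent. The regularization weight `gamma` is an illustrative choice, not the paper's setting.

```python
import torch

def flow_recover(G, A, y, z_dim, steps=2000, gamma=0.0):
    """min_z ||A G(z) - y||^2 + gamma ||z||^2 with an invertible generator G.
    Zero representation error: for any image x there is a z with G(z) = x."""
    z = torch.zeros(z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=1e-2)
    for _ in range(steps):
        opt.zero_grad()
        x_hat = G(z)                            # latent -> image
        loss = ((A @ x_hat.flatten() - y) ** 2).sum() + gamma * (z ** 2).sum()
        loss.backward()
        opt.step()
    return G(z).detach()
```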

Deep Denoising: Rate-Optimal Recovery of Structured Signals with a Deep Prior

no code implementations ICLR 2019 Reinhard Heckel, Wen Huang, Paul Hand, Vladislav Voroninski

Deep neural networks provide state-of-the-art performance for image denoising, where the goal is to recover a near noise-free image from a noisy image.

Image Denoising

Deep Decoder: Concise Image Representations from Untrained Non-convolutional Networks

4 code implementations ICLR 2019 Reinhard Heckel, Paul Hand

In this paper, we propose a simple untrained image model, called the deep decoder, which is a deep neural network that can generate natural images from very few weight parameters.

Decoder, Denoising
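
A minimal sketch of a deep-decoder-style network as we understand the paper: fixed bilinear upsampling, pixelwise (1x1) convolutions, ReLU, and channel normalization, with no learned convolutional filters. Hyperparameters here are illustrative.

```python
import torch
import torch.nn as nn

def deep_decoder(k=64, n_blocks=4, out_channels=3):
    """Underparameterized image model: repeated [1x1 conv -> upsample -> ReLU
    -> normalization] blocks, closed by a 1x1 conv and a sigmoid. The only
    parameters are the 1x1 convolution weights."""
    layers = []
    for _ in range(n_blocks):
        layers += [
            nn.Conv2d(k, k, kernel_size=1),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.ReLU(),
            nn.BatchNorm2d(k),                  # stands in for channel norm
        ]
    layers += [nn.Conv2d(k, out_channels, kernel_size=1), nn.Sigmoid()]
    return nn.Sequential(*layers)

# usage: deep_decoder()(torch.randn(1, 64, 16, 16)).shape == (1, 3, 256, 256)
```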

Phase Retrieval Under a Generative Prior

no code implementations NeurIPS 2018 Paul Hand, Oscar Leong, Vladislav Voroninski

Our formulation has provably favorable global geometry for gradient methods, as soon as $m = O(kd^2\log n)$, where $d$ is the depth of the network.

Retrieval
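
Spelling out the setting behind that bound (our paraphrase): the measurements are phaseless, the signal lies in the range of a $d$-layer generative network $G$ with a $k$-dimensional latent, and the result concerns an empirical risk of roughly the form

$$y = |A\,x_\star|, \quad x_\star = G(z_\star), \qquad \min_{z \in \mathbb{R}^k}\ \tfrac{1}{2}\,\big\|\, |A\,G(z)| - y \,\big\|_2^2,$$

whose landscape is provably benign once $m = O(kd^2\log n)$ Gaussian measurements are taken.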

Rate-Optimal Denoising with Deep Neural Networks

no code implementations ICLR 2019 Reinhard Heckel, Wen Huang, Paul Hand, Vladislav Voroninski

Deep neural networks provide state-of-the-art performance for image denoising, where the goal is to recover a near noise-free image from a noisy observation.

Image Denoising

Global Guarantees for Enforcing Deep Generative Priors by Empirical Risk

no code implementations 22 May 2017 Paul Hand, Vladislav Voroninski

We establish that in both cases, under suitable regimes of network layer sizes and a randomness assumption on the network weights, the non-convex objective function given by empirical risk minimization does not have any spurious stationary points.
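
The two cases are, in our reading, direct and compressive observation of a signal in the range of the generator, with empirical risks of the shape

$$\min_{z}\ \| G(z) - x_\star \|_2^2 \qquad\text{and}\qquad \min_{z}\ \| A\,G(z) - A\,x_\star \|_2^2, \qquad x_\star = G(z_\star),$$

and the theorem asserts these non-convex landscapes have no spurious stationary points in the stated regimes.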

ShapeFit and ShapeKick for Robust, Scalable Structure from Motion

no code implementations 7 Aug 2016 Thomas Goldstein, Paul Hand, Choongbum Lee, Vladislav Voroninski, Stefano Soatto

We introduce a new method for location recovery from pairwise directions that leverages an efficient convex program that comes with exact recovery guarantees, even in the presence of adversarial outliers.
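
For reference, the ShapeFit convex program as we understand it from the literature (treat this as a hedged reconstruction, not a quotation): with observed pairwise directions $\gamma_{ij}$ and $P_{\gamma^{\perp}}$ the projector onto the complement of $\gamma$,

$$\min_{\{t_i\}}\ \sum_{(i,j) \in E} \big\| P_{\gamma_{ij}^{\perp}}(t_j - t_i) \big\|_2 \quad \text{s.t.} \quad \sum_{(i,j)\in E} \langle t_j - t_i,\ \gamma_{ij} \rangle = 1, \quad \sum_i t_i = 0,$$

a sum-of-norms objective whose solution is exact under the paper's recovery conditions.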

Exact simultaneous recovery of locations and structure from known orientations and corrupted point correspondences

no code implementations 16 Sep 2015 Paul Hand, Choongbum Lee, Vladislav Voroninski

This recovery theorem is based on a set of deterministic conditions that we prove are sufficient for exact recovery.

Translation
