no code implementations • 23 Jan 2024 • Daniel Goldfarb, Itay Evron, Nir Weinberger, Daniel Soudry, Paul Hand
Previous works have analyzed separately how forgetting is affected by either task similarity or overparameterization.
no code implementations • 29 Jun 2022 • Jonathan Scarlett, Reinhard Heckel, Miguel R. D. Rodrigues, Paul Hand, Yonina C. Eldar
In recent years, there have been significant advances in the use of deep learning methods in inverse problems such as denoising, compressive sensing, inpainting, and super-resolution.
no code implementations • 1 Jun 2022 • Daniel Goldfarb, Paul Hand
We show experimentally that on permuted MNIST image classification tasks, the generalization performance of multilayer perceptrons trained by vanilla stochastic gradient descent can be improved by overparameterization, and that the resulting performance gain is comparable to that of state-of-the-art continual learning algorithms.
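The following is a minimal sketch of this experimental setup, assuming PyTorch and torchvision; the layer widths, number of tasks, and training schedule are illustrative placeholders rather than the paper's exact configuration.

```python
# Sketch: overparameterized MLP trained sequentially on permuted-MNIST tasks
# with vanilla SGD (no replay or regularization). Widths, task count, and
# optimizer settings are illustrative only.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

def make_mlp(width=2000):
    # `width` is the overparameterization knob: larger hidden layers, more parameters.
    return nn.Sequential(
        nn.Linear(784, width), nn.ReLU(),
        nn.Linear(width, width), nn.ReLU(),
        nn.Linear(width, 10),
    )

train_set = datasets.MNIST(".", train=True, download=True,
                           transform=transforms.ToTensor())
loader = DataLoader(train_set, batch_size=128, shuffle=True)

model = make_mlp()
opt = torch.optim.SGD(model.parameters(), lr=0.01)   # vanilla SGD
loss_fn = nn.CrossEntropyLoss()

num_tasks = 5
perms = [torch.randperm(784) for _ in range(num_tasks)]   # one fixed pixel permutation per task

for perm in perms:                                # tasks are presented sequentially
    for x, y in loader:                           # one pass per task, for brevity
        x = x.view(x.size(0), -1)[:, perm]        # apply this task's permutation
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
```

Increasing `width` is the only change needed to overparameterize the network; the claim above is that this alone, with plain SGD and no continual-learning machinery, already improves generalization across the sequence of permuted tasks.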
1 code implementation • 8 Mar 2022 • Sean Gunn, Jorio Cocola, Paul Hand
For both of these inversion algorithms, we introduce a new regularized GAN training algorithm and demonstrate that the learned generative model results in lower reconstruction errors across a wide range of undersampling ratios when solving compressed sensing, inpainting, and super-resolution problems.
1 code implementation • NeurIPS 2021 • Max Daniels, Tyler Maunu, Paul Hand
We consider the fundamental problem of sampling the optimal transport coupling between given source and target distributions.
no code implementations • 22 Feb 2021 • Niklas Smedemark-Margulies, Jung Yeon Park, Max Daniels, Rose Yu, Jan-Willem van de Meent, Paul Hand
We introduce a method for achieving low representation error using generators as signal priors.
no code implementations • 31 Oct 2020 • Paul Hand, Oscar Leong, Vladislav Voroninski
We establish local convergence of subgradient descent with optimal sample complexity based on the uniform concentration of a random, discontinuous matrix-valued operator arising from the objective's gradient dynamics.
no code implementations • 24 Aug 2020 • Paul Hand, Oscar Leong, Vladislav Voroninski
Advances in compressive sensing have provided reconstruction algorithms for sparse signals from linear measurements with optimal sample complexity, but natural extensions of this methodology to nonlinear inverse problems have been met with potentially fundamental sample complexity bottlenecks.
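For context, the classical guarantee alluded to here is typified by basis pursuit: a $k$-sparse signal $x^\star \in \mathbb{R}^n$ can be recovered exactly from $m = O(k \log(n/k))$ generic linear measurements by solving a convex program. This is the standard formulation from the compressive sensing literature, not a result of the present paper:

$$ \hat{x} = \arg\min_{x \in \mathbb{R}^n} \|x\|_1 \quad \text{subject to} \quad A x = y, \qquad y = A x^\star, \; A \in \mathbb{R}^{m \times n}. $$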
no code implementations • NeurIPS 2020 • Jorio Cocola, Paul Hand, Vladislav Voroninski
Many problems in statistics and machine learning require the reconstruction of a rank-one signal matrix from noisy data.
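As a concrete instance of the setup described in this entry, consider the standard spiked-matrix observation model (stated generically here; the paper's precise model and normalization may differ):

$$ Y = u v^{\top} + \sigma W, \qquad u \in \mathbb{R}^{m}, \; v \in \mathbb{R}^{n}, \; W_{ij} \sim \mathcal{N}(0,1), $$

where the goal is to estimate the rank-one signal $u v^{\top}$ from the noisy observation $Y$.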
no code implementations • 14 Jun 2020 • Jorio Cocola, Paul Hand
Sobolev loss is used when training a network to approximate the values and derivatives of a target function at a prescribed set of input points.
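A minimal sketch of training with a Sobolev loss, assuming PyTorch; the target function, network size, and equal weighting of the value and derivative terms are illustrative assumptions.

```python
# Sketch: Sobolev training, matching both values and first derivatives of a
# target function at a prescribed set of input points. Target f(x) = sin(x)
# and the equal weighting of the two terms are illustrative choices.
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x = torch.linspace(-3, 3, 256).unsqueeze(1)     # prescribed input points
y, dy = torch.sin(x), torch.cos(x)              # target values and derivatives

for _ in range(2000):
    xr = x.clone().requires_grad_(True)
    pred = net(xr)
    # d(pred)/dx via autograd; create_graph=True so the derivative term is differentiable
    dpred = torch.autograd.grad(pred.sum(), xr, create_graph=True)[0]
    loss = ((pred - y) ** 2).mean() + ((dpred - dy) ** 2).mean()   # value + derivative misfit
    opt.zero_grad()
    loss.backward()
    opt.step()
```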
no code implementations • 23 Jan 2020 • Max Daniels, Paul Hand, Reinhard Heckel
In this paper, we demonstrate a method for reducing the representation error of GAN priors by modeling images as the linear combination of a GAN prior with a Deep Decoder.
no code implementations • 25 Sep 2019 • Max Daniels, Reinhard Heckel, Paul Hand
In this paper, we demonstrate a method for removing the representation error of a GAN when used as a prior in inverse problems by modeling images as the linear combination of a GAN with a Deep Decoder.
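A rough sketch of the hybrid prior described in these two entries, assuming a pretrained generator `G`, an untrained deep-decoder-style network `D`, and a differentiable measurement operator `A`; the simple additive combination, the fixed random decoder input, and the optimizer settings are illustrative assumptions rather than the paper's exact formulation.

```python
# Sketch: invert measurements y ~= A(x) by modeling x as a GAN output plus a
# deep-decoder term, optimizing the GAN latent and the decoder weights jointly.
# G is assumed pretrained; only z and D's parameters are passed to the optimizer.
import torch

def reconstruct(G, D, A, y, z_dim, seed_shape=(1, 64, 16, 16), steps=2000, lr=1e-2):
    z = torch.randn(1, z_dim, requires_grad=True)   # GAN latent code
    b = torch.randn(seed_shape)                     # fixed random input to the deep decoder
    opt = torch.optim.Adam([z, *D.parameters()], lr=lr)
    for _ in range(steps):
        x_hat = G(z) + D(b)                         # image = GAN output + decoder correction
        loss = ((A(x_hat) - y) ** 2).mean()         # measurement-consistency loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (G(z) + D(b)).detach()
```

The decoder term is what absorbs image content outside the GAN's range, which is the representation error these entries refer to.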
1 code implementation • NeurIPS 2019 • Paul Hand, Babhru Joshi
The objective function has a descent direction at every point outside of a small neighborhood around four hyperbolic curves.
1 code implementation • 28 May 2019 • Muhammad Asim, Max Daniels, Oscar Leong, Ali Ahmed, Paul Hand
For compressive sensing, invertible priors can yield higher accuracy than sparsity priors across almost all undersampling ratios. Because they have no representation error, invertible priors can also yield better reconstructions than GAN priors for images with rare features of variation within the biased training set, including out-of-distribution natural images.
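A sketch of the corresponding reconstruction procedure, assuming an invertible (normalizing-flow) generator `G` mapping latent codes to images and a differentiable measurement operator `A`; the quadratic latent penalty and its weight `gamma` are assumptions standing in for whatever regularization the paper actually uses.

```python
# Sketch: compressed sensing with an invertible generative prior. Because G is
# invertible, every image is reachable from some latent code, so the search
# over z has no representation error; the latent penalty biases toward
# high-likelihood images (an assumed, illustrative choice).
import torch

def cs_with_flow_prior(G, A, y, z_dim, gamma=1e-3, steps=3000, lr=1e-2):
    z = torch.zeros(1, z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        residual = A(G(z)) - y                        # linear measurement misfit
        loss = (residual ** 2).sum() + gamma * (z ** 2).sum()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return G(z).detach()
```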
no code implementations • ICLR 2019 • Reinhard Heckel, Wen Huang, Paul Hand, Vladislav Voroninski
Deep neural networks provide state-of-the-art performance for image denoising, where the goal is to recover a near noise-free image from a noisy image.
4 code implementations • ICLR 2019 • Reinhard Heckel, Paul Hand
In this paper, we propose an untrained simple image model, called the deep decoder, which is a deep neural network that can generate natural images from very few weight parameters.
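A compact sketch of a deep-decoder-style network, assuming PyTorch; the channel count, number of upsampling layers, and exact normalization are illustrative and may differ in detail from the published architecture.

```python
# Sketch: a small deep-decoder-style image model. The only parameters are the
# 1x1 convolutions; the input `seed` is a fixed random tensor, not learned data.
import torch
import torch.nn as nn

class DeepDecoderSketch(nn.Module):
    def __init__(self, channels=64, num_up_layers=4, out_channels=3):
        super().__init__()
        layers = []
        for _ in range(num_up_layers):
            layers += [
                nn.Conv2d(channels, channels, kernel_size=1),   # 1x1 conv: mixes channels only
                nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
                nn.ReLU(),
                nn.BatchNorm2d(channels),                       # channel normalization
            ]
        layers += [nn.Conv2d(channels, out_channels, kernel_size=1), nn.Sigmoid()]
        self.net = nn.Sequential(*layers)

    def forward(self, seed):
        return self.net(seed)

# Usage: a fixed 16x16 random seed produces a 256x256 RGB image after 4 upsamplings.
decoder = DeepDecoderSketch()
seed = torch.randn(1, 64, 16, 16)
image = decoder(seed)   # shape (1, 3, 256, 256)
```

To use such a model as an untrained prior for a single image, one fixes the random `seed` and optimizes only the 1x1-convolution weights against a reconstruction or measurement loss.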
no code implementations • NeurIPS 2018 • Paul Hand, Oscar Leong, Vladislav Voroninski
Our formulation has provably favorable global geometry for gradient methods, as soon as $m = O(kd^2\log n)$, where $d$ is the depth of the network.
no code implementations • ICLR 2019 • Reinhard Heckel, Wen Huang, Paul Hand, Vladislav Voroninski
Deep neural networks provide state-of-the-art performance for image denoising, where the goal is to recover a near noise-free image from a noisy observation.
no code implementations • 22 May 2017 • Paul Hand, Vladislav Voroninski
We establish that in both cases, in suitable regimes of network layer sizes and under a randomness assumption on the network weights, the non-convex objective function given by empirical risk minimization does not have any spurious stationary points.
no code implementations • 19 Dec 2016 • Paul Hand, Babhru Joshi
We introduce a convex approach for mixed linear regression over $d$ features.
no code implementations • 7 Aug 2016 • Thomas Goldstein, Paul Hand, Choongbum Lee, Vladislav Voroninski, Stefano Soatto
We introduce a new method for location recovery from pairwise directions that leverages an efficient convex program with exact recovery guarantees, even in the presence of adversarial outliers.
no code implementations • 16 Sep 2015 • Paul Hand, Choongbum Lee, Vladislav Voroninski
This recovery theorem is based on a set of deterministic conditions that we prove are sufficient for exact recovery.
no code implementations • 4 Jun 2015 • Paul Hand, Choongbum Lee, Vladislav Voroninski
We prove that this program recovers a set of $n$ i.i.d.