1 code implementation • 24 Oct 2022 • Matthew Bendel, Rizwan Ahmad, Philip Schniter
In inverse problems, one seeks to reconstruct an image from incomplete and/or degraded measurements.
2 code implementations • 9 Jun 2022 • Saurav K. Shastri, Rizwan Ahmad, Christopher A. Metzler, Philip Schniter
To solve inverse problems, plug-and-play (PnP) methods replace the proximal step in a convex optimization algorithm with a call to an application-specific denoiser, often implemented using a deep neural network (DNN).
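A minimal sketch of the PnP idea, assuming a generic denoise callable in place of a trained DNN denoiser (all names here are illustrative, not the paper's code): each iteration takes a gradient step on the data-fidelity term, then applies the denoiser, which stands in for the proximal operator of an implicit prior.

    import numpy as np

    def pnp_ista(y, A, denoise, step=1.0, iters=100):
        # Recover x from y ~ A @ x by alternating a gradient step on
        # 0.5*||y - A x||^2 with a denoiser call; the denoiser replaces
        # the proximal step of ISTA.
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            grad = A.T @ (A @ x - y)       # gradient of the data-fit term
            x = denoise(x - step * grad)   # denoiser in place of the prox
        return x

As a sanity check, setting denoise to the identity reduces this to plain gradient descent on the least-squares objective.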
no code implementations • NeurIPS 2021 Workshop on Deep Learning and Inverse Problems • Saurav K. Shastri, Rizwan Ahmad, Christopher A. Metzler, Philip Schniter
To solve inverse problems, plug-and-play (PnP) methods have been developed that replace the proximal step in a convex optimization algorithm with a call to an application-specific denoiser, often implemented using a deep neural network (DNN).
no code implementations • 18 Oct 2021 • Sizhuo Liu, Philip Schniter, Rizwan Ahmad
The proposed method, called recovery with a self-calibrated denoiser (ReSiDe), trains the denoiser from the patches of the image being recovered.
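A rough sketch of the self-calibration idea, with a PCA-projection patch denoiser standing in for the DNN that ReSiDe actually trains; all function names and parameters are hypothetical.

    import numpy as np

    def extract_patches(img, p=8, stride=4):
        # Gather p-by-p patches from the current reconstruction.
        H, W = img.shape
        return np.stack([img[i:i + p, j:j + p].ravel()
                         for i in range(0, H - p + 1, stride)
                         for j in range(0, W - p + 1, stride)])

    def self_calibrated_denoise(x_hat, p=8, keep=16):
        # Fit a denoiser to patches of x_hat itself, then apply it to
        # those same patches: here, projection onto a PCA basis learned
        # from the patches stands in for training a DNN denoiser.
        P = extract_patches(x_hat, p)
        mean = P.mean(axis=0)
        _, _, Vt = np.linalg.svd(P - mean, full_matrices=False)
        B = Vt[:keep]                        # learned patch basis
        # (Re-assembling overlapping patches into an image is omitted.)
        return (P - mean) @ B.T @ B + mean   # projection acts as denoising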
no code implementations • 15 Feb 2021 • Michael Wharton, Anne M. Pavy, Philip Schniter
We consider the problem of classifying radar pulses given raw I/Q waveforms in the presence of noise and the absence of synchronization.
1 code implementation • NeurIPS 2020 • Parthe Pandit, Mojtaba Sahraee-Ardakan, Sundeep Rangan, Philip Schniter, Alyson K. Fletcher
In the two-layer neural-network learning problem, this scaling corresponds to the case where the number of input features and the number of training samples grow to infinity while the number of hidden nodes stays fixed.
no code implementations • 4 Aug 2020 • Rémi Gribonval, Antoine Chatalic, Nicolas Keriven, Vincent Schellekens, Laurent Jacques, Philip Schniter
This article considers "compressive learning," an approach to large-scale machine learning where datasets are massively compressed before learning (e.g., clustering, classification, or regression) is performed.
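A minimal sketch of the compression step, assuming a random-Fourier-feature mean embedding as the sketching operator, one common choice in this literature; names and defaults are illustrative.

    import numpy as np

    def sketch_dataset(X, m=200, scale=1.0, seed=0):
        # Compress an n-by-d dataset into a single length-m complex
        # "sketch": the empirical mean of random Fourier features.
        # Learning (e.g., fitting cluster centers to the sketch) then
        # touches only this m-vector instead of all n samples.
        rng = np.random.default_rng(seed)
        Omega = rng.normal(scale=scale, size=(X.shape[1], m))  # frequencies
        return np.exp(1j * X @ Omega).mean(axis=0)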
no code implementations • 8 Feb 2020 • Sizhuo Liu, Edward T. Reehorst, Philip Schniter, Rizwan Ahmad
We compare the reconstruction performance of PnP-DL to that of compressed sensing (CS) using eight breath-held and ten real-time (RT) free-breathing cardiac cine datasets.
no code implementations • 26 Jan 2020 • Parthe Pandit, Mojtaba Sahraee-Ardakan, Sundeep Rangan, Philip Schniter, Alyson K. Fletcher
We consider the problem of inferring the input and hidden variables of a stochastic multi-layer neural network from an observation of the output.
no code implementations • 8 Nov 2019 • Parthe Pandit, Mojtaba Sahraee-Ardakan, Sundeep Rangan, Philip Schniter, Alyson K. Fletcher
This paper presents a novel algorithm, Multi-Layer Vector Approximate Message Passing (ML-VAMP), for inference in multi-layer stochastic neural networks.
no code implementations • 20 Mar 2019 • Rizwan Ahmad, Charles A. Bouman, Gregery T. Buzzard, Stanley Chan, Sizhuo Liu, Edward T. Reehorst, Philip Schniter
In this article, we describe the use of "plug-and-play" (PnP) algorithms for MRI image recovery.
1 code implementation • NeurIPS 2018 • Alyson K. Fletcher, Sundeep Rangan, Subrata Sarkar, Philip Schniter
Estimating a vector $\mathbf{x}$ from noisy linear measurements $\mathbf{Ax}+\mathbf{w}$ often requires use of prior knowledge or structural constraints on $\mathbf{x}$ for accurate reconstruction.
Information Theory
1 code implementation • 6 Jun 2018 • Edward T. Reehorst, Philip Schniter
To explain the RED algorithms, we propose a new framework called Score-Matching by Denoising (SMD), which aims to match a "score" (i.e., the gradient of a log-prior).
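A compact way to see the connection (a sketch, assuming an approximately MMSE denoiser trained at noise level sigma): by Tweedie's formula, the denoiser residual provides an estimate of the score.

    def score_from_denoiser(x, denoise, sigma):
        # Tweedie-style score estimate: for an (approximately) MMSE
        # denoiser trained at noise level sigma,
        #   grad log p(x) ~ (denoise(x) - x) / sigma**2,
        # so the denoiser residual points along the prior's score.
        return (denoise(x) - x) / sigma**2

RED-style iterations can then use the residual x - denoise(x) as a gradient-like regularization term alongside the data-fidelity gradient.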
1 code implementation • ICML 2018 • Christopher A. Metzler, Philip Schniter, Ashok Veeraraghavan, Richard G. Baraniuk
Phase retrieval algorithms have become an important component in many modern computational imaging systems.
no code implementations • NeurIPS 2017 • Alyson K. Fletcher, Mojtaba Sahraee-Ardakan, Philip Schniter, Sundeep Rangan
We show that the parameter estimates and mean squared error (MSE) of x in each iteration converge to deterministic limits that can be precisely predicted by a simple set of state evolution (SE) equations.
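A hedged illustration of what an SE recursion looks like, using plain AMP with soft thresholding as a concrete instance; the SE equations analyzed in the paper, for ML-VAMP, are more elaborate, and this is not the authors' code.

    import numpy as np

    def soft(r, tau):
        # Soft-thresholding denoiser.
        return np.sign(r) * np.maximum(np.abs(r) - tau, 0.0)

    def state_evolution(x0, delta, noise_var, iters=20, alpha=1.0, seed=0):
        # Scalar SE recursion for AMP with soft thresholding, evaluated
        # by Monte Carlo over an empirical signal distribution x0.
        # delta = m/n is the sampling ratio; the predicted per-iteration
        # MSE is computed without ever forming a matrix or measurement.
        rng = np.random.default_rng(seed)
        mse, trajectory = np.mean(x0**2), []
        for _ in range(iters):
            tau2 = noise_var + mse / delta             # effective noise var
            r = x0 + np.sqrt(tau2) * rng.standard_normal(x0.size)
            mse = np.mean((soft(r, alpha * np.sqrt(tau2)) - x0)**2)
            trajectory.append(mse)
        return trajectory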
no code implementations • 8 Mar 2017 • Maher Al-Shoukairi, Philip Schniter, Bhaskar D. Rao
In this paper, we present an algorithm for the sparse signal recovery problem that incorporates damped Gaussian generalized approximate message passing (GGAMP) into Expectation-Maximization (EM)-based sparse Bayesian learning (SBL).
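A skeleton of the EM-SBL outer loop, with the E-step performed by a direct linear solve; the paper replaces this step with damped GGAMP to reduce complexity. Names and defaults are illustrative.

    import numpy as np

    def em_sbl(y, A, iters=20, noise_var=1e-2):
        # EM-based sparse Bayesian learning. E-step: Gaussian posterior
        # of x under per-coefficient prior variances gamma (a direct
        # solve here; the paper's damped GGAMP inner loop replaces it).
        # M-step: update gamma from the posterior mean and variance.
        m, n = A.shape
        gamma, mu = np.ones(n), np.zeros(n)
        for _ in range(iters):
            S = (A * gamma) @ A.T + noise_var * np.eye(m)   # A diag(gamma) A^T
            mu = gamma * (A.T @ np.linalg.solve(S, y))      # posterior mean
            SinvA = np.linalg.solve(S, A)
            post_var = gamma - gamma**2 * np.einsum('ki,ki->i', A, SinvA)
            gamma = mu**2 + post_var                        # M-step update
        return mu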
1 code implementation • 4 Dec 2016 • Mark Borgerding, Philip Schniter, Sundeep Rangan
For i.i.d. signals, the linear transforms and scalar nonlinearities prescribed by the VAMP algorithm coincide with the values learned through back-propagation, leading to an intuitive interpretation of learned VAMP.
Information Theory
1 code implementation • 4 Nov 2016 • Philip Schniter, Sundeep Rangan, Alyson K. Fletcher
The denoising-based approximate message passing (D-AMP) methodology, recently proposed by Metzler, Maleki, and Baraniuk, allows one to plug sophisticated denoisers like BM3D into the AMP algorithm to achieve state-of-the-art compressive image recovery.
Information Theory
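A sketch of a D-AMP-style iteration, assuming a generic denoise(r, sigma) callable (e.g., a BM3D wrapper) and a Monte Carlo divergence estimate for the Onsager correction term; this is an illustrative reconstruction, not the authors' implementation.

    import numpy as np

    def d_amp(y, A, denoise, iters=30, seed=0):
        # D-AMP sketch: the denoiser is applied to the pseudo-data r,
        # and the Onsager correction in the residual update uses a
        # Monte Carlo estimate of the denoiser's divergence.
        rng = np.random.default_rng(seed)
        m, n = A.shape
        x, z = np.zeros(n), y.copy()
        for _ in range(iters):
            r = x + A.T @ z                           # pseudo-data
            sigma = np.linalg.norm(z) / np.sqrt(m)    # effective noise level
            x = denoise(r, sigma)
            eps = max(sigma, 1e-6) / 1000.0
            b = rng.standard_normal(n)
            div = b @ (denoise(r + eps * b, sigma) - x) / eps
            z = y - A @ x + (z / m) * div             # Onsager-corrected
        return x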
1 code implementation • 10 Oct 2016 • Sundeep Rangan, Philip Schniter, Alyson K. Fletcher
The approximate message passing (AMP) algorithm recently proposed by Donoho, Maleki, and Montanari is a computationally efficient iterative approach to standard linear regression (SLR) that has a remarkable property: for large i.i.d. sub-Gaussian matrices $\mathbf{A}$, its per-iteration behavior is rigorously characterized by a scalar state evolution whose fixed points, when unique, are Bayes optimal.
Information Theory
no code implementations • 20 Jul 2016 • Mark Borgerding, Philip Schniter
Deep learning has gained great popularity due to its widespread success on many inference problems.
no code implementations • 26 Feb 2016 • Alyson K. Fletcher, Philip Schniter
Like the AMP proposed by Donoho, Maleki, and Montanari in 2009, VAMP is characterized by a rigorous state evolution (SE) that holds for certain classes of large random matrices and that matches the replica prediction of optimality.
no code implementations • 25 Feb 2016 • Alyson K. Fletcher, Mojtaba Sahraee-Ardakan, Sundeep Rangan, Philip Schniter
Approximations of loopy belief propagation, including expectation propagation and approximate message passing, have attracted considerable attention for probabilistic inference problems.
no code implementations • 15 Sep 2015 • Evan Byrne, Philip Schniter
For the problem of multi-class linear classification and feature selection, we propose approximate message passing approaches to sparse multinomial logistic regression (MLR).
no code implementations • 5 Jan 2014 • Justin Ziniel, Philip Schniter, Per Sederberg
For the problem of binary linear classification and feature selection, we propose algorithmic approaches to classifier design based on the generalized approximate message passing (GAMP) algorithm, recently proposed in the context of compressive sensing.
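For context, a plain proximal-gradient (ISTA) stand-in that targets the same l1-penalized logistic objective; the paper's GAMP-based design is a different algorithm and is not reproduced here. All names are illustrative.

    import numpy as np

    def sparse_logistic_ista(X, y, lam=0.1, step=0.1, iters=500):
        # Sparse binary classifier via proximal gradient: minimize the
        # average logistic loss plus an l1 penalty on the weights.
        # X: n-by-d features; y: labels in {-1, +1}.
        w = np.zeros(X.shape[1])
        for _ in range(iters):
            margins = y * (X @ w)
            grad = -(X.T @ (y / (1.0 + np.exp(margins)))) / len(y)
            w = w - step * grad
            w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)  # prox
        return w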
no code implementations • 7 Jun 2013 • Marcel Nassar, Philip Schniter, Brian L. Evans
We propose a novel receiver for orthogonal frequency division multiplexing (OFDM) transmissions in impulsive noise environments.