Search Results for author: Philip Schniter

Found 27 papers, 11 papers with code

Surface Coil Intensity Correction for MRI

1 code implementation · 1 Dec 2023 · Xuan Lei, Philip Schniter, Chong Chen, Rizwan Ahmad

Modern MRI scanners utilize one or more arrays of small receive-only coils to collect k-space data.

A Conditional Normalizing Flow for Accelerated Multi-Coil MR Imaging

1 code implementation · 2 Jun 2023 · Jeffrey Wen, Rizwan Ahmad, Philip Schniter

Accelerated magnetic resonance (MR) imaging attempts to reduce acquisition time by collecting data below the Nyquist rate.
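
For context, the accelerated multi-coil acquisition is commonly written as (notation assumed here, not taken from the paper)

    $\mathbf{y}_c = \mathbf{M}\,\mathbf{F}\,\mathbf{S}_c\,\mathbf{x} + \mathbf{w}_c, \qquad c = 1, \dots, C,$

where $\mathbf{x}$ is the image, $\mathbf{S}_c$ is the $c$-th coil sensitivity map, $\mathbf{F}$ is the Fourier transform, $\mathbf{M}$ is the below-Nyquist sampling mask, and $\mathbf{w}_c$ is measurement noise.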

MRI Recovery with Self-Calibrated Denoisers without Fully-Sampled Data

2 code implementations · 25 Apr 2023 · Sizhuo Liu, Muhammad Shafique, Philip Schniter, Rizwan Ahmad

Unlike traditional PnP approaches, which use generic denoisers or train application-specific denoisers on high-quality images or image patches, ReSiDe trains the denoiser directly on the image or images being reconstructed from the undersampled data.

Denoising · MRI Reconstruction

Denoising Generalized Expectation-Consistent Approximation for MR Image Recovery

2 code implementations · 9 Jun 2022 · Saurav K. Shastri, Rizwan Ahmad, Christopher A. Metzler, Philip Schniter

To solve inverse problems, plug-and-play (PnP) methods replace the proximal step in a convex optimization algorithm with a call to an application-specific denoiser, often implemented using a deep neural network (DNN).

Denoising
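
A minimal sketch of the generic PnP substitution the snippet describes, assuming $\mathbf{A}$ is a NumPy array and the denoiser is a user-supplied callable (the function name and defaults are illustrative, not the paper's code):

    import numpy as np

    def pnp_pgd(y, A, denoiser, step=1.0, iters=50):
        """Plug-and-play proximal gradient: the proximal step of an
        optimization algorithm (here, ISTA) is replaced by a denoiser call."""
        x = A.T @ y                          # crude initialization
        for _ in range(iters):
            grad = A.T @ (A @ x - y)         # gradient of 0.5 * ||y - A x||^2
            x = denoiser(x - step * grad)    # denoiser stands in for the prox
        return x

    # Example: an identity "denoiser" reduces this to plain gradient descent.
    # x_hat = pnp_pgd(y, A, denoiser=lambda v: v)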

Matching Plug-and-Play Algorithms to the Denoiser

no code implementations · NeurIPS Workshop on Deep Learning and Inverse Problems 2021 · Saurav K. Shastri, Rizwan Ahmad, Christopher Metzler, Philip Schniter

To solve inverse problems, plug-and-play (PnP) methods have been developed that replace the proximal step in a convex optimization algorithm with a call to an application-specific denoiser, often implemented using a deep neural network (DNN).

MRI Recovery with A Self-calibrated Denoiser

no code implementations · 18 Oct 2021 · Sizhuo Liu, Philip Schniter, Rizwan Ahmad

The proposed method, called recovery with a self-calibrated denoiser (ReSiDe), trains the denoiser from the patches of the image being recovered.

Denoising · MRI Reconstruction
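
A sketch of the self-calibration loop the snippet describes, with hypothetical helpers train_denoiser and patchify standing in for the authors' patch-based training; an illustration, not the released implementation:

    import numpy as np

    def reside(y, A, train_denoiser, patchify, step=1.0,
               outer_iters=10, inner_iters=5):
        """ReSiDe-style loop: the denoiser is (re)trained on patches of the
        current image estimate rather than on an external training set.
        `train_denoiser(patches) -> callable` and `patchify(x) -> patches`
        are user-supplied stand-ins."""
        x = A.T @ y
        for _ in range(outer_iters):
            D = train_denoiser(patchify(x))  # self-calibrate on current estimate
            for _ in range(inner_iters):
                x = D(x - step * (A.T @ (A @ x - y)))  # data consistency + denoise
        return x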

Deep Neural Networks for Radar Waveform Classification

no code implementations · 15 Feb 2021 · Michael Wharton, Anne M. Pavy, Philip Schniter

We consider the problem of classifying radar pulses given raw I/Q waveforms in the presence of noise and absence of synchronization.

Classification · General Classification
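
As a generic illustration of classifying raw I/Q waveforms with a DNN, a minimal PyTorch 1-D CNN over the two I/Q channels (the paper's actual architecture may differ):

    import torch.nn as nn

    def iq_classifier(num_classes):
        """1-D CNN over raw I/Q samples (2 input channels: in-phase and
        quadrature). A generic sketch, not the paper's architecture."""
        return nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # global pooling adds shift tolerance,
            nn.Flatten(),             # helpful when pulses are unsynchronized
            nn.Linear(64, num_classes),
        )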

Matrix Inference and Estimation in Multi-Layer Models

1 code implementation · NeurIPS 2020 · Parthe Pandit, Mojtaba Sahraee-Ardakan, Sundeep Rangan, Philip Schniter, Alyson K. Fletcher

In the two-layer neural-network learning problem, this scaling corresponds to the case where the number of input features, as well as training samples, grow to infinity but the number of hidden nodes stays fixed.

Imputation

Sketching Datasets for Large-Scale Learning (long version)

no code implementations · 4 Aug 2020 · Rémi Gribonval, Antoine Chatalic, Nicolas Keriven, Vincent Schellekens, Laurent Jacques, Philip Schniter

This article considers "compressive learning," an approach to large-scale machine learning where datasets are massively compressed before learning (e.g., clustering, classification, or regression) is performed.

BIG-bench Machine Learning · Clustering +1
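
The core idea can be illustrated with an averaged random-Fourier-feature sketch of the whole dataset; the function name and the Gaussian frequency draw are illustrative assumptions:

    import numpy as np

    def dataset_sketch(X, m, seed=0):
        """Compressive learning: compress an (n, d) dataset into one length-m
        complex vector of averaged random Fourier features; learning (e.g.,
        clustering) then operates on the sketch alone."""
        rng = np.random.default_rng(seed)
        Omega = rng.standard_normal((X.shape[1], m))  # random frequency matrix
        return np.exp(1j * (X @ Omega)).mean(axis=0)  # empirical characteristic-function samples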

Free-breathing Cardiovascular MRI Using a Plug-and-Play Method with Learned Denoiser

no code implementations · 8 Feb 2020 · Sizhuo Liu, Edward Reehorst, Philip Schniter, Rizwan Ahmad

We compare the reconstruction performance of PnP-DL to that of compressed sensing (CS) using eight breath-held and ten real-time (RT) free-breathing cardiac cine datasets.

Denoising

Inference in Multi-Layer Networks with Matrix-Valued Unknowns

no code implementations · 26 Jan 2020 · Parthe Pandit, Mojtaba Sahraee-Ardakan, Sundeep Rangan, Philip Schniter, Alyson K. Fletcher

We consider the problem of inferring the input and hidden variables of a stochastic multi-layer neural network from an observation of the output.

Inference with Deep Generative Priors in High Dimensions

no code implementations · 8 Nov 2019 · Parthe Pandit, Mojtaba Sahraee-Ardakan, Sundeep Rangan, Philip Schniter, Alyson K. Fletcher

This paper presents a novel algorithm, Multi-Layer Vector Approximate Message Passing (ML-VAMP), for inference in multi-layer stochastic neural networks.

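For context, the multi-layer stochastic model in question can be written (notation assumed here) as

    $\mathbf{z}_\ell = \phi_\ell(\mathbf{W}_\ell \mathbf{z}_{\ell-1} + \mathbf{b}_\ell + \boldsymbol{\xi}_\ell), \qquad \ell = 1, \dots, L,$

where the $\phi_\ell$ are known activations, $\boldsymbol{\xi}_\ell$ is noise, only the output $\mathbf{y} = \mathbf{z}_L$ is observed, and ML-VAMP infers the input $\mathbf{z}_0$ and hidden variables $\mathbf{z}_1, \dots, \mathbf{z}_{L-1}$.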

Plug-in Estimation in High-Dimensional Linear Inverse Problems: A Rigorous Analysis

1 code implementation · NeurIPS 2018 · Alyson K. Fletcher, Sundeep Rangan, Subrata Sarkar, Philip Schniter

Estimating a vector $\mathbf{x}$ from noisy linear measurements $\mathbf{Ax}+\mathbf{w}$ often requires use of prior knowledge or structural constraints on $\mathbf{x}$ for accurate reconstruction.

Information Theory
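
A canonical way to encode such prior knowledge, in the snippet's notation, is the regularized estimator

    $\hat{\mathbf{x}} = \arg\min_{\mathbf{x}} \tfrac{1}{2}\|\mathbf{y} - \mathbf{Ax}\|_2^2 + \lambda\,\rho(\mathbf{x}),$

where $\rho$ captures the structure, e.g., $\rho(\mathbf{x}) = \|\mathbf{x}\|_1$ for sparsity; the "plug-in" viewpoint replaces the corresponding proximal/denoising step with a general estimator.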

Regularization by Denoising: Clarifications and New Interpretations

1 code implementation · 6 Jun 2018 · Edward T. Reehorst, Philip Schniter

To explain the RED algorithms, we propose a new framework called Score-Matching by Denoising (SMD), which aims to match a "score" (i.e., the gradient of a log-prior).

Density Estimation · Image Denoising
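
A minimal sketch of a RED-style iteration, in which the regularizer's gradient is taken to be the denoiser residual $\lambda(\mathbf{x} - D(\mathbf{x}))$; the function name and step-size choices are illustrative:

    import numpy as np

    def red_gradient_descent(y, A, denoiser, lam=0.1, step=0.5, iters=100):
        """RED-style fixed-point/gradient iteration with a user-supplied
        denoiser D; the data-fidelity term is 0.5 * ||y - A x||^2."""
        x = A.T @ y
        for _ in range(iters):
            grad = A.T @ (A @ x - y) + lam * (x - denoiser(x))
            x = x - step * grad
        return x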

Rigorous Dynamics and Consistent Estimation in Arbitrarily Conditioned Linear Systems

no code implementations · NeurIPS 2017 · Alyson K. Fletcher, Mojtaba Sahraee-Ardakan, Philip Schniter, Sundeep Rangan

We show that the parameter estimates and mean squared error (MSE) of x in each iteration converge to deterministic limits that can be precisely predicted by a simple set of state evolution (SE) equations.

A GAMP Based Low Complexity Sparse Bayesian Learning Algorithm

no code implementations · 8 Mar 2017 · Maher Al-Shoukairi, Philip Schniter, Bhaskar D. Rao

In this paper, we present an algorithm for the sparse signal recovery problem that incorporates damped Gaussian generalized approximate message passing (GGAMP) into Expectation-Maximization (EM)-based sparse Bayesian learning (SBL).
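
For context, in standard EM-based SBL each coefficient has prior $x_i \sim \mathcal{N}(0, \gamma_i)$; the E-step computes posterior means $\mu_i$ and variances $\tau_i$ of the $x_i$ (here supplied by damped GGAMP), and the M-step updates the hyperparameters as

    $\gamma_i \leftarrow \mu_i^2 + \tau_i.$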

AMP-Inspired Deep Networks for Sparse Linear Inverse Problems

1 code implementation · 4 Dec 2016 · Mark Borgerding, Philip Schniter, Sundeep Rangan

For i.i.d. signals, the linear transforms and scalar nonlinearities prescribed by the VAMP algorithm coincide with the values learned through back-propagation, leading to an intuitive interpretation of learned VAMP.

Information Theory
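
Learned VAMP unrolls algorithm iterations into network layers. As a simplified illustration of the unrolling idea (omitting VAMP's Onsager-style corrections), a LISTA-style sketch in PyTorch; class and parameter names are illustrative:

    import torch
    import torch.nn as nn

    class UnfoldedISTA(nn.Module):
        """Each 'iteration' of an ISTA/AMP-type algorithm becomes a layer
        whose linear transform and shrinkage threshold are learned by
        back-propagation."""
        def __init__(self, m, n, layers=8):
            super().__init__()
            self.B = nn.Linear(m, n, bias=False)  # learned A^T-like map
            self.S = nn.ModuleList([nn.Linear(n, n, bias=False)
                                    for _ in range(layers)])
            self.theta = nn.Parameter(torch.full((layers,), 0.1))  # thresholds
        def forward(self, y):
            x = torch.zeros(y.shape[0], self.B.out_features, device=y.device)
            for k, S in enumerate(self.S):
                z = S(x) + self.B(y)                                 # linear step
                x = torch.sign(z) * torch.relu(z.abs() - self.theta[k])  # soft threshold
            return x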

Denoising based Vector Approximate Message Passing

1 code implementation · 4 Nov 2016 · Philip Schniter, Sundeep Rangan, Alyson K. Fletcher

The denoising-based approximate message passing (D-AMP) methodology, recently proposed by Metzler, Maleki, and Baraniuk, allows one to plug in sophisticated denoisers like BM3D into the AMP algorithm to achieve state-of-the-art compressive image recovery.

Information Theory
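
A minimal D-AMP sketch, with the Onsager correction computed via the Monte Carlo divergence estimate of Metzler et al.; the denoiser is a user-supplied callable and the parameter choices are illustrative:

    import numpy as np

    def damp(y, A, denoiser, iters=30, eps=1e-3, rng=np.random.default_rng(0)):
        """D-AMP: plug a generic denoiser (e.g., BM3D wrapped as a callable)
        into the AMP iteration."""
        m, n = A.shape
        x, z = np.zeros(n), y.copy()
        for _ in range(iters):
            r = x + A.T @ z                               # pseudo-data
            x = denoiser(r)
            b = rng.standard_normal(n)                    # divergence probe
            div = b @ (denoiser(r + eps * b) - x) / eps   # Monte Carlo divergence
            z = y - A @ x + (z / m) * div                 # Onsager-corrected residual
        return x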

Vector Approximate Message Passing

1 code implementation · 10 Oct 2016 · Sundeep Rangan, Philip Schniter, Alyson K. Fletcher

The approximate message passing (AMP) algorithm recently proposed by Donoho, Maleki, and Montanari is a computationally efficient iterative approach to standard linear regression (SLR) that has a remarkable property: for large i.i.d. sub-Gaussian matrices $\mathbf{A}$, its per-iteration behavior is rigorously characterized by a scalar state evolution whose fixed points, when unique, are Bayes optimal.

Information Theory
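
For context, the classical AMP state evolution takes the form (with denoiser $\eta_t$, sampling ratio $\delta = m/n$, noise variance $\sigma_w^2$, and $Z \sim \mathcal{N}(0,1)$; VAMP obeys a scalar recursion of the same flavor):

    $\tau_{t+1}^2 = \sigma_w^2 + \frac{1}{\delta}\,\mathbb{E}\big[\big(\eta_t(X + \tau_t Z) - X\big)^2\big],$

so the per-iteration MSE is predicted by a one-dimensional recursion.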

Onsager-corrected deep learning for sparse linear inverse problems

no code implementations · 20 Jul 2016 · Mark Borgerding, Philip Schniter

Deep learning has gained great popularity due to its widespread success on many inference problems.

Compressive Sensing

Learning and Free Energies for Vector Approximate Message Passing

no code implementations · 26 Feb 2016 · Alyson K. Fletcher, Philip Schniter

Like the AMP proposed by Donoho, Maleki, and Montanari in 2009, VAMP is characterized by a rigorous state evolution (SE) that holds for certain large random matrices and that matches the replica prediction of optimality.

Expectation Consistent Approximate Inference: Generalizations and Convergence

no code implementations · 25 Feb 2016 · Alyson K. Fletcher, Mojtaba Sahraee-Ardakan, Sundeep Rangan, Philip Schniter

Approximations of loopy belief propagation, including expectation propagation and approximate message passing, have attracted considerable attention for probabilistic inference problems.

Sparse Multinomial Logistic Regression via Approximate Message Passing

no code implementations · 15 Sep 2015 · Evan Byrne, Philip Schniter

For the problem of multi-class linear classification and feature selection, we propose approximate message passing approaches to sparse multinomial logistic regression (MLR).

Feature Selection · General Classification +1
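
In assumed notation, the underlying model is the familiar multinomial logistic (softmax) likelihood over $K$ classes, with feature vector $\mathbf{a}$ and per-class weight vectors $\mathbf{x}_k$ that are encouraged to be sparse:

    $p(y = k \mid \mathbf{a}) = \frac{\exp(\mathbf{a}^T \mathbf{x}_k)}{\sum_{j=1}^{K} \exp(\mathbf{a}^T \mathbf{x}_j)}.$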

Binary Linear Classification and Feature Selection via Generalized Approximate Message Passing

no code implementations · 5 Jan 2014 · Justin Ziniel, Philip Schniter, Per Sederberg

For the problem of binary linear classification and feature selection, we propose algorithmic approaches to classifier design based on the generalized approximate message passing (GAMP) algorithm, recently proposed in the context of compressive sensing.

Classification · Compressive Sensing +2

A Factor Graph Approach to Joint OFDM Channel Estimation and Decoding in Impulsive Noise Environments

no code implementations · 7 Jun 2013 · Marcel Nassar, Philip Schniter, Brian L. Evans

We propose a novel receiver for orthogonal frequency division multiplexing (OFDM) transmissions in impulsive noise environments.
