no code implementations • 14 May 2023 • Evan Becker, Parthe Pandit, Sundeep Rangan, Alyson K. Fletcher
We study the local dynamics of gradient descent-ascent (GDA) for training a GAN with a kernel-based discriminator.
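As a hedged illustration of why the local dynamics matter, the toy min-max problem below (not the paper's kernel-GAN setting) shows plain simultaneous GDA spiraling away from a saddle point:

```python
import numpy as np

# Minimal sketch (not the paper's kernel setting): simultaneous
# gradient descent-ascent on f(x, y) = x * y, the classic bilinear
# min-max toy problem. Plain GDA rotates and drifts away from the
# saddle at the origin; this kind of local instability is what
# analyses of GDA dynamics study.
def gda(steps=200, lr=0.1):
    x, y = 1.0, 1.0
    dist = []
    for _ in range(steps):
        gx, gy = y, x                     # grad_x f = y, grad_y f = x
        x, y = x - lr * gx, y + lr * gy   # descend in x, ascend in y
        dist.append(np.hypot(x, y))
    return dist

dist = gda()
print(dist[0], dist[-1])  # distance from the saddle grows under plain GDA
```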
no code implementations • 21 Aug 2022 • Evan Becker, Parthe Pandit, Sundeep Rangan, Alyson K. Fletcher
Generative Adversarial Networks (GANs) are a widely-used tool for generative modeling of complex data.
no code implementations • 20 Jan 2022 • Mojtaba Sahraee-Ardakan, Melikasadat Emami, Parthe Pandit, Sundeep Rangan, Alyson K. Fletcher
Empirical observations of high-dimensional phenomena, such as double descent, have attracted considerable interest in understanding classical techniques such as kernel methods and their implications for explaining the generalization properties of neural networks.
no code implementations • 8 Mar 2021 • Mojtaba Sahraee-Ardakan, Tung Mai, Anup Rao, Ryan Rossi, Sundeep Rangan, Alyson K. Fletcher
We show the double descent phenomenon in our experiments for convolutional models and show that our theoretical results match the experiments.
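The paper's experiments use convolutional models; as a rough stand-in, the minimal random-features regression below (an illustrative setup, not the paper's) typically exhibits the same double-descent shape, with test error peaking near the interpolation threshold p ≈ n:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 40                        # training samples, input dimension
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.5 * rng.standard_normal(n)
X_test = rng.standard_normal((1000, d))
y_test = X_test @ w_true              # noiseless test targets

for p in [20, 50, 100, 200, 400]:     # number of random features
    W = rng.standard_normal((d, p)) / np.sqrt(d)
    phi, phi_test = np.tanh(X @ W), np.tanh(X_test @ W)
    beta = np.linalg.pinv(phi) @ y    # minimum-norm least squares
    err = np.mean((phi_test @ beta - y_test) ** 2)
    print(f"p={p:4d}  test MSE={err:.3f}")  # error typically peaks near p = n
```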
no code implementations • 19 Jan 2021 • Melikasadat Emami, Mojtaba Sahraee-Ardakan, Parthe Pandit, Sundeep Rangan, Alyson K. Fletcher
The degree of this bias depends on the variance of the transition kernel matrix at initialization and is related to the classic exploding and vanishing gradients problem.
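A hedged numerical illustration of the mechanism: the norm of a product of random transition matrices, a proxy for a backpropagated gradient in a linear recurrent model, grows or shrinks exponentially with the initialization scale:

```python
import numpy as np

# With entries of scale sigma/sqrt(d), each multiplication scales the
# vector norm by roughly sigma in expectation, so sigma < 1 vanishes
# and sigma > 1 explodes over long horizons.
rng = np.random.default_rng(0)
d, T = 64, 50
for sigma in [0.8, 1.0, 1.2]:
    v = np.ones(d) / np.sqrt(d)
    for _ in range(T):
        W = sigma * rng.standard_normal((d, d)) / np.sqrt(d)
        v = W @ v
    print(f"sigma={sigma}: |grad proxy| after {T} steps = {np.linalg.norm(v):.2e}")
```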
1 code implementation • NeurIPS 2020 • Parthe Pandit, Mojtaba Sahraee Ardakan, Sundeep Rangan, Philip Schniter, Alyson K. Fletcher
In the two-layer neural-network learning problem, this scaling corresponds to the case where the number of input features and training samples grows to infinity but the number of hidden nodes stays fixed.
no code implementations • 6 May 2020 • Melikasadat Emami, Mojtaba Sahraee-Ardakan, Parthe Pandit, Alyson K. Fletcher, Sundeep Rangan, Michael Trumpis, Brinnae Bent, Chia-Han Chiang, Jonathan Viventi
This decoding problem is particularly challenging due to the complexity of neural responses in the auditory cortex and the presence of confounding signals in awake animals.
3 code implementations • ICML 2020 • Melikasadat Emami, Mojtaba Sahraee-Ardakan, Parthe Pandit, Sundeep Rangan, Alyson K. Fletcher
We provide a general framework to characterize the asymptotic generalization error for single-layer neural networks (i.e., generalized linear models) with arbitrary nonlinearities, making it applicable to regression as well as classification problems.
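For concreteness, a minimal instance of the model class: logistic regression (a GLM with sigmoid nonlinearity) fit by gradient descent. The data model and step size here are illustrative assumptions, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 500, 100
w_star = rng.standard_normal(d) / np.sqrt(d)
X = rng.standard_normal((n, d))
y = (rng.random(n) < 1 / (1 + np.exp(-X @ w_star))).astype(float)

w = np.zeros(d)
for _ in range(500):                 # plain gradient descent on logistic loss
    p = 1 / (1 + np.exp(-X @ w))
    w -= 0.5 * X.T @ (p - y) / n

X_te = rng.standard_normal((2000, d))
y_te = (rng.random(2000) < 1 / (1 + np.exp(-X_te @ w_star))).astype(float)
print("test error:", np.mean(((X_te @ w > 0).astype(float)) != y_te))
```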
no code implementations • 26 Jan 2020 • Parthe Pandit, Mojtaba Sahraee-Ardakan, Sundeep Rangan, Philip Schniter, Alyson K. Fletcher
We consider the problem of inferring the input and hidden variables of a stochastic multi-layer neural network from an observation of the output.
1 code implementation • NeurIPS 2019 • Melikasadat Emami, Mojtaba Sahraee Ardakan, Sundeep Rangan, Alyson K. Fletcher
Unitary recurrent neural networks (URNNs) have been proposed as a method to overcome the vanishing and exploding gradient problem in modeling data with long-term dependencies.
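A minimal sketch of the underlying idea: an orthogonal (unitary, in the real case) transition matrix preserves hidden-state norm exactly, while a generic random matrix lets it explode or vanish over long horizons:

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 64, 200
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))    # orthogonal transition
W = 1.1 * rng.standard_normal((d, d)) / np.sqrt(d)  # generic transition

h_q = h_w = np.ones(d) / np.sqrt(d)
for _ in range(T):
    h_q, h_w = Q @ h_q, W @ h_w   # linear recurrences, no input, for clarity
print("orthogonal:", np.linalg.norm(h_q))  # stays exactly 1
print("generic:   ", np.linalg.norm(h_w))  # explodes here; scale < 1 would vanish
```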
no code implementations • 8 Nov 2019 • Parthe Pandit, Mojtaba Sahraee-Ardakan, Sundeep Rangan, Philip Schniter, Alyson K. Fletcher
This paper presents a novel algorithm, Multi-Layer Vector Approximate Message Passing (ML-VAMP), for inference in multi-layer stochastic neural networks.
no code implementations • 19 Mar 2019 • Parthe Pandit, Mojtaba Sahraee-Ardakan, Arash A. Amini, Sundeep Rangan, Alyson K. Fletcher
We derive precise upper bounds on the mean-squared estimation error in terms of the number of samples, dimensions of the process, the lag $p$ and other key statistical properties of the model.
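A hedged sketch of the estimation setup (dimensions and coefficients are illustrative): simulate a stable VAR(p) process and fit its coefficient matrices by least squares on stacked lagged regressors:

```python
import numpy as np

rng = np.random.default_rng(0)
d, p, T = 5, 2, 2000
A_true = [0.4 * np.eye(d) + 0.05 * rng.standard_normal((d, d)),
          0.1 * np.eye(d) + 0.05 * rng.standard_normal((d, d))]

x = np.zeros((T, d))
for t in range(p, T):   # x_t = A_1 x_{t-1} + A_2 x_{t-2} + noise
    x[t] = sum(A_true[k] @ x[t - 1 - k] for k in range(p)) \
           + 0.1 * rng.standard_normal(d)

Z = np.hstack([x[p - 1 - k : T - 1 - k] for k in range(p)])  # lagged regressors
A_hat = np.linalg.lstsq(Z, x[p:], rcond=None)[0].T           # shape (d, p*d)
print("coefficient MSE:", np.mean((A_hat - np.hstack(A_true)) ** 2))
```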
no code implementations • 1 Mar 2019 • Parthe Pandit, Mojtaba Sahraee, Sundeep Rangan, Alyson K. Fletcher
Deep generative priors are a powerful tool for reconstruction problems with complex data such as images and text.
1 code implementation • NeurIPS 2018 • Alyson K. Fletcher, Sundeep Rangan, Subrata Sarkar, Philip Schniter
Estimating a vector $\mathbf{x}$ from noisy linear measurements $\mathbf{Ax}+\mathbf{w}$ often requires use of prior knowledge or structural constraints on $\mathbf{x}$ for accurate reconstruction.
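The simplest concrete instance of this setup is ridge-regularized least squares; the sketch below is a hedged baseline under a generic quadratic prior, not the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 80, 200
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, 10, replace=False)] = 1.0   # a structured (sparse) signal
y = A @ x_true + 0.05 * rng.standard_normal(m)

lam = 0.1
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)  # ridge estimate
print("NMSE:", np.sum((x_hat - x_true) ** 2) / np.sum(x_true ** 2))
```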
Information Theory
no code implementations • 20 Jun 2017 • Alyson K. Fletcher, Sundeep Rangan
In inverse problems that use these networks as generative priors on data, one must often perform inference of the inputs of the networks from the outputs.
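In the simplest (linear) case this inference reduces to least squares through the generative map; a hedged sketch under that assumption (deep nonlinear networks require iterative schemes such as ML-VAMP):

```python
import numpy as np

# The signal lives on a low-dimensional range x = G z for a known linear
# map G; we infer the latent input z from compressed observations
# y = A G z + w by least squares.
rng = np.random.default_rng(0)
n, k, m = 200, 10, 40           # ambient dim, latent dim, measurements
G = rng.standard_normal((n, k)) / np.sqrt(n)   # known generative map
A = rng.standard_normal((m, n)) / np.sqrt(m)
z_true = rng.standard_normal(k)
y = A @ (G @ z_true) + 0.01 * rng.standard_normal(m)

z_hat = np.linalg.lstsq(A @ G, y, rcond=None)[0]
print("latent recovery error:", np.linalg.norm(z_hat - z_true))
```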
no code implementations • NeurIPS 2017 • Alyson K. Fletcher, Mojtaba Sahraee-Ardakan, Philip Schniter, Sundeep Rangan
We show that the parameter estimates and mean squared error (MSE) of $\mathbf{x}$ in each iteration converge to deterministic limits that can be precisely predicted by a simple set of state evolution (SE) equations.
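A hedged sketch of what such an SE recursion looks like for AMP with a soft-threshold denoiser, with the expectation estimated by Monte Carlo (the threshold rule and parameters here are illustrative):

```python
import numpy as np

# Scalar state evolution: the effective noise tau_t obeys
#   tau_{t+1}^2 = sigma_w^2 + (1/delta) * E[(eta(X + tau_t Z) - X)^2]
# where eta is the denoiser, delta = m/n, and X, Z are scalar random
# variables (signal prior and standard Gaussian).
rng = np.random.default_rng(0)
delta, rho, sigma_w = 0.5, 0.1, 0.1   # m/n, sparsity, noise level
soft = lambda u, t: np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

X = (rng.random(100_000) < rho) * rng.standard_normal(100_000)  # sparse signal
Z = rng.standard_normal(100_000)
tau2 = sigma_w**2 + np.mean(X**2) / delta   # effective noise at iteration 0
for t in range(10):
    mse = np.mean((soft(X + np.sqrt(tau2) * Z, 1.5 * np.sqrt(tau2)) - X) ** 2)
    tau2 = sigma_w**2 + mse / delta
    print(f"iter {t}: predicted MSE = {mse:.4f}")
```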
1 code implementation • 10 Oct 2016 • Sundeep Rangan, Philip Schniter, Alyson K. Fletcher
The approximate message passing (AMP) algorithm recently proposed by Donoho, Maleki, and Montanari is a computationally efficient iterative approach to SLR that has a remarkable property: for large i.i.d. sub-Gaussian matrices $\mathbf{A}$, its per-iteration behavior is rigorously characterized by a scalar state-evolution whose fixed points, when unique, are Bayes optimal.
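A minimal sketch of AMP for sparse regression, assuming a soft-threshold denoiser and an illustrative threshold rule; the Onsager correction in the residual update is what makes the scalar state evolution hold:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k = 250, 500, 25
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x_true + 0.01 * rng.standard_normal(m)

soft = lambda u, t: np.sign(u) * np.maximum(np.abs(u) - t, 0.0)
x, z = np.zeros(n), y.copy()
for _ in range(30):
    tau = 1.5 * np.sqrt(np.mean(z ** 2))           # threshold from residual energy
    x = soft(x + A.T @ z, tau)                     # denoising step
    z = y - A @ x + (z / m) * np.count_nonzero(x)  # residual + Onsager term
print("NMSE:", np.sum((x - x_true) ** 2) / np.sum(x_true ** 2))
```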
Information Theory
no code implementations • 26 Feb 2016 • Alyson K. Fletcher, Philip Schniter
Like the AMP proposed by Donoho, Maleki, and Montanari in 2009, VAMP is characterized by a rigorous state evolution (SE) that holds for certain large random matrix ensembles and that matches the replica prediction of optimality.
no code implementations • 25 Feb 2016 • Alyson K. Fletcher, Mojtaba Sahraee-Ardakan, Sundeep Rangan, Philip Schniter
Approximations of loopy belief propagation, including expectation propagation and approximate message passing, have attracted considerable attention for probabilistic inference problems.
no code implementations • NeurIPS 2014 • Alyson K. Fletcher, Sundeep Rangan
In this work, we propose a computationally fast method for the state estimation based on a hybrid of loopy belief propagation and approximate message passing (AMP).
no code implementations • NeurIPS 2012 • Ulugbek Kamilov, Sundeep Rangan, Michael Unser, Alyson K. Fletcher
We present a method, called adaptive generalized approximate message passing (Adaptive GAMP), that enables joint learning of the statistics of the prior and measurement channel along with estimation of the unknown vector $\mathbf{x}$.
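A greatly simplified, hedged analogue of the joint-learning idea (not Adaptive GAMP itself): alternate between estimating the vector and re-estimating the unknown measurement-noise variance from the residual:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 300, 100
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = rng.standard_normal(n)
sigma_true = 0.5
y = A @ x_true + sigma_true * rng.standard_normal(m)

sigma2 = 1.0   # initial guess of the unknown noise variance
for _ in range(10):
    # MAP estimate of x under prior x ~ N(0, I) and current noise variance
    x_hat = np.linalg.solve(A.T @ A + sigma2 * np.eye(n), A.T @ y)
    # naive plug-in variance re-estimate (full EM would add a
    # posterior-covariance correction, so this is biased downward)
    sigma2 = np.mean((y - A @ x_hat) ** 2)
print(f"estimated sigma^2 = {sigma2:.3f} (true {sigma_true**2:.3f})")
```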
no code implementations • NeurIPS 2011 • Alyson K. Fletcher, Sundeep Rangan, Lav R. Varshney, Aniruddha Bhargava
Many functional descriptions of spiking neurons assume a cascade structure where inputs are passed through an initial linear filtering stage that produces a low-dimensional signal that drives subsequent nonlinear stages.
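A minimal simulation of such a cascade, in the style of a linear-nonlinear-Poisson (LNP) neuron; the softplus nonlinearity and time step are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 1000, 20
stimulus = rng.standard_normal((T, d))
k_filt = rng.standard_normal(d) / np.sqrt(d)   # linear receptive field

drive = stimulus @ k_filt                      # linear filtering stage
rate = np.log1p(np.exp(drive))                 # softplus nonlinearity (rate >= 0)
spikes = rng.poisson(rate * 0.1)               # Poisson spiking, time step 0.1
print("mean firing rate:", spikes.mean() / 0.1)
```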
no code implementations • NeurIPS 2009 • Sundeep Rangan, Alyson K. Fletcher
Orthogonal matching pursuit (OMP) is a widely used greedy algorithm for recovering sparse vectors from linear measurements.
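For reference, a compact OMP sketch: greedily select the column most correlated with the current residual, then re-fit by least squares on the chosen support:

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = A x."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef   # orthogonal to selected columns
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
m, n, k = 60, 200, 5
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
x_hat = omp(A, A @ x_true, k)   # noiseless measurements for illustration
print("recovery error:", np.linalg.norm(x_hat - x_true))
```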
no code implementations • NeurIPS 2009 • Sundeep Rangan, Vivek Goyal, Alyson K. Fletcher
It is shown that with large random linear measurements and Gaussian noise, the asymptotic behavior of the MAP estimate of an $n$-dimensional vector "decouples" as $n$ scalar MAP estimators.
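Schematically, the decoupling statement says each coordinate behaves like a scalar MAP estimator in effective Gaussian noise; a hedged paraphrase, with constants and the fixed-point conditions that determine the effective parameters omitted:

```latex
% Schematic form only: f is the prior penalty defining the vector MAP
% problem, and sigma_eff is an effective noise level determined by
% fixed-point (replica) equations, not reproduced here.
\hat{x}_j \approx \arg\min_{x} \left[ \frac{(q_j - x)^2}{2\sigma_{\mathrm{eff}}^2} + f(x) \right],
\qquad q_j = x_j + \sigma_{\mathrm{eff}}\, v_j, \quad v_j \sim \mathcal{N}(0,1).
```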
no code implementations • NeurIPS 2008 • Sundeep Rangan, Vivek Goyal, Alyson K. Fletcher
Recent research suggests that neural systems employ sparse coding.