Search Results for author: Alyson K. Fletcher

Found 25 papers, 5 papers with code

Instability and Local Minima in GAN Training with Kernel Discriminators

no code implementations21 Aug 2022 Evan Becker, Parthe Pandit, Sundeep Rangan, Alyson K. Fletcher

Generative Adversarial Networks (GANs) are a widely-used tool for generative modeling of complex data.

Kernel Methods and Multi-layer Perceptrons Learn Linear Models in High Dimensions

no code implementations20 Jan 2022 Mojtaba Sahraee-Ardakan, Melikasadat Emami, Parthe Pandit, Sundeep Rangan, Alyson K. Fletcher

Empirical observation of high-dimensional phenomena, such as the double descent behaviour, has attracted a lot of interest in understanding classical techniques such as kernel methods, and their implications for explaining the generalization properties of neural networks.
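The flavor of the result is easy to probe numerically. Below is a minimal scikit-learn sketch (a toy setup of mine, not the paper's experiment): on linearly generated data whose dimension is comparable to the sample size, an RBF kernel ridge fit and a plain linear ridge fit reach nearly the same test error. All parameters (gamma, alpha, scalings) are illustrative assumptions.

```python
# Illustrative sketch (not the paper's experiment): in high dimensions,
# an RBF-kernel ridge fit behaves much like linear ridge regression.
import numpy as np
from sklearn.kernel_ridge import KernelRidge
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n, d = 500, 400                                  # samples and dimension of the same order
w = rng.standard_normal(d) / np.sqrt(d)

def sample(m):
    X = rng.standard_normal((m, d)) / np.sqrt(d) # inputs with norm ~ 1
    y = X @ w + 0.1 * rng.standard_normal(m)
    return X, y

Xtr, ytr = sample(n)
Xte, yte = sample(2000)

krr = KernelRidge(alpha=1e-3, kernel="rbf", gamma=1.0).fit(Xtr, ytr)
lin = Ridge(alpha=1e-3).fit(Xtr, ytr)

print("RBF kernel ridge test MSE:", np.mean((krr.predict(Xte) - yte) ** 2))
print("linear ridge     test MSE:", np.mean((lin.predict(Xte) - yte) ** 2))
```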

Asymptotics of Ridge Regression in Convolutional Models

no code implementations8 Mar 2021 Mojtaba Sahraee-Ardakan, Tung Mai, Anup Rao, Ryan Rossi, Sundeep Rangan, Alyson K. Fletcher

Our experiments exhibit the double descent phenomenon for convolutional models, and our theoretical results match these experiments.

regression
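The convolutional analysis itself is involved, but the double descent shape is easy to reproduce in a toy setting (assumed setup, not the paper's model): min-norm least squares fit on the first p of d features, with test error peaking near the interpolation threshold p = n and then descending again.

```python
# Toy double-descent curve (illustrative, not the paper's convolutional model):
# min-norm least squares on the first p coordinates of a d-dimensional signal.
import numpy as np

rng = np.random.default_rng(0)
n, d, sigma = 100, 400, 0.5
beta = rng.standard_normal(d) / np.sqrt(d)

Xtr = rng.standard_normal((n, d))
ytr = Xtr @ beta + sigma * rng.standard_normal(n)
Xte = rng.standard_normal((5000, d))
yte = Xte @ beta + sigma * rng.standard_normal(5000)

for p in [20, 50, 90, 100, 110, 150, 250, 400]:
    bhat = np.linalg.pinv(Xtr[:, :p]) @ ytr         # min-norm solution on p features
    mse = np.mean((Xte[:, :p] @ bhat - yte) ** 2)
    print(f"p = {p:3d}  test MSE = {mse:.3f}")      # peaks near p = n, then descends
```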

Implicit Bias of Linear RNNs

no code implementations19 Jan 2021 Melikasadat Emami, Mojtaba Sahraee-Ardakan, Parthe Pandit, Sundeep Rangan, Alyson K. Fletcher

The degree of this bias depends on the variance of the transition kernel matrix at initialization and is related to the classic exploding and vanishing gradients problem.
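A quick numerical illustration (not from the paper) of the initialization effect the abstract refers to: for a linear RNN h_t = W h_{t-1} with i.i.d. N(0, g^2/n) entries in W, the state norm shrinks geometrically for g < 1 and blows up for g > 1.

```python
# Exploding/vanishing dynamics in a linear RNN h_t = W h_{t-1} as a function
# of the initialization scale g (W has i.i.d. N(0, g^2/n) entries).
import numpy as np

rng = np.random.default_rng(0)
n, T = 256, 50
h0 = rng.standard_normal(n)

for g in [0.9, 1.0, 1.1]:
    W = g * rng.standard_normal((n, n)) / np.sqrt(n)
    h = h0.copy()
    for _ in range(T):
        h = W @ h
    print(f"g = {g}:  ||h_T|| / ||h_0|| = {np.linalg.norm(h) / np.linalg.norm(h0):.3e}")
```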

Matrix Inference and Estimation in Multi-Layer Models

1 code implementation NeurIPS 2020 Parthe Pandit, Mojtaba Sahraee Ardakan, Sundeep Rangan, Philip Schniter, Alyson K. Fletcher

In the two-layer neural-network learning problem, this scaling corresponds to the case where the number of input features, as well as training samples, grow to infinity but the number of hidden nodes stays fixed.

Imputation

Low-Rank Nonlinear Decoding of $\mu$-ECoG from the Primary Auditory Cortex

no code implementations6 May 2020 Melikasadat Emami, Mojtaba Sahraee-Ardakan, Parthe Pandit, Alyson K. Fletcher, Sundeep Rangan, Michael Trumpis, Brinnae Bent, Chia-Han Chiang, Jonathan Viventi

This decoding problem is particularly challenging due to the complexity of neural responses in the auditory cortex and the presence of confounding signals in awake animals.

Dimensionality Reduction

Generalization Error of Generalized Linear Models in High Dimensions

3 code implementations ICML 2020 Melikasadat Emami, Mojtaba Sahraee-Ardakan, Parthe Pandit, Sundeep Rangan, Alyson K. Fletcher

We provide a general framework to characterize the asymptotic generalization error for single-layer neural networks (i.e., generalized linear models) with arbitrary non-linearities, making it applicable to regression as well as classification problems.

BIG-bench Machine Learning, regression (+1)
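As a reminder of the model class analyzed here, a generalized linear model is a single-layer network y ≈ φ(Xw). A minimal sketch (assumed data and logistic nonlinearity) measuring its generalization error empirically:

```python
# Minimal GLM (single-layer network) sketch: logistic output nonlinearity,
# generalization error measured on held-out data. Illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n, d = 1000, 200
w = rng.standard_normal(d) / np.sqrt(d)

def sample(m):
    X = rng.standard_normal((m, d))
    p = 1.0 / (1.0 + np.exp(-X @ w))         # logistic link
    y = (rng.random(m) < p).astype(int)      # Bernoulli outputs
    return X, y

Xtr, ytr = sample(n)
Xte, yte = sample(5000)

clf = LogisticRegression(C=1.0, max_iter=1000).fit(Xtr, ytr)
print("test error:", np.mean(clf.predict(Xte) != yte))
```

Here LogisticRegression stands in for the GLM fit; the paper's framework covers arbitrary output nonlinearities and training losses.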

Inference in Multi-Layer Networks with Matrix-Valued Unknowns

no code implementations26 Jan 2020 Parthe Pandit, Mojtaba Sahraee-Ardakan, Sundeep Rangan, Philip Schniter, Alyson K. Fletcher

We consider the problem of inferring the input and hidden variables of a stochastic multi-layer neural network from an observation of the output.

Input-Output Equivalence of Unitary and Contractive RNNs

1 code implementation NeurIPS 2019 Melikasadat Emami, Mojtaba Sahraee Ardakan, Sundeep Rangan, Alyson K. Fletcher

Unitary recurrent neural networks (URNNs) have been proposed as a method to overcome the vanishing and exploding gradient problem in modeling data with long-term dependencies.
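A small sketch (not the paper's construction) of why unitarity helps: with an orthogonal transition matrix, repeated application preserves the state norm exactly, so signals, and backpropagated gradients in the linear part, neither vanish nor explode.

```python
# An orthogonal (real unitary) transition matrix preserves norms exactly,
# unlike a generic random matrix. Illustrative sketch.
import numpy as np

rng = np.random.default_rng(0)
n, T = 128, 200
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # random orthogonal matrix
h = rng.standard_normal(n)
h0_norm = np.linalg.norm(h)
for _ in range(T):
    h = Q @ h
print("||h_T|| / ||h_0|| =", np.linalg.norm(h) / h0_norm)  # 1 up to roundoff
```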

Inference with Deep Generative Priors in High Dimensions

no code implementations8 Nov 2019 Parthe Pandit, Mojtaba Sahraee-Ardakan, Sundeep Rangan, Philip Schniter, Alyson K. Fletcher

This paper presents a novel algorithm, Multi-Layer Vector Approximate Message Passing (ML-VAMP), for inference in multi-layer stochastic neural networks.


High-Dimensional Bernoulli Autoregressive Process with Long-Range Dependence

no code implementations19 Mar 2019 Parthe Pandit, Mojtaba Sahraee-Ardakan, Arash A. Amini, Sundeep Rangan, Alyson K. Fletcher

We derive precise upper bounds on the mean-squared estimation error in terms of the number of samples, dimensions of the process, the lag $p$ and other key statistical properties of the model.

Gaussian Processes
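To make the setting concrete, here is a simulation sketch of a generic d-dimensional Bernoulli autoregressive process with lag p. The sigmoid link, random weights, and initialization are assumptions for illustration, not the paper's exact model.

```python
# Simulate a generic d-dimensional Bernoulli autoregressive process with lag p.
# The sigmoid link and random weights are illustrative assumptions only.
import numpy as np

rng = np.random.default_rng(0)
d, p, T = 20, 3, 500
W = 0.5 * rng.standard_normal((p, d, d)) / np.sqrt(d)    # one weight matrix per lag
b = -0.5 * np.ones(d)

X = np.zeros((T, d))
X[:p] = rng.random((p, d)) < 0.5                         # random initial history
for t in range(p, T):
    logits = b + sum(W[k] @ X[t - 1 - k] for k in range(p))
    X[t] = rng.random(d) < 1.0 / (1.0 + np.exp(-logits)) # Bernoulli draws

print("empirical firing rates (first 5 coords):", X[p:].mean(axis=0)[:5])
```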

Asymptotics of MAP Inference in Deep Networks

no code implementations1 Mar 2019 Parthe Pandit, Mojtaba Sahraee, Sundeep Rangan, Alyson K. Fletcher

Deep generative priors are a powerful tool for reconstruction problems with complex data such as images and text.

Plug-in Estimation in High-Dimensional Linear Inverse Problems: A Rigorous Analysis

1 code implementation NeurIPS 2018 Alyson K. Fletcher, Sundeep Rangan, Subrata Sarkar, Philip Schniter

Estimating a vector $\mathbf{x}$ from noisy linear measurements $\mathbf{Ax}+\mathbf{w}$ often requires the use of prior knowledge or structural constraints on $\mathbf{x}$ for accurate reconstruction.

Information Theory
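The paper analyzes "plug-in" denoisers within a rigorously characterized VAMP recursion. As a much simpler illustration of exploiting a structural constraint (sparsity) in this measurement model, here is plain ISTA with a soft-threshold denoiser; this is not the paper's algorithm, and all parameters are assumed.

```python
# ISTA for sparse recovery from y = A x + w: a simple example of using a
# structural prior (sparsity) for reconstruction. Not the paper's algorithm.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 400, 200, 20
A = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n); x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x + 0.01 * rng.standard_normal(m)

soft = lambda u, t: np.sign(u) * np.maximum(np.abs(u) - t, 0.0)
L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the LS gradient
lam = 0.01
xh = np.zeros(n)
for _ in range(500):
    xh = soft(xh + A.T @ (y - A @ xh) / L, lam / L)
print("relative error:", np.linalg.norm(xh - x) / np.linalg.norm(x))
```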

Inference in Deep Networks in High Dimensions

no code implementations20 Jun 2017 Alyson K. Fletcher, Sundeep Rangan

In inverse problems that use these networks as generative priors on data, one must often perform inference of the inputs of the networks from the outputs.

Vocal Bursts Intensity Prediction

Rigorous Dynamics and Consistent Estimation in Arbitrarily Conditioned Linear Systems

no code implementations NeurIPS 2017 Alyson K. Fletcher, Mojtaba Sahraee-Ardakan, Philip Schniter, Sundeep Rangan

We show that the parameter estimates and mean squared error (MSE) of x in each iteration converge to deterministic limits that can be precisely predicted by a simple set of state evolution (SE) equations.

Vector Approximate Message Passing

1 code implementation10 Oct 2016 Sundeep Rangan, Philip Schniter, Alyson K. Fletcher

The approximate message passing (AMP) algorithm recently proposed by Donoho, Maleki, and Montanari is a computationally efficient iterative approach to the standard linear regression (SLR) problem that has a remarkable property: for large i.i.d. sub-Gaussian matrices $\mathbf{A}$, its per-iteration behavior is rigorously characterized by a scalar state evolution whose fixed points, when unique, are Bayes optimal.

Information Theory
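A compact sketch of the AMP iteration the abstract describes, for a Bernoulli-Gaussian signal, with its scalar state evolution tracked by Monte Carlo alongside the empirical MSE. Parameters are illustrative; VAMP itself extends this kind of guarantee beyond i.i.d. matrices.

```python
# AMP with soft-thresholding for y = A x + w (A i.i.d. Gaussian), together
# with its scalar state evolution (SE). Illustrative sketch, not the VAMP
# algorithm of the paper.
import numpy as np

rng = np.random.default_rng(0)
n, m, rho, sigma, alpha = 2000, 1000, 0.1, 0.05, 1.5
delta = m / n

A = rng.standard_normal((m, n)) / np.sqrt(m)
x = rng.standard_normal(n) * (rng.random(n) < rho)   # Bernoulli-Gaussian signal
y = A @ x + sigma * rng.standard_normal(m)

soft = lambda u, t: np.sign(u) * np.maximum(np.abs(u) - t, 0.0)

# Monte Carlo samples from the prior for the SE recursion.
x_mc = rng.standard_normal(100_000) * (rng.random(100_000) < rho)
z_mc = rng.standard_normal(100_000)

xh, z = np.zeros(n), y.copy()
tau2 = sigma**2 + np.mean(x_mc**2) / delta           # SE initialization
for it in range(15):
    tau = np.sqrt(np.mean(z**2))                     # empirical noise estimate
    r = xh + A.T @ z
    xh_new = soft(r, alpha * tau)
    onsager = np.mean(np.abs(r) > alpha * tau) / delta
    z = y - A @ xh_new + onsager * z                 # Onsager-corrected residual
    xh = xh_new

    # SE prediction of the per-component MSE at this iteration.
    se_mse = np.mean((soft(x_mc + np.sqrt(tau2) * z_mc, alpha * np.sqrt(tau2)) - x_mc) ** 2)
    print(f"it {it:2d}  empirical MSE {np.mean((xh - x)**2):.5f}   SE MSE {se_mse:.5f}")
    tau2 = sigma**2 + se_mse / delta                 # SE update
```

The two printed columns should track each other closely, which is exactly the content of the state-evolution guarantee.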

Learning and Free Energies for Vector Approximate Message Passing

no code implementations26 Feb 2016 Alyson K. Fletcher, Philip Schniter

Like the AMP proposed by Donoho, Maleki, and Montanari in 2009, VAMP is characterized by a rigorous state evolution (SE) that holds for certain classes of large random matrices and that matches the replica prediction of optimality.

Expectation Consistent Approximate Inference: Generalizations and Convergence

no code implementations25 Feb 2016 Alyson K. Fletcher, Mojtaba Sahraee-Ardakan, Sundeep Rangan, Philip Schniter

Approximations of loopy belief propagation, including expectation propagation and approximate message passing, have attracted considerable attention for probabilistic inference problems.

Scalable Inference for Neuronal Connectivity from Calcium Imaging

no code implementations NeurIPS 2014 Alyson K. Fletcher, Sundeep Rangan

In this work, we propose a computationally fast method for the state estimation based on a hybrid of loopy belief propagation and approximate message passing (AMP).

Bayesian Inference

Approximate Message Passing with Consistent Parameter Estimation and Applications to Sparse Learning

no code implementations NeurIPS 2012 Ulugbek Kamilov, Sundeep Rangan, Michael Unser, Alyson K. Fletcher

We present a method, called adaptive generalized approximate message passing (Adaptive GAMP), that enables joint learning of the statistics of the prior and measurement channel along with estimation of the unknown vector $\mathbf{x}$.

Sparse Learning

Neural Reconstruction with Approximate Message Passing (NeuRAMP)

no code implementations NeurIPS 2011 Alyson K. Fletcher, Sundeep Rangan, Lav R. Varshney, Aniruddha Bhargava

Many functional descriptions of spiking neurons assume a cascade structure where inputs are passed through an initial linear filtering stage that produces a low-dimensional signal that drives subsequent nonlinear stages.
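A minimal simulation (assumed filter and nonlinearity) of the cascade the abstract describes: a linear filter followed by a static nonlinearity driving Poisson spiking, with the filter recovered by a spike-triggered average.

```python
# Linear-nonlinear-Poisson (LNP) cascade: linear filter -> exponential
# nonlinearity -> Poisson spikes, with the filter recovered via the
# spike-triggered average (STA). Illustrative sketch, assumed parameters.
import numpy as np

rng = np.random.default_rng(0)
T, d = 50_000, 25
w = np.exp(-np.arange(d) / 5.0); w /= np.linalg.norm(w)  # assumed decaying filter

X = rng.standard_normal((T, d))                  # Gaussian stimulus snippets
rate = np.exp(0.8 * X @ w - 1.0)                 # static exponential nonlinearity
spikes = rng.poisson(rate)                       # Poisson spike counts

sta = X.T @ spikes / spikes.sum()                # spike-triggered average
sta /= np.linalg.norm(sta)
print("cosine(w, STA) =", float(w @ sta))        # near 1 for Gaussian stimuli
```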

Orthogonal Matching Pursuit From Noisy Random Measurements: A New Analysis

no code implementations NeurIPS 2009 Sundeep Rangan, Alyson K. Fletcher

Orthogonal matching pursuit (OMP) is a widely used greedy algorithm for recovering sparse vectors from linear measurements.

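For reference, a compact textbook implementation of the OMP loop analyzed in the paper, with a fixed number of iterations as the assumed stopping rule:

```python
# Orthogonal matching pursuit: greedily select the column most correlated
# with the residual, then re-fit by least squares on the selected support.
import numpy as np

def omp(A, y, k):
    support, residual = [], y.copy()
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))   # best-matching column
        support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef          # orthogonal re-projection
    xh = np.zeros(A.shape[1]); xh[support] = coef
    return xh

rng = np.random.default_rng(0)
n, m, k = 256, 128, 10
A = rng.standard_normal((m, n)) / np.sqrt(m)
x = np.zeros(n); x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
y = A @ x + 0.01 * rng.standard_normal(m)
print("relative error:", np.linalg.norm(omp(A, y, k) - x) / np.linalg.norm(x))
```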

Asymptotic Analysis of MAP Estimation via the Replica Method and Compressed Sensing

no code implementations NeurIPS 2009 Sundeep Rangan, Vivek Goyal, Alyson K. Fletcher

It is shown that with large random linear measurements and Gaussian noise, the asymptotic behavior of the MAP estimate of an $n$-dimensional vector "decouples" as $n$ scalar MAP estimators.
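Schematically, the decoupling principle says each coordinate is estimated as if it passed through its own scalar Gaussian channel. A sketch of the form of the result (the effective parameters below are pinned down by the paper's fixed-point equations; this is not the exact statement):

```latex
% Equivalent scalar channel (schematic): each coordinate x_j is observed
% through Gaussian noise and estimated by a scalar MAP rule.
z_j = x_j + \sqrt{\tau_{\mathrm{eff}}}\, v_j, \qquad v_j \sim \mathcal{N}(0,1),
\qquad
\hat{x}_j = \arg\min_{u} \left[ \frac{(z_j - u)^2}{2\lambda_{\mathrm{eff}}} + f(u) \right]
```

where $f$ is the regularizer (negative log-prior) defining the vector MAP problem.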
