Search Results for author: Mojtaba Sahraee-Ardakan

Found 11 papers, 1 paper with code

Kernel Methods and Multi-layer Perceptrons Learn Linear Models in High Dimensions

no code implementations · 20 Jan 2022 · Mojtaba Sahraee-Ardakan, Melikasadat Emami, Parthe Pandit, Sundeep Rangan, Alyson K. Fletcher

Empirical observations of high-dimensional phenomena, such as the double descent behaviour, have attracted considerable interest in understanding classical techniques such as kernel methods, and in their implications for explaining the generalization properties of neural networks.

Asymptotics of Ridge Regression in Convolutional Models

no code implementations · 8 Mar 2021 · Mojtaba Sahraee-Ardakan, Tung Mai, Anup Rao, Ryan Rossi, Sundeep Rangan, Alyson K. Fletcher

Our experiments exhibit the double descent phenomenon for convolutional models, and the results match our theoretical predictions.

Tasks: regression
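To make the setup concrete, a hedged sketch (illustrative, not the paper's code): a convolutional design matrix built by convolving random inputs with a fixed filter, followed by ridge regression in closed form; the gradient check confirms the closed-form estimator minimizes the ridge objective.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, lam = 80, 120, 0.5                # samples, dimension, ridge penalty
filt = rng.normal(size=5)               # fixed convolution filter
Z = rng.normal(size=(n, p))             # random inputs
X = np.stack([np.convolve(z, filt, mode="same") for z in Z])  # convolutional design
y = rng.normal(size=n)

# Ridge estimator in closed form: (X^T X + lam I)^{-1} X^T y
beta_hat = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Stationarity check: gradient of 0.5*||y - X b||^2 + 0.5*lam*||b||^2 at beta_hat
grad = X.T @ (X @ beta_hat - y) + lam * beta_hat
```

The asymptotic theory in the paper characterizes the MSE of exactly this kind of estimator as n and p grow proportionally.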

Implicit Bias of Linear RNNs

no code implementations · 19 Jan 2021 · Melikasadat Emami, Mojtaba Sahraee-Ardakan, Parthe Pandit, Sundeep Rangan, Alyson K. Fletcher

The degree of this bias depends on the variance of the transition kernel matrix at initialization and is related to the classic exploding and vanishing gradients problem.
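The dependence on the initialization variance can be seen in a toy simulation (illustrative sketch, not the paper's code): iterating a random linear transition h ← W h with entries of variance σ²/d gives a spectral radius of about σ, so the state (and hence the gradient) vanishes for σ < 1 and explodes for σ > 1.

```python
import numpy as np

def final_norm(sigma, d=100, T=50, seed=5):
    """Norm of h_T for the linear recursion h_{t+1} = W h_t,
    with W_ij ~ N(0, sigma^2 / d), i.e. spectral radius roughly sigma."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=sigma / np.sqrt(d), size=(d, d))
    h = rng.normal(size=d)
    h /= np.linalg.norm(h)              # start from a unit-norm state
    for _ in range(T):
        h = W @ h
    return float(np.linalg.norm(h))

vanishing = final_norm(0.5)   # sigma < 1: state/gradient norms shrink
exploding = final_norm(2.0)   # sigma > 1: state/gradient norms blow up
```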

Low-Rank Nonlinear Decoding of $μ$-ECoG from the Primary Auditory Cortex

no code implementations · 6 May 2020 · Melikasadat Emami, Mojtaba Sahraee-Ardakan, Parthe Pandit, Alyson K. Fletcher, Sundeep Rangan, Michael Trumpis, Brinnae Bent, Chia-Han Chiang, Jonathan Viventi

This decoding problem is particularly challenging due to the complexity of neural responses in the auditory cortex and the presence of confounding signals in awake animals.

Tasks: Dimensionality Reduction

Generalization Error of Generalized Linear Models in High Dimensions

3 code implementations · ICML 2020 · Melikasadat Emami, Mojtaba Sahraee-Ardakan, Parthe Pandit, Sundeep Rangan, Alyson K. Fletcher

We provide a general framework to characterize the asymptotic generalization error for single-layer neural networks (i.e., generalized linear models) with arbitrary non-linearities, making it applicable to regression as well as classification problems.

Tasks: BIG-bench Machine Learning, regression (+1)
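A hedged sketch of the object being analyzed (one choice of non-linearity; all sizes illustrative, not the paper's code): a single-layer model with a logistic link, trained by plain gradient descent, with its generalization measured empirically on fresh data.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 500, 20
w_true = rng.normal(size=p)             # ground-truth weights

def sample(m):
    X = rng.normal(size=(m, p))
    # logistic-noise labels, so the true model is a logistic GLM
    y = (X @ w_true + rng.logistic(size=m) > 0).astype(float)
    return X, y

X_tr, y_tr = sample(n)
X_te, y_te = sample(2000)

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
w = np.zeros(p)
for _ in range(500):                    # gradient descent on the logistic loss
    w -= 0.5 * X_tr.T @ (sigmoid(X_tr @ w) - y_tr) / n

# generalization: accuracy on held-out data
acc_test = float(np.mean((sigmoid(X_te @ w) > 0.5) == (y_te == 1.0)))
```

The paper's framework predicts quantities like `acc_test` deterministically in the high-dimensional limit where n and p grow together.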

Inference in Multi-Layer Networks with Matrix-Valued Unknowns

no code implementations · 26 Jan 2020 · Parthe Pandit, Mojtaba Sahraee-Ardakan, Sundeep Rangan, Philip Schniter, Alyson K. Fletcher

We consider the problem of inferring the input and hidden variables of a stochastic multi-layer neural network from an observation of the output.

Inference with Deep Generative Priors in High Dimensions

no code implementations · 8 Nov 2019 · Parthe Pandit, Mojtaba Sahraee-Ardakan, Sundeep Rangan, Philip Schniter, Alyson K. Fletcher

This paper presents a novel algorithm, Multi-Layer Vector Approximate Message Passing (ML-VAMP), for inference in multi-layer stochastic neural networks.

Tasks: Vocal Bursts Intensity Prediction
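ML-VAMP itself is beyond a short snippet, but the class of models it targets (here and in the companion paper above) is easy to state: a stochastic multi-layer network mapping a latent input through alternating linear maps and non-linearities to a noisy observation, whose input and hidden variables are then inferred. A hedged sketch of such a forward model, with illustrative names and sizes:

```python
import numpy as np

rng = np.random.default_rng(6)
d0, d1, d2 = 20, 40, 30                 # layer widths (illustrative)
W1 = rng.normal(size=(d1, d0)) / np.sqrt(d0)
W2 = rng.normal(size=(d2, d1)) / np.sqrt(d1)

z0 = rng.normal(size=d0)                 # latent input to be inferred
z1 = np.maximum(W1 @ z0, 0.0)            # hidden layer: linear map + ReLU
y = W2 @ z1 + 0.1 * rng.normal(size=d2)  # noisy linear observation

# Inference task: given y, W1, W2 and the noise level, recover z0 and z1.
```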

High-Dimensional Bernoulli Autoregressive Process with Long-Range Dependence

no code implementations · 19 Mar 2019 · Parthe Pandit, Mojtaba Sahraee-Ardakan, Arash A. Amini, Sundeep Rangan, Alyson K. Fletcher

We derive precise upper bounds on the mean-squared estimation error in terms of the number of samples, the dimension of the process, the lag $p$, and other key statistical properties of the model.

Tasks: Gaussian Processes, Vocal Bursts Intensity Prediction
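A hedged simulation sketch of a lag-$p$ Bernoulli autoregressive process of the kind analyzed (the parameterization here is illustrative): each binary coordinate fires with probability given by a logistic link applied to a linear function of the previous $p$ binary states.

```python
import numpy as np

rng = np.random.default_rng(3)
d, lag, T = 5, 2, 500                    # dimension, lag p, time horizon
A = [0.3 * rng.normal(size=(d, d)) for _ in range(lag)]  # per-lag coupling matrices
b = -0.5 * np.ones(d)                    # baseline log-odds

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

X = np.zeros((T, d))                     # binary sample path, one row per time step
X[:lag] = rng.integers(0, 2, size=(lag, d))
for t in range(lag, T):
    logits = b + sum(A[k] @ X[t - 1 - k] for k in range(lag))
    X[t] = (rng.random(d) < sigmoid(logits)).astype(float)
```

Estimating the matrices `A` from a single path `X` is the problem whose error the paper bounds.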

Rigorous Dynamics and Consistent Estimation in Arbitrarily Conditioned Linear Systems

no code implementations · NeurIPS 2017 · Alyson K. Fletcher, Mojtaba Sahraee-Ardakan, Philip Schniter, Sundeep Rangan

We show that the parameter estimates and mean squared error (MSE) of $x$ in each iteration converge to deterministic limits that can be precisely predicted by a simple set of state evolution (SE) equations.
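The flavour of such a deterministic prediction can be shown in a one-line toy case (not the paper's SE equations): for $y = x + \text{noise}$ with a Gaussian prior, the posterior-mean MSE has a closed deterministic formula that a Monte Carlo estimate matches.

```python
import numpy as np

rng = np.random.default_rng(7)
tau = 0.5                               # noise variance (illustrative)
n = 200_000
x = rng.normal(size=n)                  # prior: x ~ N(0, 1)
y = x + np.sqrt(tau) * rng.normal(size=n)

x_hat = y / (1.0 + tau)                 # posterior mean E[x | y]
mse_mc = float(np.mean((x - x_hat) ** 2))
mse_pred = tau / (1.0 + tau)            # deterministic, SE-style prediction
```

State evolution generalizes this idea: a scalar recursion tracks the effective noise level of each iterate, predicting the empirical MSE without running the high-dimensional algorithm.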

Joint Dictionary Learning for Example-based Image Super-resolution

no code implementations · 12 Jan 2017 · Mojtaba Sahraee-Ardakan, Mohsen Joneidi

Using the sparse representation coefficients of these LR patches over the LR dictionary, the high-resolution (HR) dictionary is trained by minimizing the reconstruction error of HR sample patches.

Tasks: Dictionary Learning, Image Super-Resolution
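The HR-dictionary step described above has a closed form once the sparse codes are fixed: minimizing the reconstruction error ||X_hr − D_hr A||_F over D_hr is a least-squares problem. A hedged synthetic sketch (illustrative sizes; synthetic patches rather than real image data):

```python
import numpy as np

rng = np.random.default_rng(4)
k, n = 8, 100             # dictionary atoms, number of sample patches
dim_lr, dim_hr = 16, 64   # flattened LR / HR patch sizes (illustrative)

# Shared sparse codes A (as obtained from the LR patches / LR dictionary)
A = rng.normal(size=(k, n)) * (rng.random((k, n)) < 0.3)
D_hr_true = rng.normal(size=(dim_hr, k))
X_hr = D_hr_true @ A                     # synthetic HR sample patches

# Step the abstract describes: with A fixed, fit the HR dictionary
# by minimizing the reconstruction error of the HR patches.
D_hr = X_hr @ np.linalg.pinv(A)

recon_err = np.linalg.norm(X_hr - D_hr @ A) / np.linalg.norm(X_hr)
```

In the noiseless synthetic case the least-squares fit recovers the HR patches exactly; with real patches it minimizes the residual in the Frobenius norm.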

Expectation Consistent Approximate Inference: Generalizations and Convergence

no code implementations · 25 Feb 2016 · Alyson K. Fletcher, Mojtaba Sahraee-Ardakan, Sundeep Rangan, Philip Schniter

Approximations of loopy belief propagation, including expectation propagation and approximate message passing, have attracted considerable attention for probabilistic inference problems.
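The core operation in expectation-propagation-style methods is moment matching: replace an intractable "tilted" distribution by the Gaussian with the same mean and variance. A hedged toy example (not from the paper): a standard normal prior times a step-function likelihood yields a truncated normal, whose matched moments have known closed forms.

```python
import numpy as np

# Tilted distribution: p(x) ∝ N(x; 0, 1) * 1{x > 0}  (a half-normal)
dx = 1e-4
x = np.arange(dx / 2, 10.0, dx)             # midpoint grid over the support
w = np.exp(-0.5 * x**2)                     # unnormalized density
Z = np.sum(w) * dx
mean = np.sum(x * w) * dx / Z               # matched first moment
var = np.sum((x - mean) ** 2 * w) * dx / Z  # matched central second moment

# Closed forms for the half-normal: mean = sqrt(2/pi), var = 1 - 2/pi
mean_true = np.sqrt(2.0 / np.pi)
var_true = 1.0 - 2.0 / np.pi
```

EP and its relatives iterate this matching factor by factor; the paper studies when such fixed-point iterations converge and what they converge to.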
