Search Results for author: Melikasadat Emami

Found 6 papers, 2 papers with code

Kernel Methods and Multi-layer Perceptrons Learn Linear Models in High Dimensions

no code implementations · 20 Jan 2022 · Mojtaba Sahraee-Ardakan, Melikasadat Emami, Parthe Pandit, Sundeep Rangan, Alyson K. Fletcher

Empirical observations of high-dimensional phenomena, such as double descent, have attracted considerable interest in understanding classical techniques such as kernel methods and their implications for explaining the generalization properties of neural networks.
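
The double descent behaviour mentioned in the abstract is easy to reproduce empirically. A minimal sketch (illustrative only, not the paper's analysis; all variable names are made up) using least squares on random ReLU features, where test error typically peaks near the interpolation threshold n_feat ≈ n_train and falls again beyond it:

    # Minimal double-descent sketch: least-squares regression on random
    # ReLU features. Test error typically peaks near the interpolation
    # threshold (n_feat ~ n_train) and decreases again past it.
    import numpy as np

    rng = np.random.default_rng(0)
    d, n_train, n_test = 20, 100, 1000
    w_true = rng.standard_normal(d) / np.sqrt(d)

    X_train = rng.standard_normal((n_train, d))
    X_test = rng.standard_normal((n_test, d))
    y_train = X_train @ w_true + 0.1 * rng.standard_normal(n_train)
    y_test = X_test @ w_true

    for n_feat in [20, 50, 100, 200, 500]:
        W = rng.standard_normal((d, n_feat)) / np.sqrt(d)
        F_train = np.maximum(X_train @ W, 0)   # random ReLU features
        F_test = np.maximum(X_test @ W, 0)
        # lstsq returns the minimum-norm solution when overparameterized
        beta = np.linalg.lstsq(F_train, y_train, rcond=None)[0]
        err = np.mean((F_test @ beta - y_test) ** 2)
        print(f"n_feat={n_feat:4d}  test MSE={err:.3f}")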

Implicit Bias of Linear RNNs

no code implementations · 19 Jan 2021 · Melikasadat Emami, Mojtaba Sahraee-Ardakan, Parthe Pandit, Sundeep Rangan, Alyson K. Fletcher

The degree of this bias depends on the variance of the transition kernel matrix at initialization and is related to the classic exploding and vanishing gradients problem.
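
The dependence on the initialization variance is straightforward to see in simulation. A minimal sketch (illustrative, not the paper's code) of a linear recurrence h_t = W h_{t-1} with W drawn i.i.d. Gaussian at scale g/sqrt(n): hidden states vanish for g < 1 and explode for g > 1, the two regimes the abstract alludes to:

    # Sketch of how the initialization scale of a linear RNN's transition
    # matrix drives exploding vs. vanishing hidden states (and gradients):
    # h_t = W h_{t-1}, with W ~ (g / sqrt(n)) * Gaussian entries.
    import numpy as np

    rng = np.random.default_rng(0)
    n, T = 64, 50
    h0 = rng.standard_normal(n)

    for g in [0.5, 1.0, 1.5]:   # gain below / at / above the critical scale
        W = g * rng.standard_normal((n, n)) / np.sqrt(n)
        h = h0.copy()
        for _ in range(T):
            h = W @ h
        print(f"gain={g}: ||h_T|| = {np.linalg.norm(h):.3e}")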

Low-Rank Nonlinear Decoding of $\mu$-ECoG from the Primary Auditory Cortex

no code implementations · 6 May 2020 · Melikasadat Emami, Mojtaba Sahraee-Ardakan, Parthe Pandit, Alyson K. Fletcher, Sundeep Rangan, Michael Trumpis, Brinnae Bent, Chia-Han Chiang, Jonathan Viventi

This decoding problem is particularly challenging due to the complexity of neural responses in the auditory cortex and the presence of confounding signals in awake animals.

Dimensionality Reduction
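
The general low-rank-then-nonlinear decoding pattern can be sketched as follows (a hypothetical illustration of the idea, not the paper's model; the synthetic data, PCA rank, and MLP readout here are all stand-ins):

    # Generic low-rank-then-nonlinear decoding pattern: project
    # high-dimensional recordings onto a low-rank subspace, then fit a
    # nonlinear readout. Purely illustrative; not the paper's model.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    n_trials, n_channels, rank = 500, 128, 8
    X = rng.standard_normal((n_trials, n_channels))  # stand-in for recorded features
    y = np.tanh(X[:, :rank].sum(axis=1)) + 0.1 * rng.standard_normal(n_trials)

    Z = PCA(n_components=rank).fit_transform(X)      # low-rank dimensionality reduction
    decoder = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000).fit(Z, y)
    print("train R^2:", decoder.score(Z, y))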

Generalization Error of Generalized Linear Models in High Dimensions

3 code implementations · ICML 2020 · Melikasadat Emami, Mojtaba Sahraee-Ardakan, Parthe Pandit, Sundeep Rangan, Alyson K. Fletcher

We provide a general framework to characterize the asymptotic generalization error for single-layer neural networks (i.e., generalized linear models) with arbitrary non-linearities, making it applicable to regression as well as classification problems.

BIG-bench Machine Learning · regression · +1
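
The paper characterizes this generalization error analytically in the high-dimensional limit; a minimal Monte Carlo teacher-student sketch for a logistic GLM (illustrative only, not the paper's released code) estimates the same quantity empirically:

    # Teacher-student sketch for a single-layer GLM (logistic model),
    # estimating generalization error by Monte Carlo. The paper instead
    # characterizes this error analytically as n, d -> infinity.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    d, n_train, n_test = 200, 400, 5000
    w_star = rng.standard_normal(d) / np.sqrt(d)     # teacher weights

    def sample(n):
        X = rng.standard_normal((n, d))
        p = 1.0 / (1.0 + np.exp(-X @ w_star))        # logistic link
        return X, (rng.random(n) < p).astype(int)

    X_tr, y_tr = sample(n_train)
    X_te, y_te = sample(n_test)
    clf = LogisticRegression(C=1.0, max_iter=1000).fit(X_tr, y_tr)
    print("test error:", 1.0 - clf.score(X_te, y_te))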

Input-Output Equivalence of Unitary and Contractive RNNs

1 code implementation · NeurIPS 2019 · Melikasadat Emami, Mojtaba Sahraee-Ardakan, Sundeep Rangan, Alyson K. Fletcher

Unitary recurrent neural networks (URNNs) have been proposed as a method to overcome the vanishing and exploding gradient problem in modeling data with long-term dependencies.
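
Why unitary recurrences help is easy to see numerically: an orthogonal transition matrix preserves the hidden-state norm exactly, so repeated application neither explodes nor vanishes. A minimal sketch (illustrative, not from the paper's code):

    # Sketch of why unitary/orthogonal recurrences avoid exploding and
    # vanishing states: an orthogonal W (from the QR decomposition of a
    # Gaussian matrix) preserves the hidden-state norm over many steps.
    import numpy as np

    rng = np.random.default_rng(0)
    n, T = 64, 1000
    Q, _ = np.linalg.qr(rng.standard_normal((n, n)))  # random orthogonal matrix
    h = rng.standard_normal(n)
    norm0 = np.linalg.norm(h)
    for _ in range(T):
        h = Q @ h
    print(f"||h_0|| = {norm0:.6f}, ||h_T|| = {np.linalg.norm(h):.6f}")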
