no code implementations • 20 Jan 2022 • Mojtaba Sahraee-Ardakan, Melikasadat Emami, Parthe Pandit, Sundeep Rangan, Alyson K. Fletcher
Empirical observations of high-dimensional phenomena, such as the double descent behaviour, have attracted considerable interest in understanding classical techniques such as kernel methods, and their implications for explaining the generalization properties of neural networks.
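As a toy illustration of the double descent behaviour mentioned above, the NumPy sketch below fits minimum-norm least squares on random features and sweeps the feature count past the interpolation threshold (p = n_train), where the test error typically peaks before descending again. All names and constants here are illustrative, not from the paper.

```python
# Minimal sketch of double descent with random-feature regression (NumPy only).
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, d = 200, 1000, 20

# Ground-truth linear teacher with noisy labels.
w_star = rng.normal(size=d)
X_tr, X_te = rng.normal(size=(n_train, d)), rng.normal(size=(n_test, d))
y_tr = X_tr @ w_star + 0.5 * rng.normal(size=n_train)
y_te = X_te @ w_star

for p in [20, 100, 200, 400, 2000]:           # number of random features
    W = rng.normal(size=(d, p)) / np.sqrt(d)  # fixed random first layer
    phi_tr, phi_te = np.tanh(X_tr @ W), np.tanh(X_te @ W)
    # Minimum-norm least-squares fit (interpolates once p >= n_train).
    beta = np.linalg.pinv(phi_tr) @ y_tr
    test_mse = np.mean((phi_te @ beta - y_te) ** 2)
    print(f"p = {p:5d}  test MSE = {test_mse:.3f}")
```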
no code implementations • 21 Dec 2021 • Melikasadat Emami, Dung Tran, Kazuhito Koishida
Improving generalization is a major challenge in audio classification due to the scarcity of labeled data.
no code implementations • 19 Jan 2021 • Melikasadat Emami, Mojtaba Sahraee-Ardakan, Parthe Pandit, Sundeep Rangan, Alyson K. Fletcher
The degree of this bias depends on the variance of the transition kernel matrix at initialization and is related to the classic exploding and vanishing gradients problem.
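The role of the initialization variance is easy to see in a toy linear RNN: a backpropagated gradient shrinks or blows up geometrically depending on whether the recurrent matrix's spectral radius, which the initialization variance controls, lies below or above one. A minimal NumPy sketch, not the paper's analysis:

```python
# How the init variance of a linear RNN's recurrent matrix drives
# gradient decay/growth over time.
import numpy as np

rng = np.random.default_rng(0)
n, T = 100, 50  # hidden size, sequence length

for sigma in [0.5, 1.0, 1.5]:
    # i.i.d. Gaussian init with variance sigma^2 / n; spectral radius ~ sigma.
    W = rng.normal(scale=sigma / np.sqrt(n), size=(n, n))
    g = rng.normal(size=n)
    g /= np.linalg.norm(g)
    for _ in range(T):
        g = W.T @ g  # backpropagated gradient through one recurrence step
    print(f"sigma = {sigma}: |grad| after {T} steps = {np.linalg.norm(g):.2e}")
```

With sigma below one the gradient norm vanishes after a few dozen steps; above one it explodes, which is exactly the classic problem the abstract refers to.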
no code implementations • 6 May 2020 • Melikasadat Emami, Mojtaba Sahraee-Ardakan, Parthe Pandit, Alyson K. Fletcher, Sundeep Rangan, Michael Trumpis, Brinnae Bent, Chia-Han Chiang, Jonathan Viventi
This decoding problem is particularly challenging due to the complexity of neural responses in the auditory cortex and the presence of confounding signals in awake animals.
3 code implementations • ICML 2020 • Melikasadat Emami, Mojtaba Sahraee-Ardakan, Parthe Pandit, Sundeep Rangan, Alyson K. Fletcher
We provide a general framework to characterize the asymptotic generalization error for single-layer neural networks (i.e., generalized linear models) with arbitrary non-linearities, making it applicable to regression as well as classification problems.
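For concreteness, a generalized linear model in this sense is y = f(<x, w>) for an arbitrary output non-linearity f. The sketch below, with illustrative names and a ridge-regularized student chosen purely for simplicity, shows one way to measure empirically the generalization error that such a framework characterizes asymptotically:

```python
# Minimal sketch of the model class: a generalized linear model y = f(<x, w>).
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 100
f = np.tanh  # any non-linearity; e.g. np.sign for classification

w_star = rng.normal(size=d) / np.sqrt(d)
X = rng.normal(size=(n, d))
y = f(X @ w_star) + 0.1 * rng.normal(size=n)

# Ridge-regularized least-squares student (one of many possible estimators).
lam = 0.1
w_hat = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Empirical generalization error on fresh inputs.
X_te = rng.normal(size=(10 * n, d))
gen_err = np.mean((f(X_te @ w_star) - f(X_te @ w_hat)) ** 2)
print(f"empirical generalization error = {gen_err:.4f}")
```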
1 code implementation • NeurIPS 2019 • Melikasadat Emami, Mojtaba Sahraee-Ardakan, Sundeep Rangan, Alyson K. Fletcher
Unitary recurrent neural networks (URNNs) have been proposed as a method to overcome the vanishing and exploding gradient problem in modeling data with long-term dependencies.
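A quick way to see why unitarity helps: an orthogonal (real unitary) recurrent matrix preserves the norm of a backpropagated gradient exactly, no matter how long the horizon. A minimal NumPy sketch of this norm-preservation property, not the URNN parameterization itself:

```python
# Orthogonal recurrences preserve gradient norms over arbitrary horizons.
import numpy as np

rng = np.random.default_rng(0)
n, T = 100, 500  # hidden size, sequence length

# QR of a random Gaussian matrix yields an orthogonal W (W.T @ W = I).
W, _ = np.linalg.qr(rng.normal(size=(n, n)))

g = rng.normal(size=n)
g /= np.linalg.norm(g)
for _ in range(T):
    g = W.T @ g  # backprop through one linear recurrence step
print(f"|grad| after {T} steps = {np.linalg.norm(g):.6f}")  # stays 1.0
```

Because every step multiplies the gradient by an orthogonal matrix, its norm neither vanishes nor explodes, in contrast to the generic Gaussian initialization shown in the earlier sketch.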