no code implementations • 18 Jun 2022 • Yunjuan Wang, Enayat Ullah, Poorya Mianjy, Raman Arora
Recent works show that adversarial examples exist for random neural networks [Daniely and Schacham, 2020] and that these examples can be found using a single step of gradient ascent [Bubeck et al., 2021].
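The single-step attack mentioned here is easy to sketch for a random two-layer ReLU network. A minimal NumPy sketch, where the widths, the Gaussian initialization scale, and the step-size rule are illustrative assumptions rather than details from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random two-layer ReLU network f(x) = w2 . relu(W1 x); sizes and scales
# are illustrative assumptions, not taken from the paper.
d, m = 50, 200
W1 = rng.normal(0.0, 1.0 / np.sqrt(d), size=(m, d))
w2 = rng.normal(0.0, 1.0 / np.sqrt(m), size=m)

def f(x):
    return w2 @ np.maximum(W1 @ x, 0.0)

def grad_f(x):
    # Subgradient of f at x: W1^T diag(1[W1 x > 0]) w2
    mask = (W1 @ x > 0).astype(float)
    return W1.T @ (w2 * mask)

x = rng.normal(size=d)
x /= np.linalg.norm(x)

# One gradient step against the sign of f(x), with the step length chosen
# from the first-order model so the output is pushed toward the opposite sign.
g = grad_f(x)
eta = 2.0 * abs(f(x)) / np.linalg.norm(g)
x_adv = x - np.sign(f(x)) * eta * g / np.linalg.norm(g)
print(f(x), f(x_adv))
```

The perturbation has norm `eta`, which is small when the network's output margin `|f(x)|` is small relative to the gradient norm, the regime the cited works analyze.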
no code implementations • NeurIPS 2020 • Poorya Mianjy, Raman Arora
We study dropout in two-layer neural networks with rectified linear unit (ReLU) activations.
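Hidden-layer dropout of the kind studied here can be sketched as follows; the sizes, the dropout rate, and the inverted-dropout rescaling convention are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two-layer ReLU network with (inverted) dropout on the hidden layer.
# Sizes and the dropout rate p are illustrative, not from the paper.
d, m, p = 10, 1000, 0.5
W1 = rng.normal(size=(m, d))
w2 = rng.normal(size=m)

def forward(x, train=True):
    h = np.maximum(W1 @ x, 0.0)
    if train:
        # Each hidden unit is kept with probability 1 - p and rescaled,
        # so the expected dropout output equals the test-time output.
        keep = rng.random(m) > p
        h = h * keep / (1.0 - p)
    return w2 @ h

x = rng.normal(size=d)
outs = np.array([forward(x) for _ in range(20000)])
print(outs.mean(), forward(x, train=False))  # close in expectation
```

Averaging many dropout forward passes recovers the deterministic test-time output; the fluctuation around that mean is what induces dropout's regularization effect.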
no code implementations • ICLR 2020 • Raman Arora, Peter Bartlett, Poorya Mianjy, Nathan Srebro
In deep learning, we show that the data-dependent regularizer due to dropout directly controls the Rademacher complexity of the underlying class of deep neural networks.
1 code implementation • 28 May 2019 • Poorya Mianjy, Raman Arora
We give a formal and complete characterization of the explicit regularizer induced by dropout in deep linear networks with squared loss.
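For a single hidden layer the explicit regularizer can be written in closed form and checked by Monte Carlo. The sketch below uses illustrative shapes, and its decomposition is specific to the linear-network, squared-loss setting it assumes:

```python
import numpy as np

rng = np.random.default_rng(1)

# Dropout on the hidden layer of a linear network f(x) = u . (b/(1-p) * (W x)),
# with b_i ~ Bernoulli(1 - p).  For squared loss the expected objective splits:
#   E_b[(y - f(x))^2] = (y - u.Wx)^2 + (p/(1-p)) * sum_i u_i^2 (w_i . x)^2,
# i.e. dropout noise contributes an explicit, data-dependent regularizer.
# Shapes and p are illustrative.
d, m, p = 5, 8, 0.3
W = rng.normal(size=(m, d))
u = rng.normal(size=m)
x = rng.normal(size=d)
y = 1.0

def dropout_loss(n_samples=200000):
    b = rng.random((n_samples, m)) > p          # keep mask, P(keep) = 1 - p
    f = (b / (1 - p) * (W @ x)) @ u             # per-sample network output
    return np.mean((y - f) ** 2)

plain = (y - u @ (W @ x)) ** 2
reg = (p / (1 - p)) * np.sum(u**2 * (W @ x) ** 2)
print(dropout_loss(), plain + reg)  # Monte Carlo vs. closed form
```

The Monte Carlo estimate matches the closed form `plain + reg`, illustrating how marginalizing the dropout noise yields an explicit regularizer.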
no code implementations • NeurIPS 2018 • Md Enayat Ullah, Poorya Mianjy, Teodor Vanislavov Marinov, Raman Arora
We study the statistical and computational aspects of kernel principal component analysis using random Fourier features and show that under mild assumptions, $O(\sqrt{n} \log n)$ features suffice to achieve $O(1/\epsilon^2)$ sample complexity.
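Random Fourier features (Rahimi and Recht) approximate a shift-invariant kernel by an explicit finite-dimensional feature map, so kernel PCA reduces to ordinary PCA on the features. A minimal sketch for the Gaussian kernel, with illustrative sizes:

```python
import numpy as np

rng = np.random.default_rng(2)

# Random Fourier features for the Gaussian kernel k(x, y) = exp(-||x-y||^2 / 2):
# z(x) = sqrt(2/D) * cos(Omega x + b), with rows of Omega ~ N(0, I) and
# b ~ Uniform[0, 2*pi), so that z(x) . z(y) ~= k(x, y).
# PCA on the features then approximates kernel PCA.  d, D, n are illustrative.
d, D, n = 3, 4000, 100
Omega = rng.normal(size=(D, d))
b = rng.uniform(0, 2 * np.pi, D)

def features(X):
    return np.sqrt(2.0 / D) * np.cos(X @ Omega.T + b)

X = rng.normal(size=(n, d))
Z = features(X)

# Kernel approximation quality: compare Z Z^T against the exact Gram matrix.
K_exact = np.exp(-((X[:, None] - X[None, :]) ** 2).sum(-1) / 2)
K_approx = Z @ Z.T
print(np.abs(K_exact - K_approx).max())

# Approximate kernel PCA: top principal directions of the centered features.
Zc = Z - Z.mean(0)
_, s, Vt = np.linalg.svd(Zc, full_matrices=False)
components = Vt[:5]          # top-5 principal directions in feature space
scores = Zc @ components.T   # embedding of the data
```

The feature count `D` trades computation against approximation error; the paper's point is that far fewer features than the naive analysis suggests already preserve the PCA solution.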
no code implementations • ICML 2018 • Poorya Mianjy, Raman Arora
We revisit convex relaxation based methods for stochastic optimization of principal component analysis (PCA).
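One standard convex relaxation replaces the rank-k projection in PCA with the Fantope {M : 0 ⪯ M ⪯ I, tr M = k}, over which the objective ⟨C, M⟩ is linear. A projected-gradient sketch, where the sizes, step size, and bisection-based projection are illustrative rather than the paper's algorithm:

```python
import numpy as np

rng = np.random.default_rng(4)

# Convex relaxation of k-PCA: maximize <C, M> over the Fantope
# {M : 0 <= M <= I, tr M = k}, whose extreme points are rank-k projections.
# Sizes and step size are illustrative.
d, k = 10, 3
A = rng.normal(size=(d, d))
C = A @ A.T / d                               # a covariance matrix

def fantope_project(M, k):
    # Euclidean projection: shift eigenvalues by theta, clip to [0, 1],
    # with theta chosen by bisection so the trace equals k.
    lam, U = np.linalg.eigh((M + M.T) / 2)
    lo, hi = lam.min() - 1.0, lam.max()
    for _ in range(60):
        theta = (lo + hi) / 2
        if np.clip(lam - theta, 0, 1).sum() > k:
            lo = theta
        else:
            hi = theta
    w = np.clip(lam - (lo + hi) / 2, 0, 1)
    return (U * w) @ U.T

M = np.zeros((d, d))
for t in range(200):
    M = fantope_project(M + 0.1 * C, k)       # gradient of <C, M> is C

# The solution should align with the top-k eigenspace of C.
lam, U = np.linalg.eigh(C)
Pk = U[:, -k:] @ U[:, -k:].T
print(np.trace(C @ M), np.trace(C @ Pk))
```

Because the objective is linear and the Fantope's extreme points are exactly the rank-k projections, the relaxation is tight: the iterate converges to the top-k eigenprojection whenever the k-th eigengap is positive.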
no code implementations • ICML 2018 • Teodor Vanislavov Marinov, Poorya Mianjy, Raman Arora
We study streaming algorithms for principal component analysis (PCA) in noisy settings.
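The classic streaming estimator of the top principal component is Oja's rule, which updates a unit vector one sample at a time. A sketch under an illustrative spiked-covariance model (not the paper's noise model):

```python
import numpy as np

rng = np.random.default_rng(3)

# Oja's rule: each sample updates w <- normalize(w + eta * x (x . w)).
# The spiked-covariance data model and step size are illustrative.
d, n, eta = 20, 50000, 0.01
v = np.zeros(d)
v[0] = 1.0                                   # true top direction

def sample():
    # x = sqrt(3) * z * v + noise, so Cov(x) = I + 3 v v^T (eigengap 3)
    return np.sqrt(3.0) * rng.normal() * v + rng.normal(size=d)

w = rng.normal(size=d)
w /= np.linalg.norm(w)
for _ in range(n):
    x = sample()
    w += eta * x * (x @ w)                   # rank-one stochastic update
    w /= np.linalg.norm(w)

print(abs(w @ v))                            # alignment with the true component
```

Each update touches only one sample and O(d) memory, which is what makes the method streaming; the noisy settings the paper studies perturb either the samples or the updates themselves.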
no code implementations • ICML 2018 • Poorya Mianjy, Raman Arora, Rene Vidal
Algorithmic approaches endow deep learning systems with implicit bias that helps them generalize even in over-parametrized settings.
no code implementations • NeurIPS 2017 • Raman Arora, Teodor V. Marinov, Poorya Mianjy, Nathan Srebro
We propose novel first-order stochastic approximation algorithms for canonical correlation analysis (CCA).
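The population quantity such stochastic algorithms approximate is the batch CCA solution: whiten each view, then take the SVD of the cross-covariance. A sketch with an illustrative shared-latent-variable model:

```python
import numpy as np

rng = np.random.default_rng(5)

# Batch CCA: canonical correlations are the singular values of
# Cxx^{-1/2} Cxy Cyy^{-1/2}.  The data model below (a shared latent
# signal plus noise) and all sizes are illustrative.
n, dx, dy, k = 5000, 6, 4, 2
z = rng.normal(size=(n, k))                   # shared latent variables
X = z @ rng.normal(size=(k, dx)) + 0.5 * rng.normal(size=(n, dx))
Y = z @ rng.normal(size=(k, dy)) + 0.5 * rng.normal(size=(n, dy))

X = X - X.mean(0)
Y = Y - Y.mean(0)
Cxx = X.T @ X / n
Cyy = Y.T @ Y / n
Cxy = X.T @ Y / n

def inv_sqrt(C):
    # Symmetric inverse square root via the eigendecomposition.
    lam, U = np.linalg.eigh(C)
    return (U / np.sqrt(lam)) @ U.T

T = inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy)
U, s, Vt = np.linalg.svd(T)
print(s[:k])   # top canonical correlations: near 1 for a strong shared signal
```

The whitening steps cost O(d^3) on the full covariances, which is exactly what first-order stochastic approximation schemes avoid by updating the canonical directions sample by sample.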
no code implementations • ICLR 2018 • Raman Arora, Amitabh Basu, Poorya Mianjy, Anirbit Mukherjee
In this paper we investigate the family of functions representable by deep neural networks (DNN) with rectified linear units (ReLU).
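A minimal instance of such representability: max(a, b) = relu(a - b) + b is itself a one-hidden-layer ReLU network, with the weights written out explicitly below.

```python
import numpy as np

# ReLU networks compute continuous piecewise-linear functions; the simplest
# nontrivial example is max(a, b) = relu(a - b) + b as a one-hidden-layer net.
relu = lambda t: np.maximum(t, 0.0)

def max_net(a, b):
    # hidden layer: h1 = relu(a - b), h2 = relu(b), h3 = relu(-b)
    # output: h1 + h2 - h3 = relu(a - b) + b = max(a, b)
    h = relu(np.array([a - b, b, -b]))
    return h @ np.array([1.0, 1.0, -1.0])

print(max_net(3.0, 5.0), max_net(2.0, -7.0))  # prints 5.0 2.0
```

The pair h2 - h3 = relu(b) - relu(-b) reconstructs the identity map on b, which is why two extra hidden units appear even though the formula uses b directly.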