Search Results for author: Poorya Mianjy

Found 11 papers, 2 with code

Adversarial Robustness is at Odds with Lazy Training

no code implementations 18 Jun 2022 Yunjuan Wang, Enayat Ullah, Poorya Mianjy, Raman Arora

Recent works show that adversarial examples exist for random neural networks [Daniely and Schacham, 2020] and that these examples can be found using a single step of gradient ascent [Bubeck et al., 2021].

Adversarial Robustness · Learning Theory
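The single-step gradient-ascent attack referenced in the abstract can be made concrete. Below is a minimal sketch on a randomly initialized two-layer ReLU network, assuming a margin-style loss; the network sizes and the step size eta are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random two-layer ReLU network: f(x) = a^T relu(W x).
d, width = 20, 100
W = rng.normal(size=(width, d)) / np.sqrt(d)
a = rng.choice([-1.0, 1.0], size=width) / np.sqrt(width)

def f(x):
    return a @ np.maximum(W @ x, 0.0)

def grad_f(x):
    # df/dx = W^T (a * 1[W x > 0]).
    mask = (W @ x > 0).astype(float)
    return W.T @ (a * mask)

x = rng.normal(size=d)
y = np.sign(f(x))  # use the network's own prediction as the label

# One step of normalized gradient ascent on the loss -y * f(x):
eta = 0.5
g = grad_f(x)
x_adv = x - eta * y * g / np.linalg.norm(g)

print("clean margin:    ", y * f(x))
print("perturbed margin:", y * f(x_adv))  # typically much smaller or negative
```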

On Convergence and Generalization of Dropout Training

no code implementations NeurIPS 2020 Poorya Mianjy, Raman Arora

We study dropout in two-layer neural networks with rectified linear unit (ReLU) activations.
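A minimal sketch of the setting studied here, assuming standard inverted dropout applied to the hidden layer of a two-layer ReLU network trained with SGD on squared loss; the sizes, retain probability p, and learning rate are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
d, width, p, lr = 10, 64, 0.5, 0.1             # p = retain probability

W = rng.normal(size=(width, d)) / np.sqrt(d)   # first layer (trained)
a = rng.normal(size=width) / np.sqrt(width)    # second layer (fixed here)

def dropout_sgd_step(x, y):
    global W
    h = np.maximum(W @ x, 0.0)                 # ReLU hidden units
    b = rng.binomial(1, p, size=width) / p     # inverted-dropout mask
    y_hat = a @ (b * h)
    # Squared-loss gradient w.r.t. W, flowing only through retained units.
    grad_h = (y_hat - y) * (a * b) * (h > 0)
    W -= lr * np.outer(grad_h, x)
    return 0.5 * (y_hat - y) ** 2

x, y = rng.normal(size=d), 1.0
for _ in range(100):
    loss = dropout_sgd_step(x, y)
print("final dropout loss:", loss)
```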

Dropout: Explicit Forms and Capacity Control

no code implementations ICLR 2020 Raman Arora, Peter Bartlett, Poorya Mianjy, Nathan Srebro

In deep learning, we show that the data-dependent regularizer due to dropout directly controls the Rademacher complexity of the underlying class of deep neural networks.

BIG-bench Machine Learning · Matrix Completion

On Dropout and Nuclear Norm Regularization

1 code implementation 28 May 2019 Poorya Mianjy, Raman Arora

We give a formal and complete characterization of the explicit regularizer induced by dropout in deep linear networks with squared loss.
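In the shallow (single-hidden-layer) linear case, the explicit regularizer has a simple closed form: the expected dropout objective equals the squared loss plus (1-p)/p * sum_j u_j^2 (v_j^T x)^2, where p is the retain probability. The sketch below checks this identity by Monte Carlo; variable names are illustrative, and the paper's full characterization covers deeper linear networks.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, p = 5, 8, 0.5                      # input dim, width, retain prob

U = rng.normal(size=k)                   # output weights u_j
V = rng.normal(size=(k, d))              # hidden weights v_j
x = rng.normal(size=d)
y = 1.0

h = V @ x                                # linear hidden layer
clean = (y - U @ h) ** 2

# Explicit regularizer for the shallow linear case:
# (1 - p) / p * sum_j u_j^2 (v_j^T x)^2.
reg = (1 - p) / p * np.sum(U**2 * h**2)

# Monte Carlo estimate of the expected (inverted-)dropout loss.
B = rng.binomial(1, p, size=(200000, k)) / p
mc = np.mean((y - B @ (U * h)) ** 2)

print("clean + regularizer:     ", clean + reg)
print("Monte Carlo dropout loss:", mc)   # the two should agree closely
```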

Streaming Kernel PCA with $\tilde{O}(\sqrt{n})$ Random Features

no code implementations NeurIPS 2018 Md Enayat Ullah, Poorya Mianjy, Teodor Vanislavov Marinov, Raman Arora

We study the statistical and computational aspects of kernel principal component analysis using random Fourier features and show that under mild assumptions, $O(\sqrt{n} \log n)$ features suffice to achieve $O(1/\epsilon^2)$ sample complexity.

Streaming Kernel PCA with $\tilde{O}(\sqrt{n})$ Random Features

1 code implementation 2 Aug 2018 Enayat Ullah, Poorya Mianjy, Teodor V. Marinov, Raman Arora

We study the statistical and computational aspects of kernel principal component analysis using random Fourier features and show that under mild assumptions, $O(\sqrt{n} \log n)$ features suffice to achieve $O(1/\epsilon^2)$ sample complexity.
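A minimal sketch of the underlying construction: random Fourier features for the Gaussian kernel (the standard Rahimi-Recht feature map), followed by linear PCA in feature space as a proxy for kernel PCA. The streaming algorithm and the $\tilde{O}(\sqrt{n})$ analysis are the paper's contribution and are not reproduced here; all sizes below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, D, sigma = 500, 10, 200, 1.0   # samples, input dim, num features, bandwidth

X = rng.normal(size=(n, d))

# Random Fourier features for k(x, y) = exp(-||x - y||^2 / (2 sigma^2)):
# z(x) = sqrt(2 / D) * cos(W x + b), W ~ N(0, I / sigma^2), b ~ U[0, 2 pi].
W = rng.normal(scale=1.0 / sigma, size=(D, d))
b = rng.uniform(0, 2 * np.pi, size=D)
Z = np.sqrt(2.0 / D) * np.cos(X @ W.T + b)

# Linear PCA in feature space approximates kernel PCA.
Zc = Z - Z.mean(axis=0)
_, S, Vt = np.linalg.svd(Zc, full_matrices=False)
top_components = Vt[:5]              # leading approximate kernel principal directions

# Sanity check: Z Z^T approximates the Gaussian kernel matrix.
K = np.exp(-((X[:, None] - X[None, :]) ** 2).sum(-1) / (2 * sigma**2))
print("max kernel approximation error:", np.abs(Z @ Z.T - K).max())
```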

On the Implicit Bias of Dropout

no code implementations ICML 2018 Poorya Mianjy, Raman Arora, Rene Vidal

Algorithmic approaches endow deep learning systems with implicit bias that helps them generalize even in over-parametrized settings.

Stochastic Approximation for Canonical Correlation Analysis

no code implementations NeurIPS 2017 Raman Arora, Teodor V. Marinov, Poorya Mianjy, Nathan Srebro

We propose novel first-order stochastic approximation algorithms for canonical correlation analysis (CCA).
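For reference, a batch solution of the CCA objective that such stochastic algorithms target, computed by whitening the two views and taking an SVD of the whitened cross-covariance. This is a standard baseline, not the paper's streaming method; the data-generating process and the regularizer eps are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n, dx, dy, eps = 1000, 8, 6, 1e-6

# Two correlated views driven by a shared latent signal.
Z = rng.normal(size=(n, 4))
X = Z @ rng.normal(size=(4, dx)) + 0.1 * rng.normal(size=(n, dx))
Y = Z @ rng.normal(size=(4, dy)) + 0.1 * rng.normal(size=(n, dy))
X, Y = X - X.mean(0), Y - Y.mean(0)

Cxx = X.T @ X / n + eps * np.eye(dx)
Cyy = Y.T @ Y / n + eps * np.eye(dy)
Cxy = X.T @ Y / n

def inv_sqrt(C):
    w, V = np.linalg.eigh(C)
    return V @ np.diag(w ** -0.5) @ V.T

# CCA: top singular vectors of the whitened cross-covariance matrix.
T = inv_sqrt(Cxx) @ Cxy @ inv_sqrt(Cyy)
U, S, Vt = np.linalg.svd(T)
u, v = inv_sqrt(Cxx) @ U[:, 0], inv_sqrt(Cyy) @ Vt[0]

corr = (X @ u) @ (Y @ v) / (np.linalg.norm(X @ u) * np.linalg.norm(Y @ v))
print("top canonical correlation:", S[0], " empirical check:", corr)
```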

Understanding Deep Neural Networks with Rectified Linear Units

no code implementations ICLR 2018 Raman Arora, Amitabh Basu, Poorya Mianjy, Anirbit Mukherjee

In this paper we investigate the family of functions representable by deep neural networks (DNN) with rectified linear units (ReLU).
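A known starting point for this line of work is that ReLU networks compute exactly the continuous piecewise-linear functions. A minimal illustration, representing the piecewise-linear "hat" function exactly with a single hidden layer (a textbook construction, not code from the paper):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def hat(x):
    # One-hidden-layer ReLU network computing the piecewise-linear
    # "hat" function: 2x on [0, 1/2], 2 - 2x on [1/2, 1], 0 elsewhere.
    return 2 * relu(x) - 4 * relu(x - 0.5) + 2 * relu(x - 1.0)

xs = np.linspace(-0.5, 1.5, 9)
print(np.column_stack([xs, hat(xs)]))
```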
