Search Results for author: Omri Weinstein

Found 3 papers, 1 paper with code

Training Overparametrized Neural Networks in Sublinear Time

no code implementations · 9 Aug 2022 · Yichuan Deng, Hang Hu, Zhao Song, Omri Weinstein, Danyang Zhuo

The success of deep learning comes at a tremendous computational and energy cost, and the scalability of training massively overparametrized neural networks is becoming a real barrier to the progress of artificial intelligence (AI).

The Complexity of Dynamic Least-Squares Regression

1 code implementation · 1 Jan 2022 · Shunhua Jiang, Binghui Peng, Omri Weinstein

We settle the complexity of dynamic least-squares regression (LSR), where rows and labels $(\mathbf{A}^{(t)}, \mathbf{b}^{(t)})$ can be adaptively inserted and/or deleted, and the goal is to efficiently maintain an $\epsilon$-approximate solution to $\min_{\mathbf{x}^{(t)}} \| \mathbf{A}^{(t)} \mathbf{x}^{(t)} - \mathbf{b}^{(t)} \|_2$ for all $t\in [T]$.

regression
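The dynamic LSR problem described above — maintaining an approximate minimizer of $\| \mathbf{A}^{(t)} \mathbf{x} - \mathbf{b}^{(t)} \|_2$ under row insertions and deletions — can be illustrated with a naive baseline that maintains the normal equations. This is a hedged sketch only, not the paper's algorithm (the paper studies much faster adaptive algorithms and matching lower bounds); the class name `DynamicLSR` is illustrative.

```python
import numpy as np

class DynamicLSR:
    """Naive dynamic least-squares baseline: maintain the normal
    equations A^T A and A^T b under row insertions/deletions via
    rank-one updates, and re-solve on each query. Illustrative only."""

    def __init__(self, d):
        self.AtA = np.zeros((d, d))
        self.Atb = np.zeros(d)

    def insert(self, a, b):
        # Rank-one update: A^T A += a a^T,  A^T b += b * a
        self.AtA += np.outer(a, a)
        self.Atb += b * a

    def delete(self, a, b):
        # Rank-one downdate: reverses the insertion of row (a, b)
        self.AtA -= np.outer(a, a)
        self.Atb -= b * a

    def solve(self):
        # Minimum-norm solution of the normal equations, i.e. the
        # minimizer of ||A x - b||_2 for the current (A, b)
        return np.linalg.pinv(self.AtA) @ self.Atb
```

Each update costs $O(d^2)$ and each solve $O(d^\omega)$; the point of the paper is to characterize when one can do substantially better (or provably cannot) against adaptive update sequences.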

Training (Overparametrized) Neural Networks in Near-Linear Time

no code implementations · 20 Jun 2020 · Jan van den Brand, Binghui Peng, Zhao Song, Omri Weinstein

The slow convergence rate and pathological curvature issues of first-order gradient methods for training deep neural networks have initiated an ongoing effort to develop faster $\mathit{second}$-$\mathit{order}$ optimization algorithms beyond SGD, without compromising the generalization error.

Dimensionality Reduction · regression
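The curvature issue motivating second-order methods can be seen on a toy example. The sketch below is generic and assumed (it is not the paper's near-linear-time algorithm): on an ill-conditioned quadratic, ten gradient-descent steps with a stable step size barely progress along the flat direction, while a single Newton step, which preconditions the gradient by the inverse Hessian, lands on the minimizer exactly.

```python
import numpy as np

# Toy ill-conditioned quadratic f(x) = 0.5 x^T H x - c^T x
H = np.diag([1.0, 100.0])       # Hessian with condition number 100
c = np.array([1.0, 1.0])
grad = lambda x: H @ x - c      # gradient of f
x_star = np.linalg.solve(H, c)  # exact minimizer

# First-order: step size limited by the largest curvature (1/100)
x_gd = np.zeros(2)
for _ in range(10):
    x_gd -= 0.01 * grad(x_gd)

# Second-order: one Newton step x - H^{-1} grad(x) solves the quadratic
x0 = np.zeros(2)
x_newton = x0 - np.linalg.solve(H, grad(x0))
```

The Newton step recovers `x_star` in one iteration, while gradient descent is still far off along the low-curvature coordinate; the cost of second-order methods is forming and inverting (or approximating) the Hessian, which is the bottleneck such papers aim to reduce.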
