Search Results for author: Chenwei Wu

Found 5 papers, 2 papers with code

Beyond Lazy Training for Over-parameterized Tensor Decomposition

no code implementations • NeurIPS 2020 • Xiang Wang, Chenwei Wu, Jason D. Lee, Tengyu Ma, Rong Ge

We show that in a lazy training regime (similar to the NTK regime for neural networks) one needs at least $m = \Omega(d^{l-1})$ components, while a variant of gradient descent can find an approximate tensor when $m = O^*(r^{2.5l}\log d)$; a toy sketch of this setup follows the entry below.

Tensor Decomposition
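
As a toy sketch of this setup (the dimensions, hyperparameters, and use of plain gradient descent rather than the paper's modified variant are illustrative assumptions, not the paper's exact algorithm), the snippet below fits a symmetric order-3 tensor of rank r using m > r learned components:

```python
import numpy as np

# Over-parameterized symmetric tensor decomposition, minimal sketch:
# decompose T = sum_{i<=r} a_i (x) a_i (x) a_i with m > r components.
rng = np.random.default_rng(0)
d, r, m = 10, 3, 20                                  # dim, true rank, #components

A = rng.standard_normal((r, d))
A /= np.linalg.norm(A, axis=1, keepdims=True)        # unit-norm ground truth
T = np.einsum('ia,ib,ic->abc', A, A, A)              # target tensor

W = 0.3 * rng.standard_normal((m, d)) / np.sqrt(d)   # small random init
lr = 0.02
for _ in range(4000):
    R = np.einsum('ia,ib,ic->abc', W, W, W) - T      # residual tensor
    # gradient of 0.5 * ||residual||_F^2 w.r.t. symmetric rank-1 components
    W -= lr * 3.0 * np.einsum('abc,ib,ic->ia', R, W, W)

R = np.einsum('ia,ib,ic->abc', W, W, W) - T
print("reconstruction error:", np.linalg.norm(R))    # should be near zero
```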

Secure Data Sharing With Flow Model

1 code implementation • 24 Sep 2020 • Chenwei Wu, Chenzhuang Du, Yang Yuan

In the classical multi-party computation setting, multiple parties jointly compute a function without revealing their own input data; a toy illustration follows the entry below.

Image Classification • Privacy Preserving Deep Learning
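
The snippet below illustrates that classical setting with additive secret sharing over a prime field; this is generic textbook MPC machinery, not the paper's flow-model protocol. Three parties learn the sum of their private inputs, while each individual input stays hidden behind uniformly random shares:

```python
import secrets

# Additive secret sharing: jointly compute a sum without revealing inputs.
P = 2**61 - 1                              # prime field modulus

def share(x, n=3):
    """Split x into n additive shares that sum to x modulo P."""
    parts = [secrets.randbelow(P) for _ in range(n - 1)]
    parts.append((x - sum(parts)) % P)
    return parts

inputs = [12, 30, 7]                       # each party's private value
shared = [share(x) for x in inputs]        # party i sends its j-th share to party j

# each party j locally adds the j-th share of every input ...
partials = [sum(s[j] for s in shared) % P for j in range(3)]
# ... and publishing only these partial sums reveals the total, nothing more
print("joint sum:", sum(partials) % P)     # 49
```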

Guarantees for Tuning the Step Size using a Learning-to-Learn Approach

1 code implementation • 30 Jun 2020 • Xiang Wang, Shuai Yuan, Chenwei Wu, Rong Ge

Solving this problem with a learning-to-learn approach -- running meta-gradient descent on a meta-objective based on the trajectory that the optimizer generates -- was recently shown to be effective.
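
A minimal sketch of that idea follows; the diagonal quadratic tasks, the single tunable step size, and the closed-form unrolled gradient (in place of automatic differentiation through the trajectory) are assumptions for illustration:

```python
import numpy as np

# Meta-gradient descent on the step size eta: unroll k GD steps on a
# sampled quadratic task, then take a gradient step on eta itself.
rng = np.random.default_rng(0)
k, meta_lr, eta = 10, 1e-4, 0.01           # unroll length, meta step, init

for _ in range(2000):
    lam = rng.uniform(0.5, 2.0, size=5)    # sample task curvatures
    x0 = rng.standard_normal(5)            # and an initial point
    # k GD steps on f(x) = 0.5 * sum_i lam_i * x_i^2 give
    # x_k = (1 - eta * lam)^k * x0 coordinate-wise, so the meta-objective
    # L(eta) = f(x_k) and its derivative in eta are available in closed form
    dL_deta = -k * np.sum(lam**2 * x0**2 * (1 - eta * lam) ** (2 * k - 1))
    eta -= meta_lr * dL_deta               # meta-gradient step on eta

print("learned step size:", eta)
```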

No Spurious Local Minima in a Two Hidden Unit ReLU Network

no code implementations • ICLR 2018 • Chenwei Wu, Jiajun Luo, Jason D. Lee

Deep learning models can be efficiently optimized via stochastic gradient descent, but there is little theoretical evidence to support this.
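
For concreteness, here is a sketch of the architecture named in the title; the Gaussian inputs, the same-architecture teacher, and the per-sample SGD details are assumptions for illustration:

```python
import numpy as np

# Two-hidden-unit ReLU network f(x) = relu(w1 . x) + relu(w2 . x),
# trained with plain per-sample SGD on the squared loss against a teacher.
rng = np.random.default_rng(0)
d, lr = 5, 0.05
W_true = rng.standard_normal((2, d))       # teacher weights
W = rng.standard_normal((2, d))            # student initialization

f = lambda V, x: sum(max(v @ x, 0.0) for v in V)

for _ in range(20000):
    x = rng.standard_normal(d)             # fresh Gaussian sample
    err = f(W, x) - f(W_true, x)
    for j in range(2):                     # subgradient of 0.5 * err**2
        W[j] -= lr * err * float(W[j] @ x > 0) * x

# compare up to the permutation symmetry of the two hidden units
gap = min(np.linalg.norm(W - W_true), np.linalg.norm(W[::-1] - W_true))
print("distance to teacher (up to unit swap):", gap)
```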
