no code implementations • NeurIPS 2019 • Lei Wu, Qingcan Wang, Chao Ma
We analyze the global convergence of gradient descent for deep linear residual networks by proposing a new initialization: zero-asymmetric (ZAS) initialization.
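The idea behind ZAS can be illustrated with a minimal sketch: in a deep linear residual network h_{l+1} = h_l + A_l h_l, initializing every residual block's weight matrix A_l to zero makes the network an exact identity map at initialization. (The architecture and helper names below are illustrative, not the paper's code.)

```python
import numpy as np

def init_zas(depth, width):
    """Zero-asymmetric (ZAS) style initialization (sketch): every residual
    block's weight matrix starts at exactly zero."""
    return [np.zeros((width, width)) for _ in range(depth)]

def forward(weights, x):
    """Deep linear residual network: h_{l+1} = h_l + A_l h_l."""
    h = x
    for A in weights:
        h = h + A @ h
    return h

weights = init_zas(depth=10, width=4)
x = np.arange(4.0)
print(np.allclose(forward(weights, x), x))  # network is the identity at init
```

Starting from the identity map gives gradient descent a well-conditioned starting point, which is what the global-convergence analysis exploits.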
no code implementations • 10 Apr 2019 • Weinan E, Chao Ma, Qingcan Wang, Lei Wu
It is further shown that the GD path stays uniformly close to the functions given by the related random feature model.
no code implementations • 6 Mar 2019 • Weinan E, Chao Ma, Qingcan Wang
A key ingredient of the regularized model is a new path norm, the weighted path norm, used as the regularization term.
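A path norm of this kind can be sketched as follows: sum over all input-to-output paths the product of the absolute weights along the path, with each nonlinearity crossed contributing an extra constant factor (here `c=3.0`, a hypothetical choice). The closed-form computation via entrywise-absolute matrix products below is an illustration of the general idea, not the paper's exact definition.

```python
import numpy as np

def weighted_path_norm(layers, c=3.0):
    """Sketch of a weighted path norm: sum over all input-output paths of
    the product of absolute weights, with each hidden-layer nonlinearity on
    the path contributing a factor c. Computed in closed form by chaining
    entrywise-absolute weight matrices."""
    v = np.abs(layers[-1])            # output-layer weights, shape (1, n)
    for W in reversed(layers[:-1]):
        v = c * v @ np.abs(W)         # each crossed activation multiplies by c
    return float(np.sum(v))

# Tiny hand-checkable example: one input, two hidden units, one output.
W1 = np.array([[1.0], [-2.0]])        # input -> hidden
w2 = np.array([[3.0, 0.5]])           # hidden -> output
# Two paths: 3*|3|*|1| = 9 and 3*|0.5|*|-2| = 3, total 12.
print(weighted_path_norm([W1, w2]))   # 12.0
```

Weighting deeper paths more heavily penalizes functions that rely on many compositions of nonlinearities, which is what makes such a norm usable as a regularizer in a priori estimates.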
no code implementations • 1 Jul 2018 • Weinan E, Qingcan Wang
We prove that for analytic functions in low dimensions, the deep neural network approximation converges at an exponential rate.
no code implementations • ICLR 2019 • Ruying Bao, Sihang Liang, Qingcan Wang
In this paper, we propose a defense method, Featurized Bidirectional Generative Adversarial Networks (FBGAN), which extracts the semantic features of the input and filters out non-semantic perturbations.
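The filtering step can be sketched as classifying the reconstruction G(E(x)) instead of x itself. Everything below is a hypothetical stand-in: the paper trains the encoder E and generator G adversarially as a bidirectional GAN, while here a fixed linear subspace plays the role of the learned "semantic" features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in: a 2-D "semantic" subspace inside 8-D input space.
P = rng.normal(size=(8, 2))     # generator's basis: semantic code -> input

def encoder(x):
    """E(x): extract the semantic code (least-squares projection)."""
    return np.linalg.lstsq(P, x, rcond=None)[0]

def generator(z):
    """G(z): reconstruct an input from its semantic code."""
    return P @ z

def purify(x):
    """FBGAN-style filtering (sketch): feed G(E(x)) to the classifier,
    discarding perturbation components outside the semantic subspace."""
    return generator(encoder(x))

z = np.array([1.0, -2.0])
clean = P @ z                               # input lying in the semantic subspace
noisy = clean + 0.1 * rng.normal(size=8)    # adversarial-style perturbation
print(np.linalg.norm(purify(noisy) - clean) < np.linalg.norm(noisy - clean))
```

In this toy setting, purification removes exactly the component of the perturbation orthogonal to the semantic subspace, which mirrors the paper's intuition of keeping semantics and dropping non-semantic noise.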