Search Results for author: Qingcan Wang

Found 5 papers, 0 papers with code

Global Convergence of Gradient Descent for Deep Linear Residual Networks

no code implementations · NeurIPS 2019 · Lei Wu, Qingcan Wang, Chao Ma

We analyze the global convergence of gradient descent for deep linear residual networks by proposing a new initialization: zero-asymmetric (ZAS) initialization.
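As an illustration of the idea behind this kind of initialization: in a deep linear residual network, setting every residual-block weight matrix to zero makes the stacked blocks act as the identity map at initialization, so the network reduces to its output layer alone. The sketch below is an illustrative assumption about the scheme, not the authors' exact ZAS construction (the paper gives the precise definition).

```python
import numpy as np

def init_zero_blocks(depth, dim):
    # Illustrative sketch: every residual block starts at zero, so the
    # stacked blocks reduce to the identity at initialization.
    # (Assumption for illustration; see the paper for the exact ZAS scheme.)
    return [np.zeros((dim, dim)) for _ in range(depth)]

def forward(x, blocks, w_out):
    h = x
    for U in blocks:      # linear residual update: h <- h + U @ h
        h = h + U @ h
    return w_out @ h      # final linear read-out layer

rng = np.random.default_rng(0)
blocks = init_zero_blocks(depth=8, dim=4)
w_out = rng.standard_normal((1, 4))
x = rng.standard_normal(4)

# With all-zero residual blocks, the network output equals w_out @ x.
assert np.allclose(forward(x, blocks, w_out), w_out @ x)
```

Starting from the identity in this way keeps the early optimization landscape well behaved, which is the kind of property a global-convergence analysis can exploit.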

Analysis of the Gradient Descent Algorithm for a Deep Neural Network Model with Skip-connections

no code implementations · 10 Apr 2019 · Weinan E, Chao Ma, Qingcan Wang, Lei Wu

It is also shown that the GD path stays uniformly close to the functions given by the related random feature model.

A Priori Estimates of the Population Risk for Residual Networks

no code implementations · 6 Mar 2019 · Weinan E, Chao Ma, Qingcan Wang

An important part of the regularized model is the use of a new path norm, called the weighted path norm, as the regularization term.

Exponential Convergence of the Deep Neural Network Approximation for Analytic Functions

no code implementations · 1 Jul 2018 · Weinan E, Qingcan Wang

We prove that for analytic functions in low dimension, the convergence rate of the deep neural network approximation is exponential.

Featurized Bidirectional GAN: Adversarial Defense via Adversarially Learned Semantic Inference

no code implementations · ICLR 2019 · Ruying Bao, Sihang Liang, Qingcan Wang

In this paper, we propose a defense method, Featurized Bidirectional Generative Adversarial Networks (FBGAN), to extract the semantic features of the input and filter out the non-semantic perturbations.

Task: Adversarial Defense
