no code implementations • 25 Mar 2024 • Yunfei Yang, Han Feng, Ding-Xuan Zhou
Our second result provides a new analysis of the covering number of feed-forward neural networks, with CNNs as a special case.
no code implementations • 14 Jun 2023 • Yunfei Yang, Ding-Xuan Zhou
It is shown that over-parameterized neural networks can achieve minimax optimal rates of convergence (up to logarithmic factors) for learning functions from certain smooth function classes, if the weights are suitably constrained or regularized.
no code implementations • 4 Apr 2023 • Yunfei Yang, Ding-Xuan Zhou
It is also proven that over-parameterized (deep or shallow) neural networks can achieve nearly optimal rates for nonparametric regression.
no code implementations • 5 Feb 2023 • Yuling Jiao, Yanming Lai, Yang Wang, Haizhao Yang, Yunfei Yang
This paper analyzes the convergence rate of a deep Galerkin method for the weak solution (DGMW) of second-order elliptic partial differential equations on $\mathbb{R}^d$ with Dirichlet, Neumann, and Robin boundary conditions, respectively.
no code implementations • 12 Jun 2022 • Zhen Li, Yunfei Yang
We study the uniform approximation of echo state networks with randomly generated internal weights.
no code implementations • 25 May 2022 • Yunfei Yang
We study how well generative adversarial networks (GANs) learn probability distributions from finite samples by analyzing the convergence rates of these models.
no code implementations • 24 Jan 2022 • Yuling Jiao, Yang Wang, Yunfei Yang
This paper studies the approximation capacity of ReLU neural networks with norm constraint on the weights.
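One common way to impose such a constraint on a two-layer ReLU network is through a variation-type norm on the weights; the sketch below (my own illustration, not the paper's construction) computes this norm and rescales the outer weights to satisfy a given budget.

```python
import numpy as np

# Two-layer ReLU network f(x) = sum_j a_j * relu(w_j . x + b_j).
# A standard norm constraint bounds sum_j |a_j| * ||(w_j, b_j)||_2,
# which controls approximation and generalization bounds of the kind
# studied for norm-constrained networks.
rng = np.random.default_rng(1)
d, width = 3, 50
W = rng.normal(size=(width, d))
b = rng.normal(size=width)
a = rng.normal(size=width) / width

def relu(z):
    return np.maximum(z, 0.0)

def f(x):
    return a @ relu(W @ x + b)

def variation_norm(a, W, b):
    # sum_j |a_j| * ||(w_j, b_j)||_2
    return np.sum(np.abs(a) * np.sqrt(np.sum(W**2, axis=1) + b**2))

def project(a, W, b, budget):
    """Rescale the outer weights so the norm constraint holds."""
    nrm = variation_norm(a, W, b)
    return (a * budget / nrm if nrm > budget else a), W, b

a, W, b = project(a, W, b, budget=2.0)
print(variation_norm(a, W, b))  # <= 2.0
```

The approximation question is then: which functions can networks of arbitrary width approximate, and at what rate, when this norm (rather than the width) is the constrained resource.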
1 code implementation • 22 Nov 2021 • Peizhuo Lv, Hualong Ma, Jiachen Zhou, Ruigang Liang, Kai Chen, Shengzhi Zhang, Yunfei Yang
In this paper, we propose DBIA, a novel data-free backdoor attack against CV-oriented transformer networks, which leverages the inherent attention mechanism of transformers to generate triggers and injects the backdoor using a poisoned surrogate dataset.
no code implementations • NeurIPS 2021 • Shiao Liu, Yunfei Yang, Jian Huang, Yuling Jiao, Yang Wang
Our results are also applicable to the Wasserstein bidirectional GAN if the target distribution is assumed to have bounded support.
no code implementations • 16 Jun 2021 • Yunfei Yang, Haizhang Zhang
Specifically, we show that one can recover a band-limited function by Gaussian or hyper-Gaussian regularized nonuniform sampling series with an exponential convergence rate.
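The idea can be illustrated in the simpler uniform-sampling analogue (the paper treats the harder nonuniform case): a function band-limited to a strict subinterval of [-π, π] is recovered from its integer samples, with a Gaussian factor localizing the classical sinc series to finitely many terms. The sketch below is my own illustration, with parameter choices picked for the demo rather than taken from the paper.

```python
import numpy as np

def f(t):
    # Band-limited to [-pi/2, pi/2]; np.sinc(x) = sin(pi x)/(pi x)
    return np.sinc(t / 2.0)

def gaussian_series(t, n=20, sigma=3.0):
    # Gaussian-regularized sampling series: only the 2n+1 samples
    # nearest t contribute significantly, thanks to the Gaussian factor.
    k = np.arange(np.floor(t) - n, np.floor(t) + n + 1)
    return np.sum(f(k) * np.sinc(t - k) * np.exp(-(t - k) ** 2 / (2 * sigma**2)))

ts = np.linspace(-3, 3, 61)
err = max(abs(gaussian_series(t) - f(t)) for t in ts)
print(err)  # small; the theory gives exponential decay in the number of samples
```

The exponential convergence rate shown in the paper comes from balancing the Gaussian truncation error against the distortion the Gaussian factor introduces, with the Gaussian width grown suitably with the number of samples used.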
no code implementations • 27 May 2021 • Jian Huang, Yuling Jiao, Zhen Li, Shiao Liu, Yang Wang, Yunfei Yang
This paper studies how well generative adversarial networks (GANs) learn probability distributions from finite samples.
no code implementations • 29 Jan 2021 • Yunfei Yang, Zhen Li, Yang Wang
Furthermore, it is shown that the approximation error in Wasserstein distance grows at most linearly in the ambient dimension and that the approximation order depends only on the intrinsic dimension of the target distribution.
no code implementations • 25 May 2020 • Yunfei Yang, Zhen Li, Yang Wang
We also give lower bounds on the $L^p (1\le p \le \infty)$ approximation error for Sobolev spaces, which show that our construction of neural networks is asymptotically optimal up to a logarithmic factor.
1 code implementation • CVPR 2020 • Shaokai Ye, Kailu Wu, Mu Zhou, Yunfei Yang, Sia Huat Tan, Kaidi Xu, Jiebo Song, Chenglong Bao, Kaisheng Ma
Existing domain adaptation methods aim to learn features that generalize across domains.
Ranked #3 on Domain Adaptation on USPS-to-MNIST
no code implementations • ICLR 2019 • Shaokai Ye, Tianyun Zhang, Kaiqi Zhang, Jiayu Li, Kaidi Xu, Yunfei Yang, Fuxun Yu, Jian Tang, Makan Fardad, Sijia Liu, Xiang Chen, Xue Lin, Yanzhi Wang
Motivated by dynamic programming, the proposed method achieves extremely high pruning rates by composing partial prunings with moderate per-step rates.
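The compounding effect of partial prunings can be seen in a toy magnitude-pruning sketch (my own illustration, not the paper's ADMM-based method): several moderate steps multiply out to an aggressive overall sparsity.

```python
import numpy as np

def prune_smallest(w, rate):
    """Zero out the smallest-magnitude fraction `rate` of the remaining weights."""
    nz = np.flatnonzero(w)
    k = int(len(nz) * rate)
    if k == 0:
        return w
    idx = nz[np.argsort(np.abs(w[nz]))[:k]]
    w = w.copy()
    w[idx] = 0.0
    return w

rng = np.random.default_rng(0)
w = rng.normal(size=10_000)

# Three moderate partial prunings: 1 - (1 - 0.6)^3 = 93.6% overall sparsity.
for _ in range(3):
    w = prune_smallest(w, 0.6)
    # ... retraining / an ADMM update of the surviving weights would go here ...

sparsity = np.mean(w == 0.0)
print(sparsity)  # 0.936
```

Each step only removes a moderate fraction of what remains, which keeps the intermediate networks trainable, yet the composed rate matches a one-shot pruning that would be far more damaging.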