Search Results for author: Yunfei Yang

Found 15 papers, 2 papers with code

On the rates of convergence for learning with convolutional neural networks

no code implementations • 25 Mar 2024 • Yunfei Yang, Han Feng, Ding-Xuan Zhou

Our second result gives a new analysis of the covering number of feed-forward neural networks, with CNNs as a special case.

Binary Classification
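
For context, the covering number appearing in such bounds is the standard metric-entropy quantity (the paper's precise norm and network class may differ):

$\mathcal{N}(\epsilon, \mathcal{F}, \|\cdot\|) = \min \{ n : \exists f_1, \dots, f_n \text{ such that } \mathcal{F} \subseteq \bigcup_{i=1}^{n} \{ f : \|f - f_i\| \le \epsilon \} \}$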

Nonparametric regression using over-parameterized shallow ReLU neural networks

no code implementations • 14 Jun 2023 • Yunfei Yang, Ding-Xuan Zhou

It is shown that over-parameterized neural networks can achieve minimax optimal rates of convergence (up to logarithmic factors) for learning functions from certain smooth function classes, if the weights are suitably constrained or regularized.

regression
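
As a point of reference, the classical minimax benchmark for estimating a function of Hölder smoothness $\alpha$ on $[0,1]^d$ from $n$ samples is (stated for orientation; the paper's exact function classes may differ):

$\inf_{\hat{f}} \sup_{f} \mathbb{E} \| \hat{f} - f \|_{L^2}^2 \asymp n^{-2\alpha/(2\alpha+d)}$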

Optimal rates of approximation by shallow ReLU$^k$ neural networks and applications to nonparametric regression

no code implementations • 4 Apr 2023 • Yunfei Yang, Ding-Xuan Zhou

It is also proven that over-parameterized (deep or shallow) neural networks can achieve nearly optimal rates for nonparametric regression.

regression
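
For orientation, a shallow ReLU$^k$ network is a one-hidden-layer model of the form below, where $\sigma_k(t) = \max(t, 0)^k$ (notation illustrative; the paper's parameterization may differ in details):

$f(x) = \sum_{j=1}^{m} a_j \, \sigma_k(w_j^\top x + b_j)$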

Convergence Analysis of the Deep Galerkin Method for Weak Solutions

no code implementations • 5 Feb 2023 • Yuling Jiao, Yanming Lai, Yang Wang, Haizhao Yang, Yunfei Yang

This paper analyzes the convergence rate of a deep Galerkin method for weak solutions (DGMW) of second-order elliptic partial differential equations on $\mathbb{R}^d$ with Dirichlet, Neumann, and Robin boundary conditions, respectively.
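
As a model example of the weak formulation underlying such methods (the paper treats general second-order elliptic operators and all three boundary conditions), the Dirichlet problem $-\Delta u = f$ on a domain $\Omega$ asks for $u \in H_0^1(\Omega)$ such that

$\int_{\Omega} \nabla u \cdot \nabla v \, dx = \int_{\Omega} f v \, dx \quad \text{for all } v \in H_0^1(\Omega),$

and weak-form deep Galerkin methods typically parameterize the solution $u$ (and often the test function $v$) by neural networks, leading to a min-max training problem.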

Universality and approximation bounds for echo state networks with random weights

no code implementations • 12 Jun 2022 • Zhen Li, Yunfei Yang

We study the uniform approximation of echo state networks with randomly generated internal weights.
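
For context, an echo state network keeps its internal weights fixed at random values and trains only the linear readout; a standard formulation (notation illustrative) is

$x_t = \sigma(A x_{t-1} + C u_t), \qquad y_t = w^\top x_t,$

where $A$ and $C$ are randomly generated and only $w$ is learned, typically by least squares.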

Learning Distributions by Generative Adversarial Networks: Approximation and Generalization

no code implementations • 25 May 2022 • Yunfei Yang

We study how well generative adversarial networks (GANs) learn probability distributions from finite samples by analyzing the convergence rates of these models.

Generalization Bounds · Learning Theory
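
Analyses of this kind commonly measure the distance between the learned and target distributions by an integral probability metric (a standard device; the paper's exact discriminator class may differ):

$d_{\mathcal{F}}(\mu, \nu) = \sup_{f \in \mathcal{F}} \left( \mathbb{E}_{x \sim \mu}[f(x)] - \mathbb{E}_{x \sim \nu}[f(x)] \right),$

which recovers the Wasserstein-1 distance when $\mathcal{F}$ is the class of 1-Lipschitz functions.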

Approximation bounds for norm constrained neural networks with applications to regression and GANs

no code implementations • 24 Jan 2022 • Yuling Jiao, Yang Wang, Yunfei Yang

This paper studies the approximation capacity of ReLU neural networks with norm constraint on the weights.

regression
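
A typical norm constraint in this setting bounds the weights of a depth-$L$ ReLU network $f(x) = W_L \sigma(W_{L-1} \cdots \sigma(W_1 x))$ through a product of layer-wise norms, e.g. (illustrative; the paper's exact norm may differ)

$\prod_{l=1}^{L} \|W_l\| \le K.$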

DBIA: Data-free Backdoor Injection Attack against Transformer Networks

1 code implementation • 22 Nov 2021 • Peizhuo Lv, Hualong Ma, Jiachen Zhou, Ruigang Liang, Kai Chen, Shengzhi Zhang, Yunfei Yang

In this paper, we propose DBIA, a novel data-free backdoor attack against CV-oriented transformer networks, which leverages the inherent attention mechanism of transformers to generate triggers and injects the backdoor using a poisoned surrogate dataset.

Backdoor Attack · Image Classification +1
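
The key idea — optimizing a trigger so that the model's attention concentrates on it — can be sketched in a few lines. The following toy illustration is hypothetical, not the paper's implementation; it uses a random single-head attention layer as a stand-in for one block of a frozen vision transformer:

import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy single-head self-attention over patch embeddings
# (a stand-in for one block of a frozen vision transformer).
dim, num_patches = 16, 9
Wq = torch.randn(dim, dim) / dim ** 0.5
Wk = torch.randn(dim, dim) / dim ** 0.5
embed = torch.randn(num_patches, dim)   # frozen "clean" patch embeddings

trigger_idx = 0                          # patch position carrying the trigger
trigger = torch.zeros(dim, requires_grad=True)
opt = torch.optim.Adam([trigger], lr=0.1)

for step in range(200):
    x = embed.clone()
    x[trigger_idx] = x[trigger_idx] + trigger   # paste trigger into one patch
    scores = (x @ Wq) @ (x @ Wk).T / dim ** 0.5
    attn = F.softmax(scores, dim=-1)
    loss = -attn[:, trigger_idx].mean()         # make every patch attend to it
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"mean attention on trigger patch: {-loss.item():.3f}")

In the full attack, such a trigger would be optimized against the victim transformer's own attention maps and then implanted by fine-tuning on the poisoned surrogate dataset.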

Non-Asymptotic Error Bounds for Bidirectional GANs

no code implementations • NeurIPS 2021 • Shiao Liu, Yunfei Yang, Jian Huang, Yuling Jiao, Yang Wang

Our results are also applicable to the Wasserstein bidirectional GAN if the target distribution is assumed to have a bounded support.

Exponential Approximation of Band-limited Functions from Nonuniform Sampling by Regularization Methods

no code implementations • 16 Jun 2021 • Yunfei Yang, Haizhang Zhang

Specifically, we show that one can recover a band-limited function by Gaussian or hyper-Gaussian regularized nonuniform sampling series with an exponential convergence rate.
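
As a concrete illustration of this mechanism in the uniform-sampling case (the paper's setting is nonuniform sampling, with analogous regularization), the truncated Gaussian-regularized Shannon series

$(G_{\delta, n} f)(t) = \sum_{|t - k| \le n} f(k) \, \mathrm{sinc}(t - k) \, e^{-(t-k)^2 / (2\delta^2)}$

is known to converge to a band-limited $f$ at an exponential rate in $n$ when the width $\delta$ is chosen appropriately.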

An error analysis of generative adversarial networks for learning distributions

no code implementations • 27 May 2021 • Jian Huang, Yuling Jiao, Zhen Li, Shiao Liu, Yang Wang, Yunfei Yang

This paper studies how well generative adversarial networks (GANs) learn probability distributions from finite samples.

On the capacity of deep generative networks for approximating distributions

no code implementations • 29 Jan 2021 • Yunfei Yang, Zhen Li, Yang Wang

Furthermore, it is shown that the approximation error in Wasserstein distance grows at most linearly in the ambient dimension and that the approximation order depends only on the intrinsic dimension of the target distribution.
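
Here the relevant metric is the Wasserstein-1 distance between the generated distribution $\mu$ and the target $\nu$ (standard definition; the paper may consider more general orders):

$W_1(\mu, \nu) = \inf_{\gamma \in \Pi(\mu, \nu)} \int \|x - y\| \, d\gamma(x, y),$

where $\Pi(\mu, \nu)$ denotes the set of couplings of $\mu$ and $\nu$.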

Approximation in shift-invariant spaces with deep ReLU neural networks

no code implementations • 25 May 2020 • Yunfei Yang, Zhen Li, Yang Wang

We also give lower bounds on the $L^p$ ($1 \le p \le \infty$) approximation error for Sobolev spaces, which show that our construction of neural networks is asymptotically optimal up to a logarithmic factor.
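
For reference, the shift-invariant space generated by a function $\phi$ consists of its integer shifts (a standard definition; the paper's exact setting may add technical conditions):

$V(\phi) = \left\{ \sum_{k \in \mathbb{Z}^d} c_k \, \phi(\cdot - k) : (c_k) \in \ell^2(\mathbb{Z}^d) \right\}$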
