no code implementations • 27 Dec 2023 • Juncai He, Tong Mao, Jinchao Xu
Additionally, by exploring the power of deep ReLU$^k$ networks to represent shallow networks, we reveal that deep ReLU$^k$ networks can approximate functions from a range of variation spaces, extending beyond those generated solely by the ReLU$^k$ activation function.
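The basic unit in these results is the ReLU$^k$ activation $\sigma_k(t) = \max(0, t)^k$. As a hedged illustration only (not the paper's construction), here is a minimal numpy sketch of a shallow ReLU$^k$ network of the form $f(x) = \sum_j a_j \sigma_k(w_j \cdot x + b_j)$; all weights and sizes below are arbitrary placeholders.

```python
import numpy as np

def relu_k(x, k):
    """ReLU^k activation: max(0, x) raised to the k-th power."""
    return np.maximum(x, 0.0) ** k

def shallow_relu_k_net(x, W, b, a, k):
    """One-hidden-layer ReLU^k network: f(x) = sum_j a_j * ReLU^k(w_j . x + b_j).
    x: (d,) input; W: (m, d) inner weights; b: (m,) biases; a: (m,) outer weights.
    """
    return a @ relu_k(W @ x + b, k)

# Toy usage with placeholder sizes: m = 8 neurons, d = 3 inputs, k = 2 (ReLU^2).
rng = np.random.default_rng(0)
m, d, k = 8, 3, 2
W, b, a = rng.normal(size=(m, d)), rng.normal(size=m), rng.normal(size=m)
print(shallow_relu_k_net(rng.normal(size=d), W, b, a, k))
```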
no code implementations • 7 Aug 2023 • Hrushikesh Mhaskar, Tong Mao
In this paper, we present a sharper version of the results in the paper "Dimension independent bounds for general shallow networks", Neural Networks 123 (2020), 142-152.
no code implementations • 24 Jul 2023 • Tong Mao, Ding-Xuan Zhou
We show that ReLU shallow neural networks with $m$ hidden neurons can uniformly approximate functions from the Hölder space $W_\infty^r([-1, 1]^d)$ with rates $O((\log m)^{\frac{1}{2}+d} m^{-\frac{r}{d}\cdot\frac{d+2}{d+4}})$ when $r < d/2 + 2$.
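For intuition only, the following hedged one-dimensional sketch (not the paper's construction, which treats general dimension $d$ and proves the rate above) fits the outer coefficients of an $m$-neuron shallow ReLU network to a Hölder-type target by least squares and reports the uniform error on a grid; all sizes and the target are illustrative choices.

```python
import numpy as np

# Hedged 1-D illustration: approximate a Hölder-smooth target with a shallow
# ReLU network, using random inner weights and a least-squares fit of the
# outer coefficients. Not the paper's proof construction.
rng = np.random.default_rng(1)
m = 200                                   # number of hidden ReLU neurons
x = np.linspace(-1.0, 1.0, 2000)          # dense grid on [-1, 1]
target = np.abs(x) ** 1.5                 # Hölder-type target, smoothness 1.5

w = rng.normal(size=m)                    # random inner weights
b = rng.uniform(-1.0, 1.0, size=m)        # random biases
features = np.maximum(np.outer(x, w) + b, 0.0)   # ReLU(w_j * x + b_j)
coef, *_ = np.linalg.lstsq(features, target, rcond=None)

print("sup-norm error on the grid:", np.max(np.abs(features @ coef - target)))
```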
no code implementations • 2 Mar 2023 • Katarina Doctor, Tong Mao, Hrushikesh Mhaskar
This involves creating a grid on the hypothetical spaces of data sets and algorithms, so as to identify a finite set of probability distributions from which the data sets are sampled, together with a finite set of algorithms.
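Purely as an illustrative sketch of the gridding idea, with placeholder names that are not the authors' taxonomy, the grid can be thought of as the Cartesian product of a finite set of distributions and a finite set of algorithms:

```python
from itertools import product

# Hypothetical, illustrative labels for the two discretized spaces;
# these names are placeholders, not the paper's actual choices.
distributions = ["gaussian_mixture", "uniform_cube", "sphere_noise"]
algorithms = ["kernel_reg", "nearest_neighbor", "shallow_net"]

# Each grid cell pairs one sampling distribution with one algorithm.
grid = list(product(distributions, algorithms))
for dist, algo in grid:
    print(f"cell: data ~ {dist}, learner = {algo}")
```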
no code implementations • 2 Jul 2021 • Tong Mao, Zhongjie Shi, Ding-Xuan Zhou
We consider a family of deep neural networks consisting of two groups of convolutional layers, a downsampling operator, and a fully connected layer.
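As a hedged structural sketch of such a family (kernel sizes, channel counts, the ReLU activations, and max pooling as the downsampling operator below are illustrative assumptions, not the paper's specification), here is a minimal PyTorch version with two convolutional groups, a downsampling operator, and one fully connected layer:

```python
import torch
import torch.nn as nn

class TwoGroupCNN(nn.Module):
    """Illustrative sketch: two groups of 1-D convolutional layers separated
    by a downsampling operator, followed by a fully connected output layer.
    All hyperparameters are placeholder assumptions."""

    def __init__(self, channels=1, kernel=3, down=2):
        super().__init__()
        self.group1 = nn.Sequential(
            nn.Conv1d(channels, channels, kernel, padding=kernel - 1),
            nn.ReLU(),
            nn.Conv1d(channels, channels, kernel, padding=kernel - 1),
            nn.ReLU(),
        )
        self.down = nn.MaxPool1d(down)     # stand-in downsampling operator
        self.group2 = nn.Sequential(
            nn.Conv1d(channels, channels, kernel, padding=kernel - 1),
            nn.ReLU(),
        )
        self.fc = nn.LazyLinear(1)         # fully connected output layer

    def forward(self, x):
        x = self.group2(self.down(self.group1(x)))
        return self.fc(x.flatten(1))

net = TwoGroupCNN()
print(net(torch.randn(4, 1, 16)).shape)    # -> torch.Size([4, 1])
```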