no code implementations • 27 Dec 2023 • Juncai He, Tong Mao, Jinchao Xu
Additionally, through an exploration of the representation power of deep ReLU$^k$ networks for shallow networks, we reveal that deep ReLU$^k$ networks can approximate functions from a range of variation spaces, extending beyond those generated solely by the ReLU$^k$ activation function.
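For reference, ReLU$^k$ is simply the $k$-th power of the ReLU, $\sigma_k(x) = \max(0,x)^k$; a minimal PyTorch sketch (the module name and default exponent are illustrative, not from the paper):

```python
import torch
import torch.nn as nn

class ReLUk(nn.Module):
    """ReLU^k activation: sigma_k(x) = max(0, x)^k."""
    def __init__(self, k: int = 2):
        super().__init__()
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(x) ** self.k
```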
no code implementations • 21 Dec 2023 • Juncai He, Jinchao Xu
In this study, we establish that deep neural networks employing ReLU and ReLU$^2$ activation functions can effectively represent Lagrange finite element functions of any order on various simplicial meshes in arbitrary dimensions.
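As a concrete instance of such a representation, the one-dimensional first-order Lagrange (hat) basis function is exactly a combination of three ReLUs; a minimal sketch (the node layout x0 < x1 < x2 is illustrative):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def hat_basis(x, x0, x1, x2):
    """First-order Lagrange basis on nodes x0 < x1 < x2: rises from 0 at x0
    to 1 at x1, falls back to 0 at x2, written exactly with three ReLUs."""
    h0, h1 = x1 - x0, x2 - x1
    return relu(x - x0) / h0 - (1.0 / h0 + 1.0 / h1) * relu(x - x1) + relu(x - x2) / h1
```

Roughly speaking, higher-order Lagrange elements involve products of such linear pieces, and multiplication is exactly what ReLU$^2$ supplies, since $x^2 = \mathrm{ReLU}^2(x) + \mathrm{ReLU}^2(-x)$ and $xy = \frac{1}{2}\big((x+y)^2 - x^2 - y^2\big)$.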
no code implementations • 16 Oct 2023 • Juncai He, Xinliang Liu, Jinchao Xu
In this work, we propose a concise neural operator architecture for operator learning.
1 code implementation • 21 Sep 2023 • Huang Huang, Fei Yu, Jianqing Zhu, Xuening Sun, Hao Cheng, Dingjie Song, Zhihong Chen, Abdulmohsen Alharthi, Bang An, Juncai He, Ziche Liu, Zhiyi Zhang, Junying Chen, Jianquan Li, Benyou Wang, Lian Zhang, Ruoyu Sun, Xiang Wan, Haizhou Li, Jinchao Xu
This paper is devoted to the development of a localized Large Language Model (LLM) specifically for Arabic, a language imbued with unique cultural characteristics inadequately addressed by current mainstream models.
no code implementations • 17 May 2023 • Jongho Park, Jinchao Xu
We propose a new training algorithm, named DualFL (Dualized Federated Learning), for solving distributed optimization problems in federated learning.
no code implementations • 2 Feb 2023 • Jianqing Zhu, Juncai He, Lian Zhang, Jinchao Xu
By investigating iterative methods for a constrained linear model, we propose a new class of fully connected V-cycle MgNets for long-term time series forecasting, one of the most difficult forecasting tasks.
no code implementations • 9 Aug 2022 • Qingguo Hong, Jonathan W. Siegel, Qinyang Tan, Jinchao Xu
Our empirical studies also show that neural networks with the Hat activation function train significantly faster under stochastic gradient descent and Adam.
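For concreteness, one standard hat function (the paper's exact normalization may differ) is the piecewise-linear bump supported on $[0,2]$ with peak $1$ at $x=1$, itself a combination of three ReLUs:

```python
import torch

def hat(x: torch.Tensor) -> torch.Tensor:
    """Hat activation: piecewise linear, supported on [0, 2], peak 1 at x = 1.
    hat(x) = ReLU(x) - 2*ReLU(x - 1) + ReLU(x - 2)."""
    return torch.relu(x) - 2.0 * torch.relu(x - 1.0) + torch.relu(x - 2.0)
```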
no code implementations • 14 Dec 2021 • Juncai He, Jinchao Xu, Lian Zhang, Jianqing Zhu
We propose a constrained linear data-feature-mapping model as an interpretable mathematical model for image classification using a convolutional neural network (CNN).
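In symbols (our paraphrase; the notation is hedged), the data $f$ and the features $u$ are linked by the constrained linear system
$$A \ast u = f, \qquad u \ge 0,$$
and iterative methods for this system, of the form $u \leftarrow u + \sigma\big(B \ast \sigma(f - A \ast u)\big)$, recover ResNet-style CNN blocks.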
no code implementations • 1 Sep 2021 • Juncai He, Lin Li, Jinchao Xu
This paper focuses on establishing $L^2$ approximation properties for deep ReLU convolutional neural networks (CNNs) in two-dimensional space.
no code implementations • 28 Jun 2021 • Jonathan W. Siegel, Jinchao Xu
In this article, we provide a solution to this problem by proving sharp lower bounds on the approximation rates for shallow neural networks, which are obtained by lower bounding the $L^2$-metric entropy of the convex hull of the neural network basis functions.
no code implementations • 28 Jun 2021 • Jonathan W. Siegel, Jinchao Xu
We study the variation space corresponding to a dictionary of functions in $L^2(\Omega)$ for a bounded domain $\Omega\subset \mathbb{R}^d$.
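For readers unfamiliar with this object, the standard definition (which we believe matches the paper's setting) is
$$\|f\|_{\mathcal{K}_1(\mathbb{D})} = \inf\Big\{ \sum_j |a_j| \;:\; f = \sum_j a_j d_j,\ d_j \in \mathbb{D} \Big\},$$
with the variation space $\mathcal{K}_1(\mathbb{D})$ consisting of all $f \in L^2(\Omega)$ with finite norm; taking $\mathbb{D}$ to be a dictionary of single neurons recovers the spaces relevant to shallow networks.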
no code implementations • 10 May 2021 • Juncai He, Lin Li, Jinchao Xu
We study ReLU deep neural networks (DNNs) by investigating their connections with the hierarchical basis method in finite element methods.
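One concrete manifestation of this connection (a standard construction, not code taken from the paper): composing the hat function with itself $k$ times produces the sawtooth function with $2^{k-1}$ teeth, i.e. the generator of the level-$k$ hierarchical basis, using only $O(k)$ ReLU units in depth rather than $O(2^k)$ units in width:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def hat(x):
    """Hat function on [0, 1]: peak 1 at x = 1/2, built from three ReLUs."""
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5) + 2.0 * relu(x - 1.0)

def sawtooth(x, k):
    """k-fold composition hat(hat(...hat(x))): 2^(k-1) teeth on [0, 1]."""
    for _ in range(k):
        x = hat(x)
    return x
```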
no code implementations • 29 Jan 2021 • Jonathan W. Siegel, Jinchao Xu
This result gives sharp lower bounds on the $L^2$-approximation rates, metric entropy, and $n$-widths for variation spaces corresponding to neural networks with a range of important activation functions, including ReLU$^k$ activation functions and sigmoidal activation functions with bounded variation.
no code implementations • 14 Dec 2020 • Jonathan W. Siegel, Jinchao Xu
We show that as the smoothness index $s$ of $f$ increases, shallow neural networks with ReLU$^k$ activation function obtain an improved approximation rate up to a best possible rate of $O(n^{-(k+1)}\log(n))$ in $L^2$, independent of the dimension $d$.
Numerical Analysis (41A25)
1 code implementation • 21 Aug 2020 • Jonathan W. Siegel, Jianhong Chen, Pengchuan Zhang, Jinchao Xu
The adaptive weighting we introduce corresponds to a novel regularizer based on the logarithm of the absolute value of the weights.
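A hedged sketch of such a regularizer (the smoothing constant eps and the scaling are our assumptions; the paper's exact form may differ):

```python
import torch

def log_weight_penalty(params, lam: float = 1e-4, eps: float = 1e-8):
    """Penalty proportional to sum_i log(|w_i| + eps); eps avoids log(0)
    (our assumption, not necessarily the paper's formulation)."""
    return lam * sum(torch.log(p.abs() + eps).sum() for p in params)
```

The $\log(|w| + \varepsilon)$ penalty is the classical surrogate behind reweighted-$\ell_1$ methods, which is why it promotes sparsity more aggressively than a plain $\ell_1$ term.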
1 code implementation • 23 Nov 2019 • Juncai He, Yuyan Chen, Lian Zhang, Jinchao Xu
In this paper, we propose a constrained linear data-feature mapping model as an interpretable mathematical model for image classification using convolutional neural networks (CNNs) such as ResNet.
no code implementations • 3 Oct 2019 • Jianhong Chen, Huang Huang, Wenrui Hao, Jinchao Xu
Pulse feeling, representing the tactile arterial palpation of the heartbeat, has been widely used in traditional Chinese medicine (TCM) to diagnose various diseases.
no code implementations • ICLR 2019 • Xiaodong Jia, Liang Zhao, Lian Zhang, Juncai He, Jinchao Xu
We propose a new approach, known as the iterative regularized dual averaging (iRDA) method, to improve the efficiency of convolutional neural networks (CNNs) by significantly reducing the redundancy of the model without reducing its accuracy.
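For orientation, the classical $\ell_1$-regularized dual averaging step that iRDA builds on (this is Xiao's RDA update, not the full iRDA algorithm) soft-thresholds the running average of the gradients in closed form:

```python
import numpy as np

def rda_step(g_bar, t, lam, gamma):
    """Classical l1-RDA update: w_{t+1,i} = -(sqrt(t)/gamma) * sign(g_bar_i)
    * max(|g_bar_i| - lam, 0), where g_bar is the average gradient up to step t."""
    return -(np.sqrt(t) / gamma) * np.sign(g_bar) * np.maximum(np.abs(g_bar) - lam, 0.0)
```

Coordinates whose average gradient stays below the threshold $\lambda$ are set exactly to zero, which is what produces sparse weights.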
no code implementations • 4 Apr 2019 • Jonathan W. Siegel, Jinchao Xu
Our first result concerns the rate of approximation of a two layer neural network with a polynomially-decaying non-sigmoidal activation function.
no code implementations • 29 Jan 2019 • Juncai He, Jinchao Xu
We develop a unified model, known as MgNet, that simultaneously recovers some convolutional neural networks (CNNs) for image classification and multigrid (MG) methods for solving discretized partial differential equations (PDEs).
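The core smoothing step of MgNet on each grid level takes the residual-correction form $u \leftarrow u + \sigma\big(B \ast \sigma(f - A \ast u)\big)$; a minimal PyTorch sketch (the module structure and hyperparameters are our choices):

```python
import torch
import torch.nn as nn

class MgNetSmoother(nn.Module):
    """One MgNet smoothing step: u <- u + relu(B * relu(f - A * u)),
    where A maps features to data and B maps residuals back to features."""
    def __init__(self, channels: int):
        super().__init__()
        self.A = nn.Conv2d(channels, channels, 3, padding=1)
        self.B = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, u: torch.Tensor, f: torch.Tensor) -> torch.Tensor:
        return u + torch.relu(self.B(torch.relu(f - self.A(u))))
```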
no code implementations • 11 Jul 2018 • Juncai He, Xiaodong Jia, Jinchao Xu, Lian Zhang, Liang Zhao
Compressed Sensing using $\ell_1$ regularization is among the most powerful and popular sparsification techniques in many applications, but why has it not been used to obtain sparse deep learning models such as convolutional neural networks (CNNs)?
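Whatever the answer, the workhorse of $\ell_1$ sparsification is the soft-thresholding (proximal) operator; a minimal sketch (the threshold lam is illustrative):

```python
import torch

def soft_threshold(w: torch.Tensor, lam: float) -> torch.Tensor:
    """Proximal operator of lam * ||w||_1: shrinks every weight toward zero
    and zeroes out those with |w| <= lam."""
    return torch.sign(w) * torch.clamp(w.abs() - lam, min=0.0)
```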