no code implementations • 5 Feb 2024 • Juncai He, Liangchen Liu, Yen-Hsi Richard Tsai
This paper investigates the impact of multiscale data on machine learning algorithms, particularly in the context of deep learning.
no code implementations • 31 Jan 2024 • Yahong Yang, Juncai He
Constructing the architecture of a neural network is a challenging pursuit for the machine learning community, and the dilemma of whether to go deeper or wider remains a persistent question.
no code implementations • 27 Dec 2023 • Juncai He, Tong Mao, Jinchao Xu
Additionally, through an exploration of the representation power of deep ReLU$^k$ networks for shallow networks, we reveal that deep ReLU$^k$ networks can approximate functions from a range of variation spaces, extending beyond those generated solely by the ReLU$^k$ activation function.
no code implementations • 21 Dec 2023 • Juncai He, Jinchao Xu
In this study, we establish that deep neural networks employing ReLU and ReLU$^2$ activation functions can effectively represent Lagrange finite element functions of any order on various simplicial meshes in arbitrary dimensions.
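As a minimal illustration of why ReLU$^2$ activations matter for such representation results (a standard identity, not the paper's construction): the quadratic monomial, the building block of second-order Lagrange elements, is reproduced exactly by two ReLU$^2$ units.

```python
def relu(x):
    return max(x, 0.0)

# Exact identity: x^2 = relu(x)^2 + relu(-x)^2 for every real x,
# so a single layer of ReLU^2 units recovers the quadratic monomial
# without any approximation error.
def sq_via_relu2(x):
    return relu(x) ** 2 + relu(-x) ** 2

for x in [-2.0, -0.5, 0.0, 1.5]:
    assert abs(sq_via_relu2(x) - x * x) < 1e-12
```

Piecewise-linear (first-order) element functions are handled by plain ReLU combinations in the same spirit.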
no code implementations • 16 Oct 2023 • Juncai He, Xinliang Liu, Jinchao Xu
In this work, we propose a concise neural operator architecture for operator learning.
1 code implementation • 21 Sep 2023 • Huang Huang, Fei Yu, Jianqing Zhu, Xuening Sun, Hao Cheng, Dingjie Song, Zhihong Chen, Abdulmohsen Alharthi, Bang An, Juncai He, Ziche Liu, Zhiyi Zhang, Junying Chen, Jianquan Li, Benyou Wang, Lian Zhang, Ruoyu Sun, Xiang Wan, Haizhou Li, Jinchao Xu
This paper is devoted to the development of a localized Large Language Model (LLM) specifically for Arabic, a language imbued with unique cultural characteristics inadequately addressed by current mainstream models.
no code implementations • 10 Aug 2023 • Juncai He
This paper is devoted to studying the optimal expressive power of ReLU deep neural networks (DNNs) and its application in approximation via the Kolmogorov Superposition Theorem.
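For context, the Kolmogorov Superposition Theorem states that every continuous function on the $n$-dimensional cube $[0,1]^n$ admits an exact representation by sums and compositions of univariate continuous functions:

```latex
f(x_1, \dots, x_n) = \sum_{q=0}^{2n} \Phi_q\!\left( \sum_{p=1}^{n} \phi_{q,p}(x_p) \right),
```

where the inner functions $\phi_{q,p}$ are continuous and independent of $f$, while the outer functions $\Phi_q$ depend on $f$.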
no code implementations • 5 Jul 2023 • Liangchen Liu, Juncai He, Richard Tsai
We assume that the data manifold is smooth and is embedded in a Euclidean space, and our objective is to reveal the impact of the data manifold's extrinsic geometry on the regression.
no code implementations • 2 Feb 2023 • Jianqing Zhu, Juncai He, Lian Zhang, Jinchao Xu
By investigating iterative methods for a constrained linear model, we propose a new class of fully connected V-cycle MgNet for long-term time series forecasting, which is one of the most difficult tasks in forecasting.
no code implementations • 2 Feb 2023 • Jianqing Zhu, Juncai He, Qiumei Huang
This study applies MgNet, a multigrid-based convolutional neural network architecture, to operator learning for solving numerical partial differential equations (PDEs).
no code implementations • 1 Mar 2022 • Juncai He, Richard Tsai, Rachel Ward
In this setting, a typical neural network defines a function that takes a finite number of vectors in the embedding space as input.
no code implementations • 14 Dec 2021 • Juncai He, Jinchao Xu, Lian Zhang, Jianqing Zhu
We propose a constrained linear data-feature-mapping model as an interpretable mathematical model for image classification using a convolutional neural network (CNN).
no code implementations • 1 Sep 2021 • Juncai He, Lin Li, Jinchao Xu
This paper focuses on establishing $L^2$ approximation properties for deep ReLU convolutional neural networks (CNNs) in two-dimensional space.
no code implementations • 1 Sep 2021 • Qipin Chen, Wenrui Hao, Juncai He
To address this challenge, we study neural networks from a nonlinear computation point of view and propose a novel weight initialization strategy that is based on the linear product structure (LPS) of neural networks.
no code implementations • 10 May 2021 • Juncai He, Lin Li, Jinchao Xu
We study ReLU deep neural networks (DNNs) by investigating their connections with the hierarchical basis method in finite element methods.
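A one-line instance of this connection (standard, not specific to the paper's construction): the 1D hierarchical "hat" basis function is itself a one-hidden-layer ReLU network, and rescaled, shifted copies of it generate the whole hierarchical basis.

```python
def relu(x):
    return max(x, 0.0)

# The piecewise-linear hat function on [0, 1] (peak 1 at x = 1/2),
# written as a three-neuron ReLU network:
#   hat(x) = 2*relu(x) - 4*relu(x - 1/2) + 2*relu(x - 1)
def hat(x):
    return 2 * relu(x) - 4 * relu(x - 0.5) + 2 * relu(x - 1.0)

# Hierarchical basis functions at level l are hat(2**l * x - k),
# so each is again a shallow ReLU network.
def hat_level(x, l, k):
    return hat(2 ** l * x - k)
```

Verifying a few nodal values: hat(0) = 0, hat(0.5) = 1, hat(1) = 0, and hat vanishes outside [0, 1].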
1 code implementation • 23 Nov 2019 • Juncai He, Yuyan Chen, Lian Zhang, Jinchao Xu
In this paper, we propose a constrained linear data-feature mapping model as an interpretable mathematical model for image classification using convolutional neural networks (CNNs) such as ResNet.
no code implementations • ICLR 2019 • Xiaodong Jia, Liang Zhao, Lian Zhang, Juncai He, Jinchao Xu
We propose a new approach, known as iterative regularized dual averaging (iRDA), to improve the efficiency of convolutional neural networks (CNNs) by significantly reducing the redundancy of the model without reducing its accuracy.
no code implementations • 29 Jan 2019 • Juncai He, Jinchao Xu
We develop a unified model, known as MgNet, that simultaneously recovers some convolutional neural networks (CNNs) for image classification and multigrid (MG) methods for solving discretized partial differential equations (PDEs).
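As a rough 1D sketch of the grid-transfer structure that such a model shares with classical multigrid (the averaging and piecewise-constant kernels below are illustrative choices, not MgNet's learned convolutions):

```python
# Restriction maps a fine grid to a coarse one, here via a
# stride-2 averaging "convolution"; prolongation interpolates
# back to the fine grid. In CNN terms these correspond to a
# strided convolution and an upsampling layer, respectively.
def restrict(u):
    # coarse[i] = (u[2i] + u[2i+1]) / 2
    return [(u[2 * i] + u[2 * i + 1]) / 2 for i in range(len(u) // 2)]

def prolong(uc):
    # piecewise-constant interpolation: repeat each coarse value
    return [v for v in uc for _ in range(2)]

fine = [1.0, 3.0, 5.0, 7.0]
coarse = restrict(fine)   # [2.0, 6.0]
back = prolong(coarse)    # [2.0, 2.0, 6.0, 6.0]
```

A V-cycle alternates smoothing steps on each level with these transfers; in MgNet the smoothers and transfers become learnable convolutional layers.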
no code implementations • 11 Jul 2018 • Juncai He, Xiaodong Jia, Jinchao Xu, Lian Zhang, Liang Zhao
Compressed sensing using $\ell_1$ regularization is among the most powerful and popular sparsification techniques in many applications, but why has it not been used to obtain sparse deep learning models such as convolutional neural networks (CNNs)?
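For reference, the mechanism by which $\ell_1$ regularization produces sparsity is the soft-thresholding proximal step used in RDA-type methods (the weights and threshold below are illustrative values only):

```python
# Proximal operator of lam * |w|: shrinks each weight toward zero
# and sets small weights exactly to zero, which is what makes
# l1-regularized training yield genuinely sparse models.
def soft_threshold(w, lam):
    if w > lam:
        return w - lam
    if w < -lam:
        return w + lam
    return 0.0

weights = [1.5, -0.25, 2.0, -3.5]
sparse = [soft_threshold(w, 0.5) for w in weights]
# the entry -0.25 is mapped exactly to 0.0; the rest shrink by 0.5
```

A plain $\ell_2$ penalty, by contrast, only scales weights down and never zeroes them out, which is why it does not sparsify.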