no code implementations • 1 Feb 2023 • Yuesheng Xu
Inspired by the human education process, in which learning is arranged in grades, we propose a multi-grade learning model: we successively solve a number of small optimization problems, organized in grades, learning a shallow neural network at each grade.
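A minimal sketch of the grade-by-grade idea under simplifying assumptions: each "grade" here is a shallow random-feature ReLU network fitted by least squares to the residual left by the previous grades. The target function, the random-feature construction, and all sizes are hypothetical choices for illustration, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D target to learn (illustration only).
x = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
y = np.sin(3.0 * np.pi * x).ravel()

def fit_grade(x, residual, width=30):
    """One 'grade': a shallow ReLU network whose hidden weights are drawn
    at random and whose output weights solve a small least-squares problem
    on the current residual (an assumption made to keep the sketch short)."""
    W = rng.normal(size=(1, width))
    b = rng.normal(size=width)
    H = np.maximum(x @ W + b, 0.0)              # shallow ReLU features
    c, *_ = np.linalg.lstsq(H, residual, rcond=None)
    return lambda z: np.maximum(z @ W + b, 0.0) @ c

# Grades are learned successively, each on the residual of earlier grades.
grades, residual = [], y.copy()
for _ in range(4):
    g = fit_grade(x, residual)
    grades.append(g)
    residual = residual - g(x)

prediction = sum(g(x) for g in grades)
print("final residual norm:", np.linalg.norm(y - prediction))
```

Each optimization problem stays small (one shallow network at a time), while the sum over grades builds up an increasingly accurate approximation.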
no code implementations • 27 Jul 2022 • Yuesheng Xu, Taishan Zeng
Noting that DNNs have an intrinsic multi-scale structure favorable for the adaptive representation of functions, we employ a penalty with multiple parameters to develop DNNs with a multi-scale sparse regularization (SDNN) that effectively represent functions having certain singularities.
no code implementations • 13 May 2022 • Wentao Huang, Yuesheng Xu, Haizhang Zhang
In this work, we study the convergence of deep neural networks, as the depth tends to infinity, for two other important activation functions: the leaky ReLU and the sigmoid.
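A numerical illustration of the phenomenon being studied, under a simple sufficient condition assumed here for the sketch (identical layers with a contractive weight matrix, which is not the paper's general setting): composing many leaky-ReLU or sigmoid layers produces outputs that stabilize as the depth grows.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 4
W = rng.normal(size=(n, n))
W *= 0.5 / np.linalg.norm(W, 2)      # force spectral norm 0.5 (assumption)
b = rng.normal(size=n)

def deep_output(act, x, depth):
    """Output of a depth-`depth` network made of identical layers z -> act(Wz + b)."""
    z = x
    for _ in range(depth):
        z = act(W @ z + b)
    return z

leaky = lambda t: np.where(t > 0.0, t, 0.01 * t)        # leaky ReLU
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

x0 = rng.normal(size=n)
for act in (leaky, sigmoid):
    d50 = deep_output(act, x0, 50)
    d100 = deep_output(act, x0, 100)
    print(np.linalg.norm(d50 - d100))   # tiny: the outputs have converged
```

Both activations are 1-Lipschitz, so with a contractive weight matrix each layer map is a contraction and the deep limit exists by the Banach fixed-point theorem; the papers' results characterize convergence under much weaker, layer-varying conditions.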
no code implementations • 28 Sep 2021 • Yuesheng Xu, Haizhang Zhang
Based on these conditions, we present sufficient conditions for the piecewise convergence of general deep ReLU networks with increasing widths, as well as for the pointwise convergence of deep ReLU convolutional neural networks.
no code implementations • 27 Jul 2021 • Yuesheng Xu, Haizhang Zhang
We explore the convergence of deep neural networks with the popular ReLU activation function as the depth of the networks tends to infinity.
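A minimal sketch of the question, assuming for illustration a network built from identical ReLU layers with a contractive weight matrix (a hypothetical special case, not the paper's conditions): as depth increases, the output approaches a limit.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
W = rng.normal(size=(n, n))
W *= 0.5 / np.linalg.norm(W, 2)      # spectral norm 0.5: layer is a contraction
b = rng.normal(size=n)

def network(x, depth):
    """Deep ReLU network realized as `depth` compositions of z -> ReLU(Wz + b)."""
    z = x
    for _ in range(depth):
        z = np.maximum(W @ z + b, 0.0)
    return z

x0 = rng.normal(size=n)
deep, deeper = network(x0, 50), network(x0, 100)
print(np.linalg.norm(deep - deeper))   # tiny: depth-50 and depth-100 outputs agree
```

Since ReLU is 1-Lipschitz, the layer map here has Lipschitz constant 0.5, so successive outputs form a Cauchy sequence; characterizing when such limits exist for general (non-identical, non-contractive) layers is the subject of the paper.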
no code implementations • 13 Jun 2019 • Ida Häggström, Yizun Lin, Si Li, Andrzej Krol, Yuesheng Xu, C. Ross Schmidtlein
For the cardiac/lung phantom, an additional cardiac gated 2D-OSEM set was reconstructed.