no code implementations • 12 Dec 2022 • Changcun Huang
This paper first constructs a typical solution of ResNets for multi-category classification via the principle of gate-network control and deep-layer classification; from this solution, a general interpretation of the ResNet architecture is given and its performance mechanism is explained.
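As an illustrative sketch only (not the paper's construction), a residual block with ReLU units can be read as a gated identity map: units whose pre-activation is negative act as closed gates and pass the input through unchanged in those coordinates. A minimal NumPy version:

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, W, b):
    # Identity shortcut plus a ReLU-gated residual branch: where the
    # pre-activation W @ x + b is negative, the gate is closed and the
    # block reduces to the identity in that coordinate.
    return x + relu(W @ x + b)

# Toy example; W and b are arbitrary illustrative values.
x = np.array([1.0, -2.0])
W = np.eye(2)
b = np.zeros(2)
y = residual_block(x, W, b)  # second coordinate passes through as -2.0
```

Here the second unit's gate is closed (pre-activation -2.0 < 0), so that coordinate is carried by the shortcut alone.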
no code implementations • 16 Aug 2022 • Changcun Huang
For deep-layer networks, we present a general result called the sparse-matrix principle, which describes some basic behavior of deep layers and explains the sparse-activation mode observed in engineering applications and associated with brain science; this principle also manifests an advantage of deep layers over shallower ones.
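A minimal numerical sketch of the sparse-activation mode (my illustration, not the paper's sparse-matrix principle itself): in a random deep ReLU stack with zero-mean weights, a sizable fraction of units is inactive at every layer:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

x = rng.normal(size=100)
for _ in range(3):
    # Zero-mean random weights keep pre-activations roughly centered,
    # so ReLU silences a large fraction of units at each layer.
    W = rng.normal(size=(100, 100)) / np.sqrt(100)
    x = relu(W @ x)

sparsity = np.mean(x == 0.0)  # fraction of inactive ("closed") units
```

Roughly half of the units are expected to be zeroed per layer, which is the activation pattern the sparse-matrix principle addresses.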
no code implementations • 15 Aug 2022 • Changcun Huang
For the encoder part, whose main use is dimensionality reduction, we investigate two of its fundamental properties: bijective maps and data disentangling.
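To make the bijective-map property concrete, here is a hedged linear sketch (the paper treats general encoders): a full-rank linear encoder maps a low-dimensional data subspace bijectively onto the code space, so any point of that subspace is recoverable exactly from its code:

```python
import numpy as np

rng = np.random.default_rng(1)

k, n = 2, 5                     # code dimension, ambient data dimension
B = rng.normal(size=(n, k))     # basis of the k-dimensional data subspace
W = rng.normal(size=(k, n))     # linear encoder (generically full rank)

x = B @ rng.normal(size=k)      # a data point lying in the subspace
code = W @ x                    # dimensionality reduction: n -> k

# Since W @ B is (generically) invertible, the encoder restricted to the
# subspace is a bijection, and the data point can be recovered exactly.
x_rec = B @ np.linalg.solve(W @ B, code)
```

The point of the sketch is only that dimensionality reduction can still be injective on the data manifold, which is the precondition for the bijective-map property.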
no code implementations • 24 Jan 2022 • Changcun Huang
This paper aims to interpret the mechanism of feedforward ReLU networks by exploring their solutions for piecewise linear functions, deduced from basic rules.
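A standard concrete instance of such a solution (illustrative, not taken from the paper): the piecewise linear function |x| is exactly realized by a one-hidden-layer ReLU network with two units, since |x| = ReLU(x) + ReLU(-x):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def abs_via_relu(x):
    # A one-hidden-layer ReLU network with two hidden units
    # (weights +1 and -1, output weights both +1) that exactly
    # computes the piecewise linear function |x|.
    return relu(x) + relu(-x)

xs = np.linspace(-2.0, 2.0, 9)
```

Each linear piece of the target function is handled by one hidden unit, which is the kind of solution structure the paper's deductions describe.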
no code implementations • 16 Jun 2019 • Changcun Huang
Finally, we generalize some of the conclusions of ReLU deep learning to the case of sigmoid-unit deep learning.