A layer-stress learning framework universally augments deep neural network tasks

14 Nov 2021  ·  Shihao Shao, Yong Liu, Qinghua Cui

Deep neural networks (DNNs) such as Multi-Layer Perceptrons (MLPs) and Convolutional Neural Networks (CNNs) are among the most established deep learning architectures. Because the number of hidden layers strongly affects both architecture and performance, choosing it is important yet remains a serious challenge. More importantly, most current architectures make their final decision only from the last layer of the feature extractor, which limits further performance gains. Here we present a layer-stress deep learning framework (x-NN) that makes an automatic, informed depth decision over shallow and deep feature maps: the network is first built with a sufficient number of layers, and a Multi-Head Attention Block then trades off among them. Through this attention allocation, x-NN exploits features from layers of various depths and uses them in the final decision. x-NN showed outstanding prediction ability in the Alzheimer's Disease Classification Technique Challenge at PRCV 2021, where it won first place, outperforming all other models. Its performance was further verified on an additional AD neuroimaging dataset and on other AI tasks.
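
No code is released for this paper, so the following is only a minimal sketch of the idea as the abstract describes it, assuming PyTorch: build a deliberately deep stack of layers, keep the feature map from every depth, and let a multi-head attention block allocate weight across depths before the final decision. All names and sizes here (LayerStressNet, the hidden width, the mean-pooling step) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class LayerStressNet(nn.Module):
    """Hypothetical sketch of the x-NN idea: collect feature maps from
    several depths and let multi-head attention trade them off."""

    def __init__(self, in_dim=32, hidden=64, depth=6, num_heads=4, num_classes=2):
        super().__init__()
        # A deliberately deep stack of hidden layers ("enough" layers).
        self.layers = nn.ModuleList(
            [nn.Sequential(nn.Linear(in_dim if i == 0 else hidden, hidden), nn.ReLU())
             for i in range(depth)]
        )
        # Multi-Head Attention Block that weighs shallow vs. deep features.
        self.attn = nn.MultiheadAttention(hidden, num_heads, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):
        feats, h = [], x
        for layer in self.layers:
            h = layer(h)
            feats.append(h)                  # keep every depth's feature map
        tokens = torch.stack(feats, dim=1)   # (batch, depth, hidden)
        attended, weights = self.attn(tokens, tokens, tokens)
        pooled = attended.mean(dim=1)        # aggregate across depths (an assumption)
        return self.head(pooled), weights    # weights show per-depth attention

model = LayerStressNet()
logits, depth_attn = model(torch.randn(8, 32))
print(logits.shape, depth_attn.shape)  # torch.Size([8, 2]) torch.Size([8, 6, 6])
```

The returned attention weights make the depth decision inspectable: each row shows how much each depth's feature map contributes, which is the "automatic depth decision" the abstract claims.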
