1 code implementation • 21 Sep 2022 • Qingbei Guo, Xiao-Jun Wu, Zhiquan Feng, Tianyang Xu, Cong Hu
To tackle this issue, we first introduce a new attention dimension, i.e., depth, in addition to existing attention dimensions such as channel, spatial, and branch, and present a novel selective depth attention network to symmetrically handle multi-scale objects in various vision tasks.
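The idea of attending over depth can be sketched roughly as follows. This is a minimal, hypothetical illustration (not the paper's actual module): it pools features taken from several network depths, computes attention weights over the depth axis with a squeeze-and-excitation-style bottleneck (here with toy random projections standing in for learned weights), and fuses the depths by a convex combination.

```python
import numpy as np

def selective_depth_attention(features, reduction=2, seed=0):
    """Toy sketch of attention over the depth dimension.

    features: array of shape (D, C) -- D depths, C channels
              (feature maps assumed already spatially pooled).
    Returns a depth-attended fusion of shape (C,).
    """
    d, c = features.shape
    # Squeeze: one global descriptor per depth
    s = features.mean(axis=1)                           # (D,)
    # Excitation: random projections stand in for learned weights
    rng = np.random.default_rng(seed)
    hidden = max(d // reduction, 1)
    w1 = rng.standard_normal((d, hidden))
    w2 = rng.standard_normal((hidden, d))
    z = np.maximum(s @ w1, 0.0) @ w2                    # (D,)
    # Softmax over depth -> non-negative weights summing to 1
    a = np.exp(z - z.max())
    a /= a.sum()
    # Fuse: convex combination of the per-depth features
    return (a[:, None] * features).sum(axis=0)          # (C,)
```

Because the weights form a softmax over depths, the fused feature is a convex combination of the per-depth features, letting shallow (fine-scale) and deep (coarse-scale) responses be traded off per input.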
1 code implementation • 3 Mar 2021 • Qingbei Guo, Xiao-Jun Wu, Josef Kittler, Zhiquan Feng
To address this computational complexity issue, we introduce a novel \emph{architecture parameterisation} based on a scaled sigmoid function, and propose a general \emph{Differentiable Neural Architecture Learning} (DNAL) method to optimize the neural architecture without the need to evaluate candidate neural networks.
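The scaled-sigmoid parameterisation can be illustrated with a small sketch (an assumption-based reading of the abstract, not the paper's exact formulation): each architecture component gets a continuous parameter gated by sigmoid(k * alpha). For small k the gate is smooth and differentiable, so it can be trained by gradient descent; as k is annealed upward the gate approaches a hard 0/1 selection, so the learned architecture becomes effectively discrete without enumerating candidate networks.

```python
import numpy as np

def scaled_sigmoid(alpha, k):
    """Gate value for an architecture parameter alpha at sharpness k.

    Small k -> soft, differentiable gate; large k -> near-binary selection.
    """
    return 1.0 / (1.0 + np.exp(-k * np.asarray(alpha, dtype=float)))

# Annealing sketch: the same parameters yield softer or harder selections
alphas = np.array([-0.5, 0.1, 0.5])
soft = scaled_sigmoid(alphas, k=1.0)    # smooth values used during training
hard = scaled_sigmoid(alphas, k=100.0)  # near 0/1 at the end of annealing
```

A component whose gate converges toward 0 is pruned from the final architecture, while gates near 1 keep their component; the sharpness schedule for k is a training hyperparameter in this sketch.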
1 code implementation • 29 Sep 2020 • Qingbei Guo, Xiao-Jun Wu, Josef Kittler, Zhiquan Feng
To tackle this issue, we propose a novel method of designing self-grouping convolutional neural networks, called SG-CNN, in which the filters of each convolutional layer group themselves based on the similarity of their importance vectors.
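The self-grouping step can be sketched as clustering filters by the similarity of their importance vectors. The following is a hypothetical illustration, not SG-CNN's actual procedure: plain k-means on length-normalized importance vectors (i.e., cosine similarity) stands in for the paper's grouping criterion.

```python
import numpy as np

def self_group_filters(importance, n_groups, n_iter=20, seed=0):
    """Group filters by cosine similarity of their importance vectors.

    importance: (F, K) array -- one importance vector per filter.
    Returns an integer array of F group labels in [0, n_groups).
    """
    # Normalize so that dot product equals cosine similarity
    x = importance / np.linalg.norm(importance, axis=1, keepdims=True)
    rng = np.random.default_rng(seed)
    centers = x[rng.choice(len(x), n_groups, replace=False)].copy()
    for _ in range(n_iter):
        # Assign each filter to the most similar group center
        labels = np.argmax(x @ centers.T, axis=1)
        # Recompute each center as the normalized mean of its members
        for g in range(n_groups):
            members = x[labels == g]
            if len(members):
                c = members.mean(axis=0)
                centers[g] = c / np.linalg.norm(c)
    return labels
```

Filters that end up in the same group can then share a grouped convolution, which is where the method's efficiency gain would come from.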