Channel Shuffle is an operation that helps information flow across feature channels in convolutional neural networks. It was introduced as part of the ShuffleNet architecture.
If we allow a group convolution to obtain input data from different groups, the input and output channels will be fully related. Specifically, for the feature map generated from the previous group layer, we can first divide the channels in each group into several subgroups, then feed each group in the next layer with different subgroups.
The above can be efficiently and elegantly implemented by a channel shuffle operation: suppose a convolutional layer with $g$ groups whose output has $g \times n$ channels; we first reshape the output channel dimension into $\left(g, n\right)$, transpose it, and then flatten it back as the input of the next layer. Channel shuffle is also differentiable, which means it can be embedded into network structures for end-to-end training.
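The reshape–transpose–flatten recipe maps directly onto a few tensor operations. Below is a minimal sketch in PyTorch (the framework choice, the function name `channel_shuffle`, and the NCHW layout are assumptions for illustration, not specified by this page):

```python
import torch

def channel_shuffle(x: torch.Tensor, groups: int) -> torch.Tensor:
    """Reshape (N, g*n, H, W) -> (N, g, n, H, W), swap the group and
    per-group channel axes, then flatten back to (N, g*n, H, W)."""
    n, c, h, w = x.shape
    assert c % groups == 0, "channel count must be divisible by groups"
    x = x.view(n, groups, c // groups, h, w)  # split channels into (g, n)
    x = x.transpose(1, 2).contiguous()        # swap the g and n axes
    return x.view(n, c, h, w)                 # flatten back for the next layer
```

For example, with `groups=2` and six channels ordered `[0, 1, 2, 3, 4, 5]` (groups `[0, 1, 2]` and `[3, 4, 5]`), the shuffled order is `[0, 3, 1, 4, 2, 5]`, so every group in the next layer receives channels from both input groups. Since `view` and `transpose` are differentiable, gradients flow through the shuffle during end-to-end training.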
Source: *ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices*
Usage by task, as a share of papers that use Channel Shuffle:

| Task | Papers | Share |
|---|---|---|
| Semantic Segmentation | 13 | 10.92% |
| Object Detection | 12 | 10.08% |
| Image Classification | 10 | 8.40% |
| Deep Learning | 5 | 4.20% |
| Real-Time Semantic Segmentation | 5 | 4.20% |
| General Classification | 5 | 4.20% |
| Model Compression | 4 | 3.36% |
| Decoder | 3 | 2.52% |
| Object | 3 | 2.52% |