Big-Little Modules are blocks for image models with two branches, each representing a block from a deeper model and its shallower counterpart. They were proposed as part of the BigLittle-Net architecture. The two branches are known as the Big-Branch (more layers and channels, operating at low resolution) and the Little-Branch (fewer layers and channels, operating at high resolution), and they are fused with a linear combination using unit weights, i.e. an element-wise sum.
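The structure above can be sketched in PyTorch. This is an illustrative sketch, not the paper's exact configuration: the layer counts, kernel sizes, and the channel-reduction factor `beta` are assumptions chosen for clarity.

```python
import torch
import torch.nn as nn


class BigLittleModule(nn.Module):
    """Sketch of a Big-Little Module: a deep Big-Branch at low resolution
    and a shallow Little-Branch at high resolution, fused by summation.
    Layer counts and the `beta` channel reduction are illustrative assumptions."""

    def __init__(self, in_channels, out_channels, beta=4):
        super().__init__()
        # Big-Branch: more layers and channels, run at half resolution.
        self.big = nn.Sequential(
            nn.AvgPool2d(2),  # downsample to low resolution
            nn.Conv2d(in_channels, out_channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_channels, out_channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
        )
        # Little-Branch: fewer layers, channels reduced by beta, full resolution.
        little_channels = max(out_channels // beta, 1)
        self.little = nn.Sequential(
            nn.Conv2d(in_channels, little_channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(little_channels, out_channels, 1),  # match channel count
        )

    def forward(self, x):
        # Linear combination with unit weights: a plain element-wise sum.
        return self.big(x) + self.little(x)


module = BigLittleModule(in_channels=16, out_channels=32)
out = module(torch.randn(1, 16, 8, 8))
```

Because the Big-Branch is downsampled before its convolutions and upsampled afterward, both branches produce feature maps of the same spatial size and channel count, so the unit-weight fusion is a simple sum.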
Source: Big-Little Net: An Efficient Multi-Scale Feature Representation for Visual and Speech Recognition
| Task | Papers | Share |
|---|---|---|
| Image Classification | 2 | 16.67% |
| Speech Synthesis | 1 | 8.33% |
| Text-To-Speech Synthesis | 1 | 8.33% |
| Fine-Grained Image Classification | 1 | 8.33% |
| Fine-Grained Visual Recognition | 1 | 8.33% |
| General Classification | 1 | 8.33% |
| Image Retrieval | 1 | 8.33% |
| Retrieval | 1 | 8.33% |
| Object | 1 | 8.33% |
| Component | Type |
|---|---|
|  | Convolutions |
|  | Convolutions |
|  | Feedforward Networks |
|  | Skip Connections |