Big-Little Net: An Efficient Multi-Scale Feature Representation for Visual and Speech Recognition

In this paper, we propose a novel Convolutional Neural Network (CNN) architecture for learning multi-scale feature representations with good tradeoffs between speed and accuracy. This is achieved by using a multi-branch network, which has different computational complexity at different branches.
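The abstract's two-branch idea can be made concrete with a short sketch. Below is a minimal PyTorch rendering of one Big-Little module as described above: a heavier "big" branch operating on a 2x-downsampled feature map and a lighter "little" branch at full resolution, merged by upsampling and addition. The layer counts, the width-reduction factor alpha, and the merge details are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BigLittleModule(nn.Module):
    """Sketch of a two-branch Big-Little block.

    The big branch runs a heavier transform at half resolution; the
    little branch runs a lighter transform at full resolution. Their
    outputs are merged by upsampling the big branch and adding.
    Widths and depths here are illustrative, not the paper's values.
    """

    def __init__(self, channels: int, alpha: int = 2):
        super().__init__()
        # Big branch: higher complexity, operates on a 2x-downsampled map.
        self.big = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        # Little branch: width reduced by alpha, operates at full resolution.
        little_ch = channels // alpha
        self.little = nn.Sequential(
            nn.Conv2d(channels, little_ch, 3, padding=1, bias=False),
            nn.BatchNorm2d(little_ch),
            nn.ReLU(inplace=True),
        )
        # 1x1 convolution restores the little branch to full width before merging.
        self.expand = nn.Sequential(
            nn.Conv2d(little_ch, channels, 1, bias=False),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Big branch sees a half-resolution copy of the input.
        big = self.big(F.avg_pool2d(x, kernel_size=2))
        # Upsample back to full resolution and merge by addition.
        big = F.interpolate(big, size=x.shape[-2:], mode="bilinear",
                            align_corners=False)
        little = self.expand(self.little(x))
        return F.relu(big + little)
```

For example, `BigLittleModule(64)` applied to a `(1, 64, 56, 56)` tensor returns a full-resolution `(1, 64, 56, 56)` output while the big branch's convolutions run on 28x28 maps, which is where the computational savings come from.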

PDF | Abstract (ICLR 2019)


Methods used in the Paper


METHOD | TYPE
Average Pooling | Pooling Operations
ResNeXt Block | Skip Connection Blocks
Grouped Convolution | Convolutions
ResNeXt | Convolutional Neural Networks
Batch Normalization | Normalization
Bottleneck Residual Block | Skip Connection Blocks
Global Average Pooling | Pooling Operations
Residual Block | Skip Connection Blocks
Linear Layer | Feedforward Networks
Dense Connections | Feedforward Networks
Kaiming Initialization | Initialization
Max Pooling | Pooling Operations
1x1 Convolution | Convolutions
Softmax | Output Functions
Random Horizontal Flip | Image Data Augmentation
Random Resized Crop | Image Data Augmentation
Cosine Annealing | Learning Rate Schedules
Nesterov Accelerated Gradient | Stochastic Optimization
Weight Decay | Regularization
ReLU | Activation Functions
Big-Little Module | Skip Connection Blocks
Big-Little Net | Convolutional Neural Networks
Residual Connection | Skip Connections
Convolution | Convolutions
ResNet | Convolutional Neural Networks
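
Read together, the table above amounts to a fairly standard training recipe. The sketch below assembles the listed ingredients in PyTorch/torchvision: random resized crop and horizontal flip augmentation, Nesterov accelerated gradient with weight decay, cosine annealing of the learning rate, and a softmax classifier over globally average-pooled features. The model stub and all hyperparameter values are placeholders for illustration, not values taken from the paper.

```python
import torchvision.transforms as T
from torch import nn, optim

# Hypothetical model: a stand-in for a bL-ResNet/bL-ResNeXt built from
# the blocks listed above (bottleneck residual blocks, grouped
# convolutions, Big-Little modules, ...).
model = nn.Sequential(
    # PyTorch conv layers use Kaiming initialization by default.
    nn.Conv2d(3, 64, 7, stride=2, padding=3, bias=False),
    nn.BatchNorm2d(64),
    nn.ReLU(inplace=True),
    nn.AdaptiveAvgPool2d(1),  # global average pooling
    nn.Flatten(),
    nn.Linear(64, 1000),      # linear classifier over 1000 classes
)

# Image data augmentation from the table: random resized crop + flip.
train_transform = T.Compose([
    T.RandomResizedCrop(224),
    T.RandomHorizontalFlip(),
    T.ToTensor(),
])

# Nesterov accelerated gradient with weight decay (illustrative values).
optimizer = optim.SGD(model.parameters(), lr=0.1, momentum=0.9,
                      nesterov=True, weight_decay=1e-4)

# Cosine annealing of the learning rate over a hypothetical 100 epochs.
scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=100)

# Softmax is folded into the loss rather than applied in the model.
criterion = nn.CrossEntropyLoss()
```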