AssembleNet: Searching for Multi-Stream Neural Connectivity in Video Architectures

Learning to represent videos is challenging both algorithmically and computationally. Standard video CNN architectures have been designed by directly extending architectures devised for image understanding to include the time dimension, using modules such as 3D convolutions, or by using two-stream designs to capture both appearance and motion in videos. We interpret a video CNN as a collection of multi-stream convolutional blocks connected to each other, and propose an approach to automatically find neural architectures with better connectivity and spatio-temporal interactions for video understanding. This is done by evolving a population of overly-connected architectures guided by connection weight learning. We search for architectures that combine representations abstracting different input types (i.e., RGB and optical flow) at multiple temporal resolutions, allowing different types or sources of information to interact with each other. Our method, referred to as AssembleNet, outperforms prior approaches on public video datasets, in some cases by a great margin. We obtain 58.6% mAP on Charades and 34.27% top-1 accuracy on Moments-in-Time.
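The "connection weight learning" that guides the evolutionary search can be illustrated with a minimal sketch: each block receives a softmax-weighted sum of the outputs of its candidate input blocks, and the weights are learnable, so weak connections can be identified and pruned. This is a simplified NumPy illustration, not the authors' implementation; the class and function names (`softmax`, `WeightedMerge`) are hypothetical.

```python
import numpy as np


def softmax(logits):
    """Numerically stable softmax over a 1-D array of connection logits."""
    e = np.exp(logits - np.max(logits))
    return e / e.sum()


class WeightedMerge:
    """Merges the outputs of several candidate input streams.

    Each incoming connection gets a learnable logit; the merge is a
    convex combination of the stream outputs (hypothetical sketch of
    connection-weight learning, not AssembleNet's actual code).
    """

    def __init__(self, num_inputs, seed=0):
        rng = np.random.default_rng(seed)
        # Learnable connection logits, one per candidate input stream.
        self.logits = rng.normal(size=num_inputs)

    def forward(self, stream_outputs):
        # Convex combination: weights are non-negative and sum to 1.
        weights = softmax(self.logits)
        return sum(w * x for w, x in zip(weights, stream_outputs))


# Example: merge three same-shaped feature maps from different streams.
merge = WeightedMerge(num_inputs=3)
streams = [np.full((2, 4), v) for v in (1.0, 2.0, 3.0)]
merged = merge.forward(streams)
```

After (hypothetical) training, connections whose learned weights fall near zero would be dropped, yielding the sparser final connectivity the search selects.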

Published at ICLR 2020.
| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Action Classification | Charades | AssembleNet | mAP | 58.6 | # 4 |
| Action Classification | Charades | AssembleNet-101 | mAP | 58.6 | # 4 |
| Action Classification | Moments in Time | AssembleNet | Top-1 Accuracy | 34.27% | # 10 |
| Action Classification | Moments in Time | AssembleNet | Top-5 Accuracy | 62.71% | # 4 |
| Multimodal Activity Recognition | Moments in Time | AssembleNet | Top-1 (%) | 34.27 | # 1 |
| Multimodal Activity Recognition | Moments in Time | AssembleNet | Top-5 (%) | 62.71 | # 1 |
