MixMo: Mixing Multiple Inputs for Multiple Outputs via Deep Subnetworks

ICCV 2021 · Alexandre Rame, Remy Sun, Matthieu Cord

Recent strategies have achieved ensembling "for free" by fitting diverse subnetworks concurrently inside a single base network. The main idea during training is that each subnetwork learns to classify only one of the multiple inputs provided simultaneously. However, the question of how best to mix these multiple inputs has not been studied so far. In this paper, we introduce MixMo, a new generalized framework for learning multi-input multi-output deep subnetworks. Our key motivation is to replace the suboptimal summing operation hidden in previous approaches with a more appropriate mixing mechanism. For that purpose, we draw inspiration from successful mixed sample data augmentations. We show that binary mixing in feature space, particularly with rectangular patches from CutMix, enhances results by making subnetworks stronger and more diverse. We improve the state of the art for image classification on the CIFAR-100 and Tiny ImageNet datasets. Our easy-to-implement models notably outperform data-augmented deep ensembles, without their inference and memory overheads. As we operate in feature space and simply make better use of the expressiveness of large networks, we open a new line of research complementary to previous works.
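
The mixing mechanism described in the abstract can be sketched in a few lines of PyTorch. The following is a minimal illustration rather than the authors' reference implementation: it assumes a backbone split into one small stem per input and a shared trunk ending in global pooling, and the names (`MixMoNet`, `sample_cutmix_mask`, `lam`) are hypothetical.

```python
import torch
import torch.nn as nn

def sample_cutmix_mask(feats: torch.Tensor, lam: float) -> torch.Tensor:
    """Binary (B, 1, H, W) mask whose rectangular patch covers ~lam of the area."""
    _, _, h, w = feats.shape
    cut_h, cut_w = int(h * lam ** 0.5), int(w * lam ** 0.5)
    cy, cx = torch.randint(h, (1,)).item(), torch.randint(w, (1,)).item()
    y1, y2 = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, h)
    x1, x2 = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, w)
    mask = torch.zeros_like(feats[:, :1])
    mask[:, :, y1:y2, x1:x2] = 1.0
    return mask

class MixMoNet(nn.Module):
    """Two inputs -> binary feature mixing -> shared trunk -> two predictions."""
    def __init__(self, trunk: nn.Module, feat_dim: int, num_classes: int):
        super().__init__()
        self.stem0 = nn.Conv2d(3, 16, 3, padding=1)    # per-input embedding
        self.stem1 = nn.Conv2d(3, 16, 3, padding=1)
        self.trunk = trunk                             # shared feature extractor
        self.head0 = nn.Linear(feat_dim, num_classes)  # one head per input
        self.head1 = nn.Linear(feat_dim, num_classes)

    def forward(self, x0, x1, lam):
        f0, f1 = self.stem0(x0), self.stem1(x1)
        m = sample_cutmix_mask(f0, lam)
        # Binary patch mixing in feature space, replacing the summing of
        # earlier approaches; the factor 2 preserves the expected magnitude.
        mixed = 2.0 * (m * f0 + (1.0 - m) * f1)
        z = self.trunk(mixed).flatten(1)  # trunk assumed to pool to (B, feat_dim)
        return self.head0(z), self.head1(z)
```

During training, each head is supervised with the label of its own input (the paper additionally reweights the two cross-entropy terms according to the mixing ratio); at inference, the same image is fed to both stems and the two predictions are averaged, yielding ensemble-like accuracy at roughly the cost of a single forward pass.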

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|------|---------|-------|-------------|--------------|-------------|
| Image Classification | CIFAR-10 | WRN-28-10 | Percentage correct | 97.73 | #63 |
| Image Classification | CIFAR-10 | WRN-28-10 | PARAMS | 36.5M | #220 |
| Image Classification | CIFAR-100 | WRN-28-10 | Percentage correct | 85.77 | #58 |
| Image Classification | CIFAR-100 | WRN-28-10 * 3 | Percentage correct | 86.81 | #51 |
| Image Classification | Tiny ImageNet Classification | PreActResNet-18-3 | Validation Acc | 70.24% | #15 |
