ChimeraMix: Image Classification on Small Datasets via Masked Feature Mixing

23 Feb 2022 · Christoph Reinders, Frederik Schubert, Bodo Rosenhahn

Deep convolutional neural networks require large amounts of labeled data samples. For many real-world applications, this is a major limitation that is commonly addressed with augmentation methods. In this work, we address the problem of training deep neural networks on small datasets. Our proposed architecture, called ChimeraMix, learns a data augmentation by generating compositions of instances. The generative model encodes images in pairs, combines their features guided by a mask, and creates new samples. For evaluation, all methods are trained from scratch without any additional data. Several experiments on benchmark datasets, e.g. ciFAIR-10, STL-10, and ciFAIR-100, demonstrate the superior performance of ChimeraMix compared to current state-of-the-art methods for classification on small datasets.
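The core idea of combining two encoded images guided by a mask can be sketched as follows. This is a minimal NumPy illustration of masked feature mixing, not the paper's implementation: ChimeraMix uses learned encoders, a generator network, and sampled masks, whereas here the features, mask, and function name `masked_feature_mix` are all illustrative assumptions.

```python
import numpy as np

def masked_feature_mix(feat_a, feat_b, mask):
    """Combine two feature maps guided by a binary spatial mask.

    feat_a, feat_b: arrays of shape (C, H, W), e.g. encoded features of an image pair.
    mask: array of shape (H, W) with values in {0, 1}, broadcast over channels.
    Returns a mixed feature map taking feat_a where mask == 1 and feat_b elsewhere.
    """
    m = mask[None, :, :]  # add channel axis for broadcasting
    return m * feat_a + (1.0 - m) * feat_b

# Toy example: mix two 4-channel 8x8 feature maps with a half-and-half mask.
rng = np.random.default_rng(0)
feat_a = rng.normal(size=(4, 8, 8))
feat_b = rng.normal(size=(4, 8, 8))
mask = np.zeros((8, 8))
mask[:, :4] = 1.0  # left half taken from the first image, right half from the second
mixed = masked_feature_mix(feat_a, feat_b, mask)
```

Decoding such a mixed feature map yields a new "chimera" sample composed of parts of both inputs, which is what drives the augmentation effect reported in the paper.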

| Task | Dataset | Model | Metric | Value | Global Rank |
|---|---|---|---|---|---|
| Small Data Image Classification | ciFAIR-10, 50 samples per class | ChimeraMix+AutoAugment | Accuracy | 70.09 | #1 |
| Small Data Image Classification | ciFAIR-10, 50 samples per class | ChimeraMix | Accuracy | 67.30 | #2 |
| Small Data Image Classification | CIFAR-100, 1000 Labels | ChimeraMix | Accuracy | 32.72 | #2 |
| Small Data Image Classification | CIFAR-100, 1000 Labels | ChimeraMix+AutoAugment | Accuracy | 35.02 | #1 |
| Small Data Image Classification | CIFAR-10, 1000 Labels | ChimeraMix+AutoAugment | Accuracy (%) | 76.76 | #1 |
| Small Data Image Classification | CIFAR-10, 1000 Labels | ChimeraMix | Accuracy (%) | 74.96 | #2 |
| Small Data Image Classification | CIFAR-10, 100 Labels | ChimeraMix+AutoAugment | Accuracy (%) | 49.75 | #1 |
| Small Data Image Classification | CIFAR-10, 100 Labels | ChimeraMix | Accuracy (%) | 47.6 | #2 |
| Small Data Image Classification | CIFAR-10, 500 Labels | ChimeraMix | Accuracy (%) | 67.3 | #2 |
| Small Data Image Classification | CIFAR-10, 500 Labels | ChimeraMix+AutoAugment | Accuracy (%) | 70.09 | #1 |
