RandoMix: A mixed sample data augmentation method with multiple mixed modes

18 May 2022 · Xiaoliang Liu, Furao Shen, Jian Zhao, Changhai Nie

Data augmentation plays a crucial role in enhancing the robustness and performance of machine learning models across various domains. In this study, we introduce a novel mixed-sample data augmentation method called RandoMix. RandoMix is specifically designed to address robustness and diversity challenges simultaneously. It combines linear and mask-based mixing modes and introduces flexibility in both candidate selection and weight adjustment. We evaluate the effectiveness of RandoMix on diverse datasets, including CIFAR-10/100, Tiny-ImageNet, ImageNet, and Google Speech Commands. Our results demonstrate its superior performance compared to existing techniques such as Mixup, CutMix, FMix, and ResizeMix. Notably, RandoMix excels at enhancing model robustness against adversarial noise, natural noise, and sample occlusion. The comprehensive experimental results and insights into parameter tuning underscore the potential of RandoMix as a versatile and effective data augmentation method. Moreover, it integrates seamlessly into the training pipeline.
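The abstract does not spell out the exact mixing procedure, but the following minimal sketch illustrates the general idea of a multi-mode mixed-sample augmentation: for each batch, either a linear (Mixup-style) combination or a rectangular-mask (CutMix-style) paste is randomly selected, and labels are mixed with the corresponding weight. The function name `randomix_batch`, the mode probabilities, and the Beta-distributed weight are assumptions for illustration and may differ from the paper's actual candidate set and weighting scheme.

```python
import numpy as np
import torch
import torch.nn.functional as F


def randomix_batch(images, labels, num_classes, alpha=1.0, linear_prob=0.5):
    """Hypothetical sketch of a multi-mode mixed-sample augmentation.

    Randomly chooses between a linear (Mixup-style) mix and a
    rectangular-mask (CutMix-style) mix for the batch. The exact
    candidate selection and weighting used by RandoMix may differ.
    """
    batch_size = images.size(0)
    perm = torch.randperm(batch_size)            # partner samples to mix with
    lam = np.random.beta(alpha, alpha)           # mixing weight in [0, 1]
    one_hot = F.one_hot(labels, num_classes).float()

    if np.random.rand() < linear_prob:
        # Linear mode: pixel-wise convex combination of image pairs.
        mixed = lam * images + (1.0 - lam) * images[perm]
    else:
        # Mask mode: paste a rectangular patch from the partner image.
        h, w = images.shape[-2:]
        cut_h = int(h * np.sqrt(1.0 - lam))
        cut_w = int(w * np.sqrt(1.0 - lam))
        cy, cx = np.random.randint(h), np.random.randint(w)
        y1, y2 = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, h)
        x1, x2 = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, w)
        mixed = images.clone()
        mixed[..., y1:y2, x1:x2] = images[perm][..., y1:y2, x1:x2]
        # Recompute the weight from the actual pasted area.
        lam = 1.0 - (y2 - y1) * (x2 - x1) / (h * w)

    mixed_labels = lam * one_hot + (1.0 - lam) * one_hot[perm]
    return mixed, mixed_labels
```

In a training loop, the mixed images are fed to the model as usual and the loss is computed against the soft mixed labels (e.g., with a cross-entropy on the label distribution), so the augmentation drops into an existing pipeline without architectural changes.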
