Domain generalization aims to improve the generalization capability of machine learning systems to out-of-distribution (OOD) data.
Furthermore, based on LibFewShot, we provide comprehensive evaluations on multiple benchmarks with various backbone architectures, examining common pitfalls and the effects of different training tricks.
Importantly, we highlight the importance of distribution diversity in augmentation-based pretext few-shot tasks, which effectively alleviates overfitting and enables the few-shot model to learn more robust feature representations.
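To make this concrete, the following is a minimal sketch, not the actual implementation, of how augmentation-based pretext few-shot tasks with a diverse pool of augmentation distributions could be built; the pool contents, episode sizes, and the assumption that images are tensors of a fixed spatial size are illustrative.
\begin{verbatim}
import random
import torch
from torchvision import transforms

# Hypothetical pool of augmentation "distributions": each entry is a
# different transform family, so sampled pretext tasks expose the
# few-shot learner to diverse kinds of perturbations.
AUG_POOL = [
    transforms.Compose([transforms.RandomRotation(30),
                        transforms.RandomCrop(80)]),
    transforms.Compose([transforms.ColorJitter(0.4, 0.4, 0.4),
                        transforms.RandomHorizontalFlip()]),
    transforms.Compose([transforms.RandomResizedCrop(80, scale=(0.5, 1.0)),
                        transforms.GaussianBlur(3)]),
]

def build_pretext_task(images, n_way=5, k_shot=1, q_query=5):
    """Build one augmentation-based pretext few-shot task from unlabeled
    images (assumed to be tensors of shape (C, 84, 84)).

    Each sampled image acts as its own pseudo-class; support and query
    examples are generated by applying transforms drawn from a diverse
    pool of augmentation distributions rather than one fixed policy."""
    idx = random.sample(range(len(images)), n_way)
    support, query, labels_s, labels_q = [], [], [], []
    for label, i in enumerate(idx):
        img = images[i]
        for _ in range(k_shot):
            aug = random.choice(AUG_POOL)  # sample a distribution
            support.append(aug(img))
            labels_s.append(label)
        for _ in range(q_query):
            aug = random.choice(AUG_POOL)
            query.append(aug(img))
            labels_q.append(label)
    return (torch.stack(support), torch.tensor(labels_s),
            torch.stack(query), torch.tensor(labels_q))
\end{verbatim}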
Conventional data augmentation, realized by simple pre-processing operations (\eg, rotation, cropping, \etc), has been shown to improve performance in medical image segmentation.
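In the segmentation setting such simple augmentations must be applied consistently to the image and its mask so that the labels stay pixel-aligned; below is a minimal sketch using torchvision's functional transforms, assuming tensor inputs and with hypothetical parameter choices.
\begin{verbatim}
import random
import torchvision.transforms.functional as TF

def augment_pair(image, mask, max_rotation=15, crop_size=(256, 256)):
    """Apply simple geometric augmentations (rotation, crop, flip) to an
    image/mask pair, reusing the SAME parameters for both tensors."""
    # Random rotation with a shared angle.
    angle = random.uniform(-max_rotation, max_rotation)
    image = TF.rotate(image, angle)
    mask = TF.rotate(mask, angle)

    # Random crop with a shared window.
    i = random.randint(0, image.shape[-2] - crop_size[0])
    j = random.randint(0, image.shape[-1] - crop_size[1])
    image = TF.crop(image, i, j, *crop_size)
    mask = TF.crop(mask, i, j, *crop_size)

    # Random horizontal flip applied to both.
    if random.random() < 0.5:
        image = TF.hflip(image)
        mask = TF.hflip(mask)
    return image, mask
\end{verbatim}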
By introducing a unified optimization objective, DeepAugNet combines data augmentation and deep model training in an end-to-end manner, realized by jointly training a hybrid architecture consisting of a dueling deep Q-learning agent and a surrogate deep model.
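The sketch below illustrates the general idea of such a hybrid scheme, not DeepAugNet's actual implementation: a dueling Q-network selects discrete augmentation actions, the surrogate model is trained on the augmented batch, and the surrogate's improvement on the clean batch is fed back as the agent's reward. All names, the state representation, and the simplified one-step Q-target are assumptions made for illustration.
\begin{verbatim}
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

class DuelingQNet(nn.Module):
    """Dueling Q-network: separate value and advantage streams over a
    discrete set of augmentation actions (rotate, crop, flip, ...)."""
    def __init__(self, state_dim, n_actions):
        super().__init__()
        self.feat = nn.Sequential(nn.Linear(state_dim, 128), nn.ReLU())
        self.value = nn.Linear(128, 1)
        self.adv = nn.Linear(128, n_actions)

    def forward(self, state):
        h = self.feat(state)
        v, a = self.value(h), self.adv(h)
        return v + a - a.mean(dim=-1, keepdim=True)

def train_step(agent, surrogate, agent_opt, model_opt,
               batch, augment_fns, eps=0.1):
    """One joint step of the hybrid scheme (simplified, no bootstrapped
    Q-target): the agent picks an augmentation, the surrogate trains on
    the augmented batch, and the loss improvement is the reward."""
    images, labels = batch
    state = images.mean(dim=(0, 2, 3))  # crude batch statistics as state

    # Epsilon-greedy action selection over the augmentation pool.
    if random.random() < eps:
        action = random.randrange(len(augment_fns))
    else:
        action = agent(state).argmax().item()
    aug_images = augment_fns[action](images)

    with torch.no_grad():
        loss_before = F.cross_entropy(surrogate(images), labels).item()

    # Train the surrogate deep model on the augmented data.
    model_opt.zero_grad()
    loss = F.cross_entropy(surrogate(aug_images), labels)
    loss.backward()
    model_opt.step()

    with torch.no_grad():
        loss_after = F.cross_entropy(surrogate(images), labels).item()

    # Reward: improvement of the surrogate on the clean batch.
    reward = loss_before - loss_after
    q = agent(state)[action]
    agent_opt.zero_grad()
    agent_loss = F.mse_loss(q, torch.tensor(reward))
    agent_loss.backward()
    agent_opt.step()
    return reward
\end{verbatim}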