Meta Variance Transfer: Learning to Augment from the Others

Humans can robustly recognize objects under various factors of variation, such as non-rigid transformations, background noise, and changes in lighting conditions. Deep learning frameworks, in contrast, generally require huge amounts of data with instances under diverse variations to train a robust model. To alleviate the need to collect large datasets and to learn better from scarce samples, we propose a novel meta-learning method that learns to transfer factors of variation from one class to another, improving classification performance on unseen examples. The transferred variations generate virtual samples that augment the feature space of the target class during training, simulating upcoming query samples with similar variations. By sharing factors of variation across different classes, the model becomes more robust to variations in unseen examples and in tasks with only a small number of examples per class. We validate our model on multiple benchmark datasets for few-shot classification and face recognition, where it significantly improves the performance of the base model and outperforms relevant baselines.

ICML 2020
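As a rough illustration of the feature-space augmentation described in the abstract, the sketch below transfers per-sample offsets from a well-sampled source class onto a scarce target class to create virtual samples. This is not the authors' code: the function names, the additive-offset model of variation, and the toy dimensions are assumptions made purely for illustration.

```python
# Minimal sketch of feature-space variance transfer (illustrative, not the paper's method).
# A "variation" is approximated as a source sample's offset from its class prototype
# (mean feature); adding that offset to a target-class feature yields a virtual sample.
import numpy as np

rng = np.random.default_rng(0)

def class_prototype(features):
    """Mean feature vector of one class."""
    return features.mean(axis=0)

def transfer_variation(source_feats, target_feat):
    """Apply each source sample's offset from its prototype to the target feature."""
    source_proto = class_prototype(source_feats)
    offsets = source_feats - source_proto      # assumed additive factors of variation
    return target_feat + offsets               # virtual samples for the target class

# Toy usage: a 20-sample source class and a one-shot target class in a 64-d feature space.
source_features = rng.normal(size=(20, 64))    # source-class embeddings
target_feature = rng.normal(size=(64,))        # single target-class embedding
virtual_samples = transfer_variation(source_features, target_feature)
print(virtual_samples.shape)                   # (20, 64): augmented target-class features
```

In the paper, which variations to transfer and how to apply them is learned by the meta-learner rather than fixed as a simple additive offset; the sketch only conveys the idea of augmenting a scarce class with variations borrowed from other classes.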
