Better Pseudo-label: Joint Domain-aware Label and Dual-classifier for Semi-supervised Domain Generalization

With the goal of directly generalizing a trained model to unseen target domains, domain generalization (DG), a newly proposed learning paradigm, has attracted considerable attention. Previous DG models usually require a sufficient quantity of annotated samples from observed source domains during training. In this paper, we relax this full-annotation requirement and investigate semi-supervised domain generalization (SSDG), where only one source domain is fully annotated and the remaining source domains are entirely unlabeled during training. Facing the twin challenges of bridging the domain gap between observed source domains and predicting on unseen target domains, we propose a novel deep framework that jointly exploits domain-aware labels and a dual classifier to produce high-quality pseudo-labels. Concretely, to predict accurate pseudo-labels under domain shift, a domain-aware pseudo-labeling module is developed. Moreover, generalization and pseudo-labeling have conflicting goals: the former seeks to avoid overfitting to any source domain, whereas the latter may overfit the unlabeled source domains in pursuit of high accuracy. We therefore employ a dual classifier that performs pseudo-labeling and domain generalization independently during training. Once accurate pseudo-labels are generated for the unlabeled source domains, a domain mixup operation is applied to synthesize new domains between the labeled and unlabeled ones, which further boosts the generalization capability of the model. Extensive results on publicly available DG benchmark datasets demonstrate the efficacy of our proposed SSDG method.
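The domain mixup step described above can be read as a standard mixup applied across domains: samples from the fully labeled source domain are convexly combined with pseudo-labeled samples from an unlabeled source domain, producing interpolated inputs and soft targets. Below is a minimal PyTorch-style sketch under that assumption; the function name `domain_mixup`, the Beta(α, α) mixing coefficient, and all argument names are illustrative conventions from the mixup literature, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def domain_mixup(x_labeled, y_labeled, x_unlabeled, y_pseudo,
                 num_classes, alpha=0.2):
    """Mix a labeled-domain batch with a pseudo-labeled batch from an
    unlabeled source domain via convex combination (hypothetical sketch).

    x_labeled, x_unlabeled: input batches of identical shape, e.g. (B, C, H, W)
    y_labeled, y_pseudo:    integer class labels / pseudo-labels of shape (B,)
    """
    # Sample a mixing coefficient from Beta(alpha, alpha), as in standard mixup.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    # One-hot encode the hard labels so they can be linearly interpolated.
    y_l = F.one_hot(y_labeled, num_classes).float()
    y_u = F.one_hot(y_pseudo, num_classes).float()
    # Interpolate inputs and targets to form samples from a "new" domain.
    x_mix = lam * x_labeled + (1.0 - lam) * x_unlabeled
    y_mix = lam * y_l + (1.0 - lam) * y_u
    return x_mix, y_mix
```

The mixed batch would then be trained against the soft targets `y_mix` (e.g. with a cross-entropy on soft labels), so the generalization classifier sees interpolated domains rather than only the observed ones.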
