Unleash the Potential of Adaptation Models via Dynamic Domain Labels

In this paper, we propose an embarrassingly simple yet highly effective adversarial domain adaptation (ADA) method for training well-aligned models. We view the ADA problem primarily from a neural-network memorization perspective and point out a fundamental dilemma: real-world data often exhibit an imbalanced distribution in which the majority data clusters dominate and bias the adaptation process. Unlike prior works that attempt loss re-weighting or data re-sampling to alleviate this defect, we introduce the concept of dynamic domain labels (DDLs), which replace the original immutable domain labels on the fly. DDLs adaptively and promptly transfer the model's attention from over-memorized aligned data to easily overlooked samples, allowing each sample to be well studied and fully unleashing the potential of the adaptation model. Albeit simple, this dynamic adversarial domain adaptation (DADA) framework with DDLs effectively promotes adaptation. Empirical results on real and synthetic data, as well as toy games, demonstrate that our method leads to efficient training without bells and whistles while remaining robust across different backbones.
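The abstract does not specify the exact DDL update rule, so the following is only a minimal sketch of the general idea, under assumed details: a discriminator outputs the probability that each sample belongs to its original domain, a confidence margin `tau` (a hypothetical hyperparameter) decides when a sample counts as "over-memorized," and such samples have their domain label flipped on the fly so the adversarial loss shifts attention to overlooked samples.

```python
import numpy as np

def dynamic_domain_labels(domain_probs, orig_labels, tau=0.2):
    """Illustrative sketch of dynamic domain labels (DDLs); not the
    paper's actual rule, which the abstract does not detail.

    domain_probs: discriminator's predicted probability that each sample
                  belongs to its original domain, shape (N,).
    orig_labels:  original immutable domain labels (0 = source, 1 = target).
    tau:          assumed confidence margin for "over-memorized" samples.
    """
    # Samples the discriminator already classifies with high confidence
    # are treated as over-memorized; flipping their labels redirects the
    # adversarial signal toward easily overlooked samples.
    over_memorized = domain_probs > 1.0 - tau
    return np.where(over_memorized, 1 - orig_labels, orig_labels)

# Example: the first and third samples are confidently classified,
# so their labels flip; the uncertain middle sample keeps its label.
ddl = dynamic_domain_labels(np.array([0.95, 0.50, 0.90]),
                            np.array([0, 1, 1]), tau=0.2)
```

In a full training loop these labels would replace the fixed domain labels in the discriminator's loss at each step, so the targets a sample receives change dynamically as its alignment improves.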
