1 code implementation • CVPR 2023 • Junha Song, Jungsoo Lee, In So Kweon, Sungha Choi
Second, our novel self-distilled regularization keeps the output of the meta networks from deviating significantly from the output of the frozen original networks, thereby preserving well-trained knowledge from the source domain.
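The regularization described above can be sketched as a penalty on the distance between the two outputs. This is a minimal illustrative sketch, not the paper's implementation; the function names, the mean-squared-error form of the penalty, and the weighting coefficient `lam` are all assumptions.

```python
def self_distill_reg(meta_out, frozen_out):
    """Mean squared deviation between the meta network's output and the
    frozen source network's output (illustrative distance choice)."""
    assert len(meta_out) == len(frozen_out)
    return sum((m - f) ** 2 for m, f in zip(meta_out, frozen_out)) / len(meta_out)

def total_loss(task_loss, meta_out, frozen_out, lam=1.0):
    # Total objective: adaptation loss plus the self-distilled penalty,
    # weighted by a hypothetical coefficient lam.
    return task_loss + lam * self_distill_reg(meta_out, frozen_out)
```

When the meta networks' output matches the frozen networks' output exactly, the penalty vanishes and only the task loss drives adaptation.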
no code implementations • 16 Dec 2022 • Junha Song, KwanYong Park, Inkyu Shin, Sanghyun Woo, Chaoning Zhang, In So Kweon
In addition, to prevent overfitting of the TTA model, we devise a novel regularization that modulates the adaptation rate using the domain similarity between the source and the current target domain.
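One plausible reading of the mechanism above is a learning rate scaled by how far the current target domain's statistics drift from the source's. The sketch below is an assumption-laden illustration, not the paper's method: the cosine-similarity measure, the specific `1 - sim` mapping, and all names are hypothetical.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature-statistic vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def modulated_lr(base_lr, source_stats, target_stats):
    # Hypothetical mapping: when target statistics closely match the
    # source (similarity near 1), adapt little to avoid overfitting;
    # when they diverge, allow a larger adaptation rate.
    sim = cosine_similarity(source_stats, target_stats)  # in [-1, 1]
    return base_lr * (1.0 - max(0.0, sim))
```

Under this mapping, a target domain identical to the source yields a zero adaptation rate, while an orthogonal one recovers the full base rate.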
no code implementations • 30 Jul 2022 • Chaoning Zhang, Chenshuang Zhang, Junha Song, John Seon Keun Yi, Kang Zhang, In So Kweon
Masked autoencoders are scalable vision learners, as the title of MAE \cite{he2022masked} states, which suggests that self-supervised learning (SSL) in vision might follow a trajectory similar to the one it took in NLP.