Search Results for author: Suho Lee

Found 2 papers, 2 papers with code

StochCA: A Novel Approach for Exploiting Pretrained Models with Cross-Attention

1 code implementation · 25 Feb 2024 · Seungwon Seo, Suho Lee, Sangheum Hwang

By doing so, the queries and channel-mixing multi-layer perceptron (MLP) layers of a target model are fine-tuned on target tasks, learning to effectively exploit the rich representations of pretrained models.

Domain Generalization · Transfer Learning
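The snippet above describes fine-tuning a target model's queries against a pretrained model's representations via cross-attention. A minimal NumPy sketch of that idea is shown below; the stochastic switch between self- and cross-attention, the tensor shapes, and the probability `p` are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Scaled dot-product attention: queries attend to keys/values.
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)
    return softmax(scores) @ v

rng = np.random.default_rng(0)
tokens, d = 4, 8

# Queries come from the (trainable) target model; keys/values come
# from a frozen pretrained model (hypothetical shapes for illustration).
q_target = rng.normal(size=(tokens, d))
k_pre = rng.normal(size=(tokens, d))
v_pre = rng.normal(size=(tokens, d))
k_self = rng.normal(size=(tokens, d))
v_self = rng.normal(size=(tokens, d))

# Stochastic choice per layer: with probability p, cross-attend to the
# pretrained model's keys/values instead of the target's own (assumed p).
p = 0.5
if rng.random() < p:
    out = attention(q_target, k_pre, v_pre)    # cross-attention path
else:
    out = attention(q_target, k_self, v_self)  # self-attention path

print(out.shape)
```

Only the query projections (and, per the snippet, the channel-mixing MLP layers) would receive gradients in this scheme; the pretrained keys and values stay frozen.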

Few-shot Fine-tuning is All You Need for Source-free Domain Adaptation

1 code implementation · 3 Apr 2023 · Suho Lee, Seungwon Seo, Jihyo Kim, Yejin Lee, Sangheum Hwang

These limitations include the lack of a principled way to determine optimal hyperparameters, and performance degradation when the unlabeled target data fail to meet certain requirements, such as a closed-set label space and a label distribution identical to that of the source data.

Source-Free Domain Adaptation · Unsupervised Domain Adaptation
