no code implementations • 3 Nov 2023 • Changdae Oh, Hyesu Lim, Mijoo Kim, Jaegul Choo, Alexander Hauptmann, Zhi-Qi Cheng, Kyungwoo Song
Robust fine-tuning aims to preserve performance on out-of-distribution (OOD) samples, a goal that is sometimes compromised by pursuing adaptation to in-distribution (ID) samples.
1 code implementation • 25 Sep 2023 • Minseok Choi, Hyesu Lim, Jaegul Choo
Document-level relation extraction (DocRE) aims to extract relations of all entity pairs in a document.
no code implementations • 10 Feb 2023 • Hyesu Lim, Byeonggeun Kim, Jaegul Choo, Sungha Choi
In this paper, we identify that CBN and TBN stand in a trade-off relationship, and we present a new test-time normalization (TTN) method that interpolates their statistics, adjusting the relative importance of CBN and TBN according to the domain-shift sensitivity of each BN layer.
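The interpolation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names and the per-layer weight `alpha` (assumed here to be precomputed from the layer's domain-shift sensitivity, with `alpha → 1` favoring test-batch statistics) are hypothetical.

```python
import numpy as np

def interpolated_bn_stats(source_mean, source_var, x_test, alpha):
    """Blend source statistics (CBN) with test-batch statistics (TBN).

    alpha in [0, 1] is a per-layer mixing weight; in this sketch it is
    assumed to be given, standing in for the sensitivity-based weight
    learned by the method.
    """
    test_mean = x_test.mean(axis=0)
    test_var = x_test.var(axis=0)
    mean = alpha * test_mean + (1.0 - alpha) * source_mean
    var = alpha * test_var + (1.0 - alpha) * source_var
    return mean, var

def normalize(x, mean, var, eps=1e-5):
    """Standard BN normalization with the blended statistics."""
    return (x - mean) / np.sqrt(var + eps)

# Toy usage: a domain-shifted test batch normalized with blended stats.
rng = np.random.default_rng(0)
src_mean, src_var = np.zeros(4), np.ones(4)       # stored source stats
x = rng.normal(loc=2.0, scale=1.0, size=(32, 4))  # shifted test batch
mean, var = interpolated_bn_stats(src_mean, src_var, x, alpha=0.8)
y = normalize(x, mean, var)
```

At `alpha = 0` this reduces to using the stored source statistics (CBN); at `alpha = 1` it reduces to pure test-batch normalization (TBN).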
1 code implementation • EMNLP 2021 • Jimin Hong, Taehee Kim, Hyesu Lim, Jaegul Choo
During the fine-tuning phase of transfer learning, the pretrained vocabulary typically remains unchanged while the model parameters are updated.