1 code implementation • ACL 2022 • Dongha Choi, HongSeok Choi, Hyunju Lee
In this study, we propose a domain knowledge transferring (DoKTra) framework for PLMs without additional in-domain pretraining.
1 code implementation • ACL 2022 • HongSeok Choi, Dongha Choi, Hyunju Lee
The proposed method is advantageous because it does not require a separate validation set and provides a better stopping point by using a large unlabeled set.
no code implementations • Findings of the Association for Computational Linguistics 2020 • Dongha Choi, Hyunju Lee
To estimate data uncertainty and improve reliability, "calibration" techniques have been applied to deep learning models.
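As an illustration of what calibration means in this context, the sketch below shows temperature scaling, one common post-hoc calibration technique. This is a generic example, not necessarily the method used in the paper above; the function names and the chosen temperature are illustrative assumptions.

```python
# Hedged sketch: temperature scaling, a common post-hoc calibration
# technique (illustrative only; not claimed to be this paper's method).
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def calibrate(logits, temperature):
    # Dividing logits by T > 1 softens overconfident predictions;
    # T = 1 leaves the original softmax distribution unchanged.
    return softmax(np.asarray(logits, dtype=float) / temperature)

logits = [4.0, 1.0, 0.5]   # hypothetical model outputs for three classes
p_raw = calibrate(logits, 1.0)  # uncalibrated probabilities
p_cal = calibrate(logits, 2.0)  # softened probabilities
```

With T = 2 the top-class probability decreases while the predicted class (the argmax) stays the same, which is why temperature scaling preserves accuracy while reducing overconfidence. In practice T is fit on a held-out set by minimizing negative log-likelihood.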