no code implementations • Findings (ACL) 2021 • Ruikun Luo, Guanhuan Huang, Xiaojun Quan
The major paradigm of applying a pre-trained language model to downstream tasks is to fine-tune it on labeled task data, which often suffers from instability and low performance when labeled examples are scarce. One way to alleviate this problem is to apply post-training on unlabeled task data before fine-tuning, adapting the pre-trained model to target domains by contrastive learning that considers either token-level or sequence-level similarity.
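As a rough illustration of the sequence-level case (my own sketch, not the authors' released code), post-training with a contrastive objective can be set up as an in-batch InfoNCE loss over sentence embeddings of two views of the same unlabeled task sentences; the encoder, view construction, and temperature value below are assumptions for the example.

```python
import torch
import torch.nn.functional as F

def sequence_contrastive_loss(z1, z2, temperature=0.05):
    """InfoNCE-style loss over two views of the same batch of sentences.

    z1, z2: (batch, dim) sentence embeddings of two augmented views.
    Matching rows are positives; all other in-batch rows act as negatives.
    (Illustrative sketch only; temperature and pooling are assumptions.)
    """
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature                  # (batch, batch) similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)

# Usage with any encoder that maps token ids to a pooled sentence vector:
#   z1 = encoder(batch_view_1); z2 = encoder(batch_view_2)
#   loss = sequence_contrastive_loss(z1, z2)
```

A token-level variant would apply the same idea to aligned token embeddings instead of pooled sentence vectors.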
no code implementations • 21 Oct 2013 • Fanyi Xiao, Ruikun Luo, Zhiding Yu
In this paper, we propose a multi-task linear classifier learning problem called D-SVM (Dictionary SVM).
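One plausible reading of a dictionary-based multi-task SVM, sketched below purely for illustration, is to model each task's weight vector as a combination of shared dictionary atoms, w_t = D a_t; this factorization, the squared-hinge loss, and the joint gradient-descent training are my assumptions, not the paper's exact formulation.

```python
import numpy as np

def dsvm_fit(Xs, ys, n_atoms=8, lam=0.1, lr=0.01, epochs=200, seed=0):
    """Toy multi-task linear SVM with a shared dictionary (illustrative only).

    Xs, ys: lists of per-task data matrices (n_t, d) and labels in {-1, +1}.
    Each task's weights are w_t = D @ a_t, with D (d, n_atoms) shared across
    tasks and a_t task-specific codes; trained by joint gradient descent on a
    squared-hinge loss plus L2 penalties on D and the codes.
    """
    rng = np.random.default_rng(seed)
    d = Xs[0].shape[1]
    D = rng.normal(scale=0.1, size=(d, n_atoms))
    A = [rng.normal(scale=0.1, size=n_atoms) for _ in Xs]

    for _ in range(epochs):
        grad_D = lam * D
        for t, (X, y) in enumerate(zip(Xs, ys)):
            w = D @ A[t]
            slack = np.maximum(0.0, 1.0 - y * (X @ w))   # squared-hinge slack
            g_w = -2.0 * (X * (y * slack)[:, None]).sum(0) / len(y)
            grad_D += np.outer(g_w, A[t])                # chain rule through w = D a_t
            A[t] -= lr * (D.T @ g_w + lam * A[t])
        D -= lr * grad_D
    return D, A
```

The shared dictionary couples the tasks: information learned on one task's data can transfer to the others through the atoms of D.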