no code implementations • COLING 2022 • Guanhuan Huang, Xiaojun Quan, Qifan Wang
In either approach, the systems may generate a response with conflicting entity information.
no code implementations • Findings (ACL) 2021 • Ruikun Luo, Guanhuan Huang, Xiaojun Quan
The major paradigm of applying a pre-trained language model to downstream tasks is to fine-tune it on labeled task data, which often suffers from instability and low performance when the labeled examples are scarce. One way to alleviate this problem is to apply post-training on unlabeled task data before fine-tuning, adapting the pre-trained model to target domains by contrastive learning that considers either token-level or sequence-level similarity.
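To make the sequence-level variant concrete, below is a minimal sketch of an in-batch InfoNCE contrastive objective of the kind such post-training typically builds on. This is an illustration under assumed choices (mean-pooled sequence embeddings, dropout-based positive views, a temperature of 0.1), not the paper's exact formulation; the function name and all parameters are hypothetical.

```python
import torch
import torch.nn.functional as F

def sequence_contrastive_loss(anchor: torch.Tensor,
                              positive: torch.Tensor,
                              temperature: float = 0.1) -> torch.Tensor:
    """Sequence-level InfoNCE loss with in-batch negatives (illustrative).

    anchor, positive: (batch, hidden) pooled embeddings of two views of the
    same unlabeled task-domain sentences (e.g. two forward passes with
    different dropout masks). Every other sequence in the batch serves
    as a negative for a given anchor.
    """
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    # Cosine similarity of each anchor against every candidate in the batch.
    logits = anchor @ positive.t() / temperature   # (batch, batch)
    # The matching (diagonal) pair is the positive class for each anchor.
    targets = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, targets)

# Assumed usage: mean-pool the encoder's last hidden states over tokens for
# two passes of the same unlabeled batch, then minimize this loss (often
# alongside masked language modeling) during post-training, before the
# usual fine-tuning on the scarce labeled examples.
```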