Search Results for author: Guanhuan Huang

Found 2 papers, 0 papers with code

Bi-Granularity Contrastive Learning for Post-Training in Few-Shot Scene

no code implementations • Findings (ACL) 2021 • Ruikun Luo, Guanhuan Huang, Xiaojun Quan

The major paradigm for applying a pre-trained language model to downstream tasks is to fine-tune it on labeled task data, which often suffers from instability and low performance when labeled examples are scarce. One way to alleviate this problem is to apply post-training on unlabeled task data before fine-tuning, adapting the pre-trained model to the target domain via contrastive learning that considers either token-level or sequence-level similarity.
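The sequence-level contrastive objective mentioned above is typically an InfoNCE-style loss: two augmented views of the same sequence form a positive pair, and other sequences in the batch act as negatives. A minimal NumPy sketch of such a loss (the function name and temperature value are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """Sequence-level contrastive (InfoNCE-style) loss sketch.

    anchors, positives: (N, d) L2-normalized sequence embeddings.
    Row i of `positives` is another view of the sequence in row i of
    `anchors`; the remaining N-1 rows serve as in-batch negatives.
    """
    # Cosine similarities scaled by temperature: (N, N) matrix.
    sim = anchors @ positives.T / temperature
    # Subtract the row max for numerical stability before softmax.
    sim = sim - sim.max(axis=1, keepdims=True)
    # Log-softmax over each row; the diagonal holds the positive pairs.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

# Usage: embeddings for 4 sequences, each positive a slightly perturbed view.
rng = np.random.default_rng(0)
a = rng.normal(size=(4, 8))
a /= np.linalg.norm(a, axis=1, keepdims=True)
p = a + 0.05 * rng.normal(size=(4, 8))
p /= np.linalg.norm(p, axis=1, keepdims=True)
loss = info_nce_loss(a, p)
```

Minimizing this loss pulls each anchor toward its own augmented view and pushes it away from the other sequences in the batch; a token-level variant would apply the same idea to aligned token embeddings instead of pooled sequence embeddings.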

Tasks: Contrastive Learning, Data Augmentation, +2
