1 code implementation • 8 Jul 2024 • Yinquan Lu, Wenhao Zhu, Lei LI, Yu Qiao, Fei Yuan
Large Language Models (LLMs) demonstrate remarkable translation capabilities in high-resource language tasks, yet their performance in low-resource languages is hindered by insufficient multilingual data during pre-training.
1 code implementation • 20 Dec 2022 • Fei Yuan, Yinquan Lu, Wenhao Zhu, Lingpeng Kong, Lei LI, Yu Qiao, Jingjing Xu
To address the need to learn representations for all languages in a unified space, we propose a novel, efficient training recipe, upon which we build an effective detachable model, Lego-MT.
1 code implementation • 9 Sep 2021 • Yinquan Lu, Haonan Lu, Guirong Fu, Qun Liu
Incorporating factual knowledge into pre-trained language models (PLMs) such as BERT is an emerging trend in recent NLP studies.
Ranked #11 on Common Sense Reasoning on ReCoRD