1 code implementation • 25 Jul 2023 • Chengsong Huang, Qian Liu, Bill Yuchen Lin, Tianyu Pang, Chao Du, Min Lin
This paper investigates LoRA composability for cross-task generalization and introduces LoraHub, a simple framework devised for the purposive assembly of LoRA modules trained on diverse given tasks, with the objective of achieving adaptable performance on unseen tasks.
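The idea of composing LoRA modules can be sketched as a weighted sum of their low-rank updates applied on top of a frozen base weight. This is only an illustrative sketch with NumPy and made-up dimensions; LoraHub's actual search over mixing weights and its module format are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, k = 8, 2, 3  # hidden size, LoRA rank, number of modules (illustrative)

# Hypothetical LoRA modules: each contributes a low-rank update B_i @ A_i
modules = [(rng.normal(size=(d, r)), rng.normal(size=(r, d))) for _ in range(k)]

def compose(weights):
    """Weighted sum of low-rank updates -- the core of LoRA composition."""
    return sum(w * (B @ A) for w, (B, A) in zip(weights, modules))

W0 = rng.normal(size=(d, d))          # frozen base weight
delta = compose([0.5, 0.3, 0.2])      # candidate mixing weights (hypothetical)
W_adapted = W0 + delta                # adapted weight for the unseen task
```

In LoraHub the mixing weights themselves are optimized on a few examples of the unseen task, so `compose` would sit inside a search loop rather than take fixed weights.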
no code implementations • 14 Oct 2022 • Chenxi Gu, Chengsong Huang, Xiaoqing Zheng, Kai-Wei Chang, Cho-Jui Hsieh
Large pre-trained language models (PLMs) have proven to be a crucial component of modern natural language processing systems.
no code implementations • 29 Aug 2022 • Bill Yuchen Lin, Chengsong Huang, Qian Liu, Wenda Gu, Sam Sommerer, Xiang Ren
Language models (LMs) have demonstrated their capability in possessing commonsense knowledge of the physical world, a crucial aspect of performing tasks in everyday life.
1 code implementation • ACL 2021 • Chenhao Xie, Jiaqing Liang, Jingping Liu, Chengsong Huang, Wenhao Huang, Yanghua Xiao
Next, we formulate relation extraction as a positive-unlabeled learning task to alleviate the false negative problem.
Ranked #1 on Relation Extraction on NYT11-HRL
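Positive-unlabeled (PU) learning treats unannotated pairs as a mix of positives and negatives rather than as confirmed negatives. The abstract does not specify which estimator the paper uses; a common non-negative PU risk (in the style of nnPU) can be sketched as follows, with the class prior and scores as illustrative inputs.

```python
import numpy as np

def logistic_loss(z):
    # Numerically plain logistic loss; fine for moderate score magnitudes
    return np.log1p(np.exp(-z))

def nn_pu_risk(scores_pos, scores_unl, prior):
    """Non-negative PU risk estimate (illustrative, not necessarily the paper's).

    scores_pos: classifier scores on labeled-positive examples
    scores_unl: classifier scores on unlabeled examples
    prior:      assumed positive class prior in the unlabeled set
    """
    r_p_pos = logistic_loss(scores_pos).mean()   # positives scored as positive
    r_p_neg = logistic_loss(-scores_pos).mean()  # positives scored as negative
    r_u_neg = logistic_loss(-scores_unl).mean()  # unlabeled scored as negative
    # Clamp the estimated negative risk at zero to avoid it going negative
    return prior * r_p_pos + max(0.0, r_u_neg - prior * r_p_neg)
```

Training a relation classifier against such a risk lets unlabeled entity pairs contribute without being forced into the negative class, which is how PU formulations mitigate false negatives from incomplete annotation.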