no code implementations • COLING 2022 • Xiusheng Huang, Hang Yang, Yubo Chen, Jun Zhao, Kang Liu, Weijian Sun, Zuyu Zhao
Document-level relation extraction aims to recognize relations among multiple entity pairs across an entire document.
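As a rough illustration of the task's input/output shape (a hypothetical sketch, not code from the paper; all names and the example document are invented):

```python
from dataclasses import dataclass

# Hypothetical data structures illustrating the document-level RE task:
# the input is a whole document with entity mentions, and the output is
# a relation label for candidate entity pairs.

@dataclass
class Mention:
    entity_id: str  # which entity this mention refers to
    start: int      # token offsets within the document
    end: int

@dataclass
class DocREExample:
    tokens: list[str]                   # the full document, tokenized
    mentions: list[Mention]             # all entity mentions in the document
    labels: dict[tuple[str, str], str]  # (head entity, tail entity) -> relation

example = DocREExample(
    tokens="John Smith founded Acme Corp in 1999 .".split(),
    mentions=[Mention("E1", 0, 2), Mention("E2", 3, 5)],
    labels={("E1", "E2"): "founder_of"},
)
```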
1 code implementation • 31 Oct 2024 • Xiusheng Huang, Jiaxiang Liu, Yequan Wang, Kang Liu
To understand why the edited model's performance declines and to improve the editing method, this work examines the underlying causes from both data and model perspectives.
1 code implementation • 31 Oct 2024 • Xiusheng Huang, Yequan Wang, Jun Zhao, Kang Liu
Knowledge editing technology is crucial for maintaining the accuracy and timeliness of large language models (LLMs).
1 code implementation • 14 Apr 2023 • Yiqun Yao, Siqi Fan, Xiusheng Huang, Xuezhi Fang, Xiang Li, Ziyi Ni, Xin Jiang, Xuying Meng, Peng Han, Shuo Shang, Kang Liu, Aixin Sun, Yequan Wang
With around 14% of the one-time pre-training cost, we can accurately forecast the loss of models with up to 52B parameters.
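The abstract does not detail the forecasting procedure here; a common way to realize such loss forecasting is to fit a power-law scaling curve on small pilot runs and extrapolate to the target size. The snippet below is a generic sketch of that extrapolation step only (the sizes and losses are invented), not the paper's implementation:

```python
import numpy as np
from scipy.optimize import curve_fit

# Generic sketch: fit a power-law loss curve L(N) = a * N**(-alpha) + c
# to final losses measured on small pilot models, then extrapolate to a
# larger parameter count. All data points below are invented.

def power_law(n_params, a, alpha, c):
    return a * n_params ** (-alpha) + c

sizes = np.array([1e8, 3e8, 1e9, 3e9])       # parameter counts of pilot runs
losses = np.array([3.20, 2.95, 2.72, 2.55])  # measured final losses (invented)

popt, _ = curve_fit(power_law, sizes, losses, p0=[10.0, 0.1, 1.5], maxfev=10000)
print("forecast loss at 52B params:", power_law(52e9, *popt))
```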
1 code implementation • 8 Dec 2021 • Yixuan Weng, Fei Xia, Bin Li, Xiusheng Huang, Shizhu He
To address the above issue, this paper proposes a new method for acronym disambiguation, named ADBCMM, which can significantly improve performance in low-resource languages by building counterfactuals and multilingual mixing.
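The abstract does not spell out the construction; as a loose, hypothetical illustration of "building counterfactuals" for acronym disambiguation, one can pair the same context with wrong candidate expansions to form hard negatives (the dictionary and sentence below are invented, and this is not ADBCMM's actual recipe):

```python
# Hypothetical sketch of counterfactual sample construction for acronym
# disambiguation: the same context is paired with wrong expansions from
# the candidate dictionary to form negative ("counterfactual") samples.

expansion_dict = {
    "CNN": ["convolutional neural network", "Cable News Network"],
}

def make_samples(sentence: str, acronym: str, gold: str):
    samples = [(sentence, gold, 1)]  # positive pair
    for candidate in expansion_dict[acronym]:
        if candidate != gold:
            samples.append((sentence, candidate, 0))  # counterfactual negative
    return samples

print(make_samples("The CNN reached 95% accuracy on ImageNet.",
                   "CNN", "convolutional neural network"))
```

Multilingual mixing would then combine such samples across languages so that low-resource languages benefit from higher-resource training data.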
no code implementations • 29 Nov 2021 • Bin Li, Fei Xia, Yixuan Weng, Xiusheng Huang, Bin Sun
In this paper, we propose SimCLAD, a Simple framework for Contrastive Learning of Acronym Disambiguation, to better capture acronym meanings.
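The abstract names contrastive learning but not the exact objective; below is a minimal, generic InfoNCE-style sketch (in PyTorch, with invented shapes and names) of contrasting an acronym's context embedding against candidate long-form embeddings. It is not SimCLAD's actual loss:

```python
import torch
import torch.nn.functional as F

# Generic InfoNCE-style contrastive loss: pull the acronym-context
# embedding toward the correct long-form embedding and push it away
# from the other candidate expansions.

def contrastive_loss(context_emb, candidate_embs, gold_index, temperature=0.07):
    # context_emb: (d,) embedding of the sentence containing the acronym
    # candidate_embs: (k, d) embeddings of the k candidate long forms
    context = F.normalize(context_emb, dim=-1)
    candidates = F.normalize(candidate_embs, dim=-1)
    logits = candidates @ context / temperature  # (k,) scaled cosine similarities
    return F.cross_entropy(logits.unsqueeze(0), torch.tensor([gold_index]))

loss = contrastive_loss(torch.randn(128), torch.randn(5, 128), gold_index=2)
```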
no code implementations • 29 Nov 2021 • Bin Li, Fei Xia, Yixuan Weng, Xiusheng Huang, Bin Sun, Shutao Li
In this paper, we propose a Prompt-based Sequence Generation (PSG) method for the acronym extraction task.
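As a hedged sketch of the general idea (not PSG's actual prompt, model, or output format), acronym extraction can be cast as prompted sequence generation with an off-the-shelf seq2seq model; a pretrained t5-small would still need fine-tuning on labeled acronym data before its generations are useful:

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

# Generic sketch: cast acronym extraction as prompted text generation.
# The prompt template and target format are illustrative, not PSG's.

tokenizer = T5Tokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

sentence = "Natural language processing (NLP) powers modern assistants."
prompt = f"extract acronyms and their long forms: {sentence}"

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```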