1 code implementation • COLING 2022 • Qingyu Tan, Ruidan He, Lidong Bing, Hwee Tou Ng
While there is much research on cross-domain text classification, most existing approaches focus on one-to-one or many-to-one domain adaptation.
1 code implementation • 1 Dec 2023 • Xuan-Phi Nguyen, Wenxuan Zhang, Xin Li, Mahani Aljunied, Zhiqiang Hu, Chenhui Shen, Yew Ken Chia, Xingxuan Li, Jianyu Wang, Qingyu Tan, Liying Cheng, Guanzheng Chen, Yue Deng, Sen Yang, Chaoqun Liu, Hang Zhang, Lidong Bing
Despite the remarkable achievements of large language models (LLMs) in various tasks, there remains a linguistic bias that favors high-resource languages, such as English, often at the expense of low-resource and regional languages.
1 code implementation • 16 Nov 2023 • Qingyu Tan, Hwee Tou Ng, Lidong Bing
Therefore, it is crucial for LLMs to understand the concept of temporal knowledge.
1 code implementation • 16 Jun 2023 • Qingyu Tan, Lu Xu, Lidong Bing, Hwee Tou Ng
We conducted experiments on document-level and biomedical relation extraction datasets. The results show that our proposed self-training framework consistently outperforms existing competitive methods on the Re-DocRED and ChemDisgene datasets when the training data are incompletely annotated.
1 code implementation • 15 Jun 2023 • Qingyu Tan, Hwee Tou Ng, Lidong Bing
In this paper, we introduce a comprehensive probing dataset, TempReason, to evaluate the temporal reasoning capability of large language models.
1 code implementation • 24 May 2023 • Xingxuan Li, Liying Cheng, Qingyu Tan, Hwee Tou Ng, Shafiq Joty, Lidong Bing
The temporal aspect is a significant dimension of our reality.
3 code implementations • 25 May 2022 • Qingyu Tan, Lu Xu, Lidong Bing, Hwee Tou Ng, Sharifah Mahani Aljunied
We analyze the causes and effects of the overwhelming false negative problem in the DocRED dataset.
1 code implementation • Findings (ACL) 2022 • Qingyu Tan, Ruidan He, Lidong Bing, Hwee Tou Ng
Our model consistently outperforms strong baselines, and its performance exceeds the previous SOTA by 1.36 F1 and 1.46 Ign_F1 on the DocRED leaderboard.
Ranked #2 on Relation Extraction on DocRED
Tasks: Document-level Relation Extraction, Knowledge Distillation +2
no code implementations • ACL 2021 • Ruidan He, Linlin Liu, Hai Ye, Qingyu Tan, Bosheng Ding, Liying Cheng, Jia-Wei Low, Lidong Bing, Luo Si
It works by adding lightweight adapter modules to a pretrained language model (PrLM) and updating only the parameters of the adapter modules when learning a downstream task.
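As a rough illustration of this adapter-tuning idea, here is a minimal PyTorch sketch; the class name Adapter, the bottleneck_dim argument, and the freezing snippet in the comments are assumptions made for illustration, not the paper's implementation.

```python
# Minimal bottleneck-adapter sketch (assumed names; not the paper's code).
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter inserted after a transformer sub-layer."""
    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)  # project down
        self.up = nn.Linear(bottleneck_dim, hidden_dim)    # project back up
        self.act = nn.GELU()

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Residual connection preserves the frozen PrLM representation.
        return hidden_states + self.up(self.act(self.down(hidden_states)))

# When learning a downstream task, the PrLM weights stay frozen and only
# the adapter parameters are updated, e.g.:
#   for p in prlm.parameters():    p.requires_grad = False
#   for p in adapter.parameters(): p.requires_grad = True
```

Because only the small adapter modules are trained, the number of task-specific parameters is a small fraction of the full PrLM.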
2 code implementations • EMNLP 2020 • Hai Ye, Qingyu Tan, Ruidan He, Juntao Li, Hwee Tou Ng, Lidong Bing
To improve the robustness of self-training, in this paper we present class-aware feature self-distillation (CFd) to learn discriminative features from PrLMs: PrLM features are self-distilled into a feature adaptation module, and features from the same class are clustered more tightly.
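As a rough illustration of the two ingredients described above (distilling PrLM features into a feature adaptation module, and clustering same-class features more tightly), here is a minimal PyTorch sketch; the names FeatureAdapter, self_distillation_loss, and class_clustering_loss are assumptions, not the released CFd code.

```python
# Sketch of the two loss terms (assumed names; not the released CFd code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureAdapter(nn.Module):
    """Feature adaptation module that PrLM features are distilled into."""
    def __init__(self, hidden_dim: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )

    def forward(self, prlm_features: torch.Tensor) -> torch.Tensor:
        return self.mlp(prlm_features)

def self_distillation_loss(adapted: torch.Tensor, prlm_features: torch.Tensor) -> torch.Tensor:
    # Distill the (detached) PrLM features into the adaptation module's output.
    return F.mse_loss(adapted, prlm_features.detach())

def class_clustering_loss(adapted: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # Pull features of the same class toward their class centroid.
    classes = labels.unique()
    loss = adapted.new_zeros(())
    for c in classes:
        members = adapted[labels == c]
        centroid = members.mean(dim=0, keepdim=True)
        loss = loss + ((members - centroid) ** 2).sum(dim=-1).mean()
    return loss / classes.numel()
```

In training, these two terms would be combined (optionally weighted) with the task loss, so that the adapted features stay close to the PrLM features while same-class examples form tighter clusters.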