Search Results for author: Qingyu Tan

Found 10 papers, 8 papers with code

SeaLLMs -- Large Language Models for Southeast Asia

1 code implementation • 1 Dec 2023 Xuan-Phi Nguyen, Wenxuan Zhang, Xin Li, Mahani Aljunied, Qingyu Tan, Liying Cheng, Guanzheng Chen, Yue Deng, Sen Yang, Chaoqun Liu, Hang Zhang, Lidong Bing

Despite the remarkable achievements of large language models (LLMs) in various tasks, there remains a linguistic bias that favors high-resource languages, such as English, often at the expense of low-resource and regional languages.

Instruction Following

Class-Adaptive Self-Training for Relation Extraction with Incompletely Annotated Training Data

1 code implementation • 16 Jun 2023 Qingyu Tan, Lu Xu, Lidong Bing, Hwee Tou Ng

We conducted experiments on document-level and biomedical relation extraction datasets, and the results showed that our proposed self-training framework consistently outperforms existing competitive methods on the Re-DocRED and ChemDisgene datasets when the training data are incompletely annotated.

Relation • Relation Extraction +1
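
The abstract above turns on one idea: when pseudo-labeling incompletely annotated instances, use a per-class acceptance criterion instead of a single global confidence threshold. The sketch below illustrates that general mechanism in Python; the function name, thresholds, and data are hypothetical and this is not the paper's exact self-training procedure.

```python
# Illustrative sketch of self-training with per-class confidence thresholds.
# Thresholds, class layout, and the abstain convention are assumptions, not
# the method described in the paper.
import numpy as np

def pseudo_label(probs: np.ndarray, class_thresholds: np.ndarray) -> np.ndarray:
    """Assign a pseudo-label only if the top class's probability exceeds that
    class's own threshold; return -1 (abstain) otherwise."""
    best_class = probs.argmax(axis=1)
    best_prob = probs.max(axis=1)
    accept = best_prob >= class_thresholds[best_class]
    return np.where(accept, best_class, -1)

# Example: 3 relation classes, with rarer classes given lower thresholds so
# they are not drowned out by the majority (e.g., no-relation) class.
probs = np.array([[0.7, 0.2, 0.1],
                  [0.4, 0.5, 0.1],
                  [0.3, 0.3, 0.4]])
thresholds = np.array([0.9, 0.45, 0.35])
print(pseudo_label(probs, thresholds))  # -> [-1  1  2]
```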

Towards Benchmarking and Improving the Temporal Reasoning Capability of Large Language Models

1 code implementation • 15 Jun 2023 Qingyu Tan, Hwee Tou Ng, Lidong Bing

In this paper, we introduce a comprehensive probing dataset, TempReason, to evaluate the temporal reasoning capability of large language models.

Benchmarking • Question Answering

Unlocking Temporal Question Answering for Large Language Models Using Code Execution

1 code implementation • 24 May 2023 Xingxuan Li, Liying Cheng, Qingyu Tan, Hwee Tou Ng, Shafiq Joty, Lidong Bing

Our preliminary experiments show that generating intermediate reasoning steps does not always boost the performance of complex temporal question-answering tasks.

Logical Reasoning • Math +1
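
The entry above hinges on delegating temporal computation to executed code rather than to free-form intermediate reasoning. A minimal sketch of that idea follows; the fact table, question, and `who_held_role` helper are made up for illustration and are not the pipeline used in the paper.

```python
# Toy illustration: answer a temporal question by explicit date arithmetic
# over a small fact table, instead of natural-language reasoning.
from datetime import date

# Hypothetical facts: (entity, role, start, end).
facts = [("Alice", "CEO of AcmeCorp", date(2015, 3, 1), date(2019, 6, 30)),
         ("Bob",   "CEO of AcmeCorp", date(2019, 7, 1), date(2023, 1, 15))]

def who_held_role(role: str, when: date) -> str:
    """Return the entity holding `role` on the given date, if any."""
    for entity, r, start, end in facts:
        if r == role and start <= when <= end:
            return entity
    return "unknown"

# "Who was CEO of AcmeCorp in January 2021?"
print(who_held_role("CEO of AcmeCorp", date(2021, 1, 15)))  # -> Bob
```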

Document-Level Relation Extraction with Adaptive Focal Loss and Knowledge Distillation

1 code implementation Findings (ACL) 2022 Qingyu Tan, Ruidan He, Lidong Bing, Hwee Tou Ng

Our model consistently outperforms strong baselines and its performance exceeds the previous SOTA by 1.36 F1 and 1.46 Ign_F1 score on the DocRED leaderboard.

Document-level Relation Extraction • Knowledge Distillation +2
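
As a rough illustration of the loss side of the method above, here is a plain binary focal loss for multi-label relation classification in PyTorch. The adaptive variant in the paper reweights long-tail relation classes differently, so treat this as a sketch of the underlying objective, not the paper's exact formulation.

```python
# Binary focal loss for multi-label relation classification (sketch).
import torch
import torch.nn.functional as F

def focal_loss(logits: torch.Tensor, labels: torch.Tensor, gamma: float = 2.0) -> torch.Tensor:
    """logits, labels: (batch, num_relations); labels are 0/1 multi-hot."""
    probs = torch.sigmoid(logits)
    # p_t is the probability assigned to the true outcome of each label.
    p_t = probs * labels + (1 - probs) * (1 - labels)
    ce = F.binary_cross_entropy_with_logits(logits, labels, reduction="none")
    # Down-weight easy examples via the (1 - p_t)^gamma modulating factor.
    return ((1 - p_t) ** gamma * ce).mean()

logits = torch.randn(4, 6)                  # 4 entity pairs, 6 relation types
labels = torch.randint(0, 2, (4, 6)).float()
print(focal_loss(logits, labels))
```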

On the Effectiveness of Adapter-based Tuning for Pretrained Language Model Adaptation

no code implementations ACL 2021 Ruidan He, Linlin Liu, Hai Ye, Qingyu Tan, Bosheng Ding, Liying Cheng, Jia-Wei Low, Lidong Bing, Luo Si

It works by adding light-weight adapter modules to a pretrained language model (PrLM) and only updating the parameters of adapter modules when learning on a downstream task.

Language Modelling
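
The abstract above describes the standard bottleneck-adapter mechanism: small modules are inserted into a frozen pretrained LM and only their parameters are trained. A minimal PyTorch sketch, with illustrative sizes and placement (not tied to the paper's exact configuration):

```python
# Minimal bottleneck adapter with a residual connection.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, hidden_size: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the pretrained representation intact.
        return hidden_states + self.up(torch.relu(self.down(hidden_states)))

# During adaptation, the PrLM is frozen and only adapters (and the task head)
# are updated, e.g.:
#   for p in pretrained_model.parameters():
#       p.requires_grad = False
adapter = Adapter()
x = torch.randn(2, 10, 768)   # (batch, seq_len, hidden)
print(adapter(x).shape)       # torch.Size([2, 10, 768])
```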

Feature Adaptation of Pre-Trained Language Models across Languages and Domains with Robust Self-Training

2 code implementations EMNLP 2020 Hai Ye, Qingyu Tan, Ruidan He, Juntao Li, Hwee Tou Ng, Lidong Bing

To improve the robustness of self-training, in this paper we present class-aware feature self-distillation (CFd) to learn discriminative features from PrLMs, in which PrLM features are self-distilled into a feature adaptation module and the features from the same class are more tightly clustered.

Text Classification • Unsupervised Domain Adaptation
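
A rough sketch of the two ingredients named in the abstract above: distilling PrLM features into a feature-adaptation module, and pulling features of the same class closer together. The loss weighting and the clustering term here are assumptions for illustration, not the paper's exact CFd objective.

```python
# Sketch of a feature self-distillation loss plus a class-clustering term.
import torch
import torch.nn.functional as F

def cfd_style_loss(prlm_feats, adapted_feats, labels, alpha: float = 1.0):
    """prlm_feats, adapted_feats: (batch, dim); labels: (batch,) int class ids."""
    # (1) Self-distillation: match adapted features to frozen PrLM features.
    distill = F.mse_loss(adapted_feats, prlm_feats.detach())
    # (2) Clustering: pull each adapted feature toward its class mean.
    cluster = 0.0
    for c in labels.unique():
        members = adapted_feats[labels == c]
        cluster = cluster + ((members - members.mean(dim=0)) ** 2).mean()
    return distill + alpha * cluster

prlm_feats = torch.randn(8, 128)
adapted_feats = torch.randn(8, 128, requires_grad=True)
labels = torch.randint(0, 3, (8,))
print(cfd_style_loss(prlm_feats, adapted_feats, labels))
```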
