1 code implementation • 2 Sep 2024 • Hai Ye, Hwee Tou Ng
To enhance the reliability of LLMs in following instructions, we propose the study of selective instruction following, whereby the system declines to execute instructions if the anticipated response quality is low.
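A minimal sketch of the selective-execution idea: answer only when an anticipated-quality score clears a threshold, otherwise abstain. The estimator `estimate_quality`, the threshold `tau`, and the draft-then-score flow are illustrative assumptions, not the paper's actual method.

```python
# Sketch of selective instruction following: execute an instruction only
# when the anticipated response quality clears a threshold, else decline.
# `estimate_quality` is a hypothetical stand-in for a learned estimator.

def estimate_quality(instruction: str, draft_response: str) -> float:
    """Hypothetical quality estimator; a real system would use a
    learned model predicting response quality in [0, 1]."""
    return 0.5  # placeholder score

def selective_follow(instruction: str, generate, tau: float = 0.7) -> str:
    draft = generate(instruction)                  # draft a candidate response
    score = estimate_quality(instruction, draft)   # anticipated quality
    if score >= tau:                               # confident enough to answer
        return draft
    return "I must decline: my answer is unlikely to be reliable."

# Toy usage with a dummy generator
print(selective_follow("Summarize this paper.", lambda s: "A summary..."))
```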
no code implementations • 22 Aug 2024 • Hai Ye, Hwee Tou Ng
It employs a tree-based generation framework to enable an efficient sampling process, guiding the direction of generation through preferences and better exploring the sampling space with adaptive self-refinement.
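The following sketch illustrates the general shape of such a preference-guided tree search: candidates are expanded as tree nodes, a preference scorer decides which branches to grow, and weak candidates are refined rather than discarded. The functions `preference_score`, `expand`, and `refine` are stubs standing in for the paper's components.

```python
import random

# Illustrative tree-based sampling loop with preference guidance and
# self-refinement; the concrete scoring, expansion, and refinement
# procedures here are placeholders, not the paper's algorithm.

def preference_score(text: str) -> float:
    """Hypothetical preference model; returns a scalar score."""
    return random.random()

def expand(text: str, k: int = 2) -> list:
    """Hypothetical generator producing k continuations of `text`."""
    return [f"{text} -> cont{i}" for i in range(k)]

def refine(text: str) -> str:
    """Hypothetical self-refinement step that revises a weak candidate."""
    return text + " (refined)"

def tree_sample(prompt: str, depth: int = 3, width: int = 2, tau: float = 0.5) -> str:
    frontier = [prompt]
    for _ in range(depth):
        children = [c for node in frontier for c in expand(node, width)]
        # Refine candidates the preference model scores poorly.
        children = [c if preference_score(c) >= tau else refine(c) for c in children]
        # Keep the best-scoring branches to bound the search.
        frontier = sorted(children, key=preference_score, reverse=True)[:width]
    return frontier[0]

print(tree_sample("Write a proof sketch."))
```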
1 code implementation • ACL 2022 • Hai Ye, Hwee Tou Ng, Wenjuan Han
In conversational question answering (CQA), the task of question rewriting (QR) in context aims to rewrite a context-dependent question into an equivalent self-contained question that gives the same answer.
1 code implementation • 11 Jun 2023 • Hai Ye, Qizhe Xie, Hwee Tou Ng
In this work, we study multi-source test-time model adaptation from user feedback, where K distinct source models are available for adaptation.
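One simple way to combine K source models under user feedback is a multiplicative-weights aggregation, sketched below. This is an illustrative aggregation rule under that assumption, not necessarily the paper's exact algorithm.

```python
# Sketch of combining K source models at test time with user feedback,
# using multiplicative-weight updates (illustrative, not the paper's
# specific method).

def aggregate(predictions, weights):
    """Weighted vote over the K models' predicted labels."""
    scores = {}
    for pred, w in zip(predictions, weights):
        scores[pred] = scores.get(pred, 0.0) + w
    return max(scores, key=scores.get)

def update_weights(weights, predictions, feedback_label, eta=0.5):
    """Downweight models that disagree with the user's feedback."""
    new = [w * (1.0 if p == feedback_label else eta)
           for w, p in zip(weights, predictions)]
    z = sum(new)
    return [w / z for w in new]

# Toy run with K = 3 stub models
models = [lambda x: "A", lambda x: "B", lambda x: "A"]
weights = [1 / 3, 1 / 3, 1 / 3]
preds = [m("input") for m in models]
print(aggregate(preds, weights))                       # combined prediction
weights = update_weights(weights, preds, feedback_label="A")
print(weights)                                         # agreeing models gain weight
```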
no code implementations • 25 Apr 2023 • Yi Su, Yixin Ji, Juntao Li, Hai Ye, Min Zhang
Accordingly, in this paper, we propose perturbation consistency learning (PCL), a simple test-time adaptation method that encourages the model to make stable predictions on samples with distribution shifts.
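A minimal sketch of a perturbation-consistency objective: penalize disagreement between the model's predictions on an input and on a perturbed copy of it. The dropout-noise perturbation and symmetric-KL loss below are generic choices for illustration; the paper's exact perturbation and loss may differ.

```python
import torch
import torch.nn.functional as F

# Perturbation-consistency sketch: the model should predict the same
# distribution for an input and a noise-perturbed copy of it.

def pcl_loss(model, x):
    logits_clean = model(x)                               # prediction on the input
    x_perturbed = F.dropout(x, p=0.1, training=True)      # simple noise injection
    logits_noisy = model(x_perturbed)                     # prediction on the copy
    # Symmetric KL between the two predictive distributions.
    p = F.log_softmax(logits_clean, dim=-1)
    q = F.log_softmax(logits_noisy, dim=-1)
    return 0.5 * (F.kl_div(p, q, log_target=True, reduction="batchmean")
                  + F.kl_div(q, p, log_target=True, reduction="batchmean"))

# Toy usage: a linear "model" over 8-dim features
model = torch.nn.Linear(8, 3)
x = torch.randn(4, 8)
loss = pcl_loss(model, x)
loss.backward()  # adapt the model at test time by minimizing the loss
```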
1 code implementation • 9 Feb 2023 • Hai Ye, Yuyang Ding, Juntao Li, Hwee Tou Ng
To answer this question, we evaluate test-time adaptation (TTA) to improve a model after deployment.
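For readers unfamiliar with TTA, the sketch below shows one standard instance, entropy minimization in the style of Tent (Wang et al., 2021): the deployed model is updated on unlabeled test batches by making its own predictions more confident. This is a generic TTA baseline for illustration, not this paper's specific evaluation protocol.

```python
import torch
import torch.nn.functional as F

# One illustrative test-time adaptation step: minimize the entropy of
# the model's predictions on an unlabeled test batch.

def tta_step(model, x, optimizer):
    logits = model(x)
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * torch.log(probs + 1e-8)).sum(dim=-1).mean()
    optimizer.zero_grad()
    entropy.backward()            # sharpen predictions on test data
    optimizer.step()
    return entropy.item()

model = torch.nn.Linear(8, 3)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
x_test = torch.randn(16, 8)       # an unlabeled test batch
print(tta_step(model, x_test, optimizer))
```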
no code implementations • ACL 2021 • Ruidan He, Linlin Liu, Hai Ye, Qingyu Tan, Bosheng Ding, Liying Cheng, Jia-Wei Low, Lidong Bing, Luo Si
It works by adding lightweight adapter modules to a pretrained language model (PrLM) and only updating the parameters of the adapter modules when learning on a downstream task.
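A minimal sketch of the bottleneck-adapter pattern (in the style of Houlsby et al., 2019): a small down-project/up-project module with a residual connection, attached to a frozen pretrained layer so that only adapter parameters are trained. Sizes and placement are illustrative.

```python
import torch
import torch.nn as nn

# Bottleneck adapter: down-project, nonlinearity, up-project, residual add.

class Adapter(nn.Module):
    def __init__(self, hidden_size: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)   # down-projection
        self.up = nn.Linear(bottleneck, hidden_size)     # up-projection
        self.act = nn.ReLU()

    def forward(self, h):
        return h + self.up(self.act(self.down(h)))       # residual connection

# Freeze the pretrained model; train only the adapter's parameters.
prlm = nn.TransformerEncoderLayer(d_model=768, nhead=12)  # stand-in PrLM layer
for p in prlm.parameters():
    p.requires_grad = False
adapter = Adapter(hidden_size=768)

h = torch.randn(2, 10, 768)
out = adapter(prlm(h))               # adapter applied after the frozen layer
print(sum(p.numel() for p in adapter.parameters()), "trainable parameters")
```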
1 code implementation • 23 Nov 2020 • Juntao Li, Ruidan He, Hai Ye, Hwee Tou Ng, Lidong Bing, Rui Yan
Experimental results show that our proposed method achieves significant performance improvements over the state-of-the-art pretrained cross-lingual language model in the cross-lingual cross-domain (CLCD) setting.
2 code implementations • EMNLP 2020 • Hai Ye, Qingyu Tan, Ruidan He, Juntao Li, Hwee Tou Ng, Lidong Bing
To improve the robustness of self-training, in this paper we present class-aware feature self-distillation (CFd), which learns discriminative features from PrLMs: PrLM features are self-distilled into a feature adaptation module, and features from the same class are clustered more tightly.
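The sketch below illustrates the two ingredients named above under simplifying assumptions: a distillation term keeping the adapted features close to the PrLM's, plus a clustering term pulling same-class features toward their class mean. The module architecture, loss weights, and use of (pseudo-)labels are illustrative, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

adapt = nn.Sequential(nn.Linear(768, 768), nn.Tanh())  # feature adaptation module

def cfd_loss(prlm_feats, labels, lam=0.1):
    feats = adapt(prlm_feats)
    distill = F.mse_loss(feats, prlm_feats.detach())    # self-distillation term
    cluster = 0.0
    for c in labels.unique():                           # class-aware clustering term
        class_feats = feats[labels == c]
        center = class_feats.mean(dim=0, keepdim=True)
        cluster = cluster + ((class_feats - center) ** 2).sum(dim=-1).mean()
    return distill + lam * cluster / len(labels.unique())

prlm_feats = torch.randn(8, 768)        # e.g., [CLS] features from a frozen PrLM
labels = torch.randint(0, 2, (8,))      # (pseudo-)labels for the batch
loss = cfd_loss(prlm_feats, labels)
loss.backward()
```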
no code implementations • 25 Jul 2019 • Hai Ye, Zhunchen Luo
Furthermore, to deal with the problem of class imbalance in distantly supervised relation extraction, we adopt cost-sensitive learning to rescale the costs of the positive and negative labels.
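Cost rescaling of this kind is commonly realized as a class-weighted loss; a minimal sketch follows, with inverse-frequency weights as an assumed (illustrative) weighting scheme.

```python
import torch
import torch.nn as nn

# Cost-sensitive learning sketch: rescale the loss contribution of the
# positive vs. negative class with per-class weights (here inverse
# class frequency, an illustrative choice).

num_neg, num_pos = 900, 100                     # imbalanced label counts
weights = torch.tensor([1.0 / num_neg, 1.0 / num_pos])
weights = weights / weights.sum()               # normalized class costs
criterion = nn.CrossEntropyLoss(weight=weights)

logits = torch.randn(16, 2, requires_grad=True)  # model outputs for a batch
labels = torch.randint(0, 2, (16,))
loss = criterion(logits, labels)                # misclassifying the rare
loss.backward()                                 # positive class costs more
```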
no code implementations • ACL 2019 • Hai Ye, Wenjie Li, Lu Wang
Semantic parsing aims to transform natural language (NL) utterances into formal meaning representations (MRs), whereas an NL generator achieves the reverse: producing an NL description for some given MRs.
no code implementations • EMNLP 2018 • Hai Ye, Lu Wang
We study the problem of generating keyphrases that summarize the key points for a given document.
no code implementations • COLING 2018 • Xin Jiang, Hai Ye, Zhunchen Luo, WenHan Chao, Wenjia Ma
This paper proposes a neural-based system to address the interpretability problem inherent in text classification, especially in the charge prediction task.
1 code implementation • NAACL 2018 • Hai Ye, Xin Jiang, Zhunchen Luo, WenHan Chao
In this paper, we propose to study the problem of court view generation from the fact description in a criminal case.
1 code implementation • ACL 2017 • Hai Ye, WenHan Chao, Zhunchen Luo, Zhoujun Li
Exploiting class ties among the relations of an entity tuple is promising for distantly supervised relation extraction.