Search Results for author: Hai Ye

Found 13 papers, 7 papers with code

On the Robustness of Question Rewriting Systems to Questions of Varying Hardness

1 code implementation • ACL 2022 • Hai Ye, Hwee Tou Ng, Wenjuan Han

In conversational question answering (CQA), the task of question rewriting (QR) in context aims to rewrite a context-dependent question into an equivalent self-contained question that gives the same answer. (A constructed example follows below.)

Question Rewriting
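To make the task definition above concrete, here is a constructed context/question/rewrite triple. This example is illustrative only and is not drawn from the paper:

```python
# Constructed illustration of question rewriting (QR) in context;
# not an example from the paper.
qr_example = {
    "context": [
        "Q: Who wrote Hamlet?",
        "A: William Shakespeare.",
    ],
    "question": "When was he born?",                   # context-dependent
    "rewrite": "When was William Shakespeare born?",   # self-contained
}
print(qr_example["rewrite"])
```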

Multi-Source Test-Time Adaptation as Dueling Bandits for Extractive Question Answering

1 code implementation • 11 Jun 2023 • Hai Ye, Qizhe Xie, Hwee Tou Ng

In this work, we study multi-source test-time model adaptation from user feedback, where K distinct models are established for adaptation. (A generic dueling-bandit sketch of model selection follows below.)

Decision Making • Extractive Question-Answering • +2
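The entry above frames multi-source adaptation as a dueling-bandit problem over K models. The sketch below is a generic epsilon-greedy dueling-bandit loop driven by pairwise user feedback; it illustrates the general idea only and is not the paper's algorithm:

```python
import random
from collections import defaultdict

def duel_select(models, get_user_preference, rounds=1000, eps=0.1):
    """Generic epsilon-greedy dueling-bandit loop (illustrative only).

    models: list of K candidate QA models.
    get_user_preference(i, j): returns i or j, the index whose answer
        the user preferred on the current question (the "duel" outcome).
    """
    wins = defaultdict(int)    # per-model duel wins
    plays = defaultdict(int)   # per-model duel counts
    win_rate = lambda k: wins[k] / max(plays[k], 1)
    for _ in range(rounds):
        if random.random() < eps:    # explore: duel a random pair
            i, j = random.sample(range(len(models)), 2)
        else:                        # exploit: duel the top two by win rate
            i, j = sorted(range(len(models)), key=win_rate, reverse=True)[:2]
        winner = get_user_preference(i, j)
        wins[winner] += 1
        plays[i] += 1
        plays[j] += 1
    return max(range(len(models)), key=win_rate)  # best model so far
```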

Test-Time Adaptation with Perturbation Consistency Learning

no code implementations • 25 Apr 2023 • Yi Su, Yixin Ji, Juntao Li, Hai Ye, Min Zhang

Accordingly, in this paper, we propose perturbation consistency learning (PCL), a simple test-time adaptation method that encourages the model to make stable predictions for samples under distribution shift. (A minimal consistency-loss sketch follows below.)

Adversarial Robustness • Pseudo Label • +1
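A minimal sketch of the perturbation-consistency idea described above. The choice of dropout as the perturbation and of KL divergence as the consistency objective are assumptions made for illustration, not necessarily the paper's exact design:

```python
import torch
import torch.nn.functional as F

def pcl_style_loss(model, batch):
    """Consistency loss between predictions on an input and a perturbed
    view of it (illustrative; dropout serves as the perturbation here)."""
    model.eval()
    with torch.no_grad():
        clean_logits = model(batch)       # prediction on the original input
    model.train()                         # re-enables dropout = perturbation
    perturbed_logits = model(batch)
    # Penalize divergence between the two predictive distributions.
    return F.kl_div(
        F.log_softmax(perturbed_logits, dim=-1),
        F.softmax(clean_logits, dim=-1),
        reduction="batchmean",
    )
```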

On the Effectiveness of Adapter-based Tuning for Pretrained Language Model Adaptation

no code implementations • ACL 2021 • Ruidan He, Linlin Liu, Hai Ye, Qingyu Tan, Bosheng Ding, Liying Cheng, Jia-Wei Low, Lidong Bing, Luo Si

It works by adding lightweight adapter modules to a pretrained language model (PrLM) and updating only the parameters of the adapter modules when learning a downstream task. (A bottleneck adapter sketch follows below.)

Language Modelling
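The mechanism described above (small modules inserted into a frozen PrLM, with only the adapters updated) is commonly realized as a bottleneck layer with a residual connection. A minimal sketch, with illustrative layer sizes:

```python
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, plus a
    residual connection. Sizes are illustrative, not the paper's config."""
    def __init__(self, hidden_size=768, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.GELU()

    def forward(self, x):
        # Residual connection preserves the frozen PrLM representation.
        return x + self.up(self.act(self.down(x)))

# During adapter-based tuning the PrLM weights stay frozen and only the
# adapters (and usually the task head) receive gradients, e.g.:
#   for p in prlm.parameters():
#       p.requires_grad = False
```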

Unsupervised Domain Adaptation of a Pretrained Cross-Lingual Language Model

1 code implementation • 23 Nov 2020 • Juntao Li, Ruidan He, Hai Ye, Hwee Tou Ng, Lidong Bing, Rui Yan

Experimental results show that our proposed method achieves significant performance improvements over the state-of-the-art pretrained cross-lingual language model in the CLCD setting.

Language Modelling • Mutual Information Estimation • +1

Feature Adaptation of Pre-Trained Language Models across Languages and Domains with Robust Self-Training

2 code implementations • EMNLP 2020 • Hai Ye, Qingyu Tan, Ruidan He, Juntao Li, Hwee Tou Ng, Lidong Bing

To improve the robustness of self-training, in this paper we present class-aware feature self-distillation (CFd) to learn discriminative features from PrLMs: PrLM features are self-distilled into a feature adaptation module, and features from the same class are clustered more tightly. (An illustrative loss sketch follows below.)

Text Classification • Unsupervised Domain Adaptation
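One hedged reading of the two ingredients named in the excerpt above: a feature-matching (self-distillation) term plus a pull-to-class-mean clustering term. The exact losses in the paper may differ; this sketch only mirrors the stated structure:

```python
import torch
import torch.nn.functional as F

def cfd_style_loss(prlm_feats, adapted_feats, labels):
    """Illustrative take on class-aware feature self-distillation:
    (1) match the adaptation module's features to the (detached) PrLM
    features, (2) pull each feature toward its class mean.
    Not the paper's exact formulation."""
    distill = F.mse_loss(adapted_feats, prlm_feats.detach())
    cluster = 0.0
    for c in labels.unique():
        mask = labels == c
        class_mean = adapted_feats[mask].mean(dim=0, keepdim=True)
        cluster = cluster + ((adapted_feats[mask] - class_mean) ** 2).mean()
    return distill + cluster
```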

Deep Ranking Based Cost-sensitive Multi-label Learning for Distant Supervision Relation Extraction

no code implementations • 25 Jul 2019 • Hai Ye, Zhunchen Luo

Furthermore, to deal with the class imbalance problem in distant supervision relation extraction, we further adopt cost-sensitive learning to rescale the costs of the positive and negative labels. (A weighted-loss sketch follows below.)

Information Retrieval • Multi-Label Learning • +3
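The cost rescaling mentioned above can be illustrated with a weighted multi-label objective. The sketch below uses PyTorch's `pos_weight` argument to binary cross-entropy with logits to up-weight positive labels per class; the weight value and label count are assumptions, not the paper's cost scheme:

```python
import torch
import torch.nn as nn

num_relations = 50                        # illustrative relation-label count
# Up-weight positive labels to counter class imbalance; the uniform 5.0
# here is an assumed value, not a cost learned or reported in the paper.
pos_weight = torch.full((num_relations,), 5.0)
criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)

logits = torch.randn(8, num_relations)    # model scores for a batch of 8
targets = torch.randint(0, 2, (8, num_relations)).float()
loss = criterion(logits, targets)
```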

Jointly Learning Semantic Parser and Natural Language Generator via Dual Information Maximization

no code implementations • ACL 2019 • Hai Ye, Wenjie Li, Lu Wang

Semantic parsing aims to transform natural language (NL) utterances into formal meaning representations (MRs), whereas an NL generator achieves the reverse: producing an NL description for some given MRs. (A constructed NL/MR pair follows below.)

Code Generation • Dialogue Management • +2
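A constructed NL/MR pair (not from the paper, and using SQL as an example meaning representation) that makes the two dual directions concrete:

```python
# Constructed NL <-> MR pair illustrating the two dual tasks.
nl = "show me flights from Boston to Denver"
mr = "SELECT * FROM flights WHERE origin = 'Boston' AND dest = 'Denver'"
# semantic parser:             nl -> mr
# natural language generator:  mr -> nl
```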

Interpretable Rationale Augmented Charge Prediction System

no code implementations • COLING 2018 • Xin Jiang, Hai Ye, Zhunchen Luo, WenHan Chao, Wenjia Ma

This paper proposes a neural-based system to address the interpretability problem in text classification, especially in the charge prediction task.

General Classification • reinforcement-learning • +3

Jointly Extracting Relations with Class Ties via Effective Deep Ranking

1 code implementation • ACL 2017 • Hai Ye, WenHan Chao, Zhunchen Luo, Zhoujun Li

Exploiting class ties between relations of one entity tuple is promising for distantly supervised relation extraction. (A generic ranking-loss sketch follows below.)

Relation • Relation Extraction
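The "deep ranking" in the title can be illustrated with a generic pairwise margin objective that scores an entity tuple's true relations above incorrect ones. This is a sketch of the general technique only, not the paper's exact loss:

```python
import torch

def pairwise_ranking_loss(scores, pos_labels, neg_labels, margin=1.0):
    """Generic margin ranking loss for multi-label relation extraction:
    every positive relation should outscore every negative one by at
    least `margin` (illustrative, not the paper's exact objective)."""
    pos = scores[pos_labels].unsqueeze(1)   # shape (P, 1)
    neg = scores[neg_labels].unsqueeze(0)   # shape (1, N)
    # Hinge on all positive/negative pairs via broadcasting.
    return torch.clamp(margin - pos + neg, min=0).mean()

scores = torch.randn(10)                    # scores over 10 relation classes
loss = pairwise_ranking_loss(scores, pos_labels=[1, 4], neg_labels=[0, 2, 3])
```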
