Search Results for author: Ruidan He

Found 16 papers, 14 papers with code

IAM: A Comprehensive and Large-Scale Dataset for Integrated Argument Mining Tasks

1 code implementation · ACL 2022 · Liying Cheng, Lidong Bing, Ruidan He, Qian Yu, Yan Zhang, Luo Si

Traditionally, a debate requires a manual preparation process: reading many articles, selecting the claims, identifying the stances of the claims, seeking the evidence for the claims, and so on.

Tasks: Claim-Evidence Pair Extraction (CEPE), Claim Extraction with Stance Classification (CESC), +1

Document-Level Relation Extraction with Adaptive Focal Loss and Knowledge Distillation

1 code implementation · Findings (ACL) 2022 · Qingyu Tan, Ruidan He, Lidong Bing, Hwee Tou Ng

Our model consistently outperforms strong baselines, and its performance exceeds the previous SOTA by 1.36 F1 and 1.46 Ign_F1 on the DocRED leaderboard.

Tasks: Document-level Relation Extraction, Knowledge Distillation
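The adaptive focal loss named in this paper's title builds on the standard focal loss, which down-weights well-classified examples so that training focuses on hard, long-tailed relation classes. A minimal sketch of the standard (non-adaptive) focal loss, for illustration only; the paper's adaptive variant differs:

```python
import math

def focal_loss(probs, target, gamma=2.0):
    """Standard focal loss for a single example.

    probs  : softmax probabilities over relation classes
    target : index of the gold class
    gamma  : focusing parameter; gamma=0 recovers plain cross-entropy
    """
    p_t = probs[target]
    # (1 - p_t)^gamma shrinks the loss of confident correct predictions
    return -((1.0 - p_t) ** gamma) * math.log(p_t)

# A confident correct prediction contributes far less than an uncertain one.
easy = focal_loss([0.05, 0.9, 0.05], target=1)
hard = focal_loss([0.4, 0.3, 0.3], target=1)
```

The focusing term is what makes the loss useful for long-tailed relation distributions: frequent, easy relations stop dominating the gradient.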

Enhancing Multilingual Language Model with Massive Multilingual Knowledge Triples

1 code implementation · 22 Nov 2021 · Linlin Liu, Xin Li, Ruidan He, Lidong Bing, Shafiq Joty, Luo Si

In this work, we explore methods to make better use of the multilingual annotation and language agnostic property of KG triples, and present novel knowledge based multilingual language models (KMLMs) trained directly on the knowledge triples.

Tasks: Knowledge Graphs, Language Modelling, +7
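Training "directly on the knowledge triples" can be illustrated by linearizing a triple into a synthetic sentence and masking an entity, as in masked-language-model pretraining. A hedged sketch; the function name and template are illustrative, not the paper's actual pipeline:

```python
def triple_to_mlm_example(head, relation, tail, mask_token="[MASK]"):
    """Linearize a KG triple into a masked training example.

    Masks the tail entity so the model must predict it from the head
    and relation, turning the triple into an MLM example. Because KG
    triples are largely language agnostic, the same triple can supply
    training signal across languages.
    """
    masked = f"{head} {relation} {mask_token}"
    return masked, tail  # model input, gold label

inp, label = triple_to_mlm_example("Paris", "capital of", "France")
```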

On the Effectiveness of Adapter-based Tuning for Pretrained Language Model Adaptation

no code implementations · ACL 2021 · Ruidan He, Linlin Liu, Hai Ye, Qingyu Tan, Bosheng Ding, Liying Cheng, Jia-Wei Low, Lidong Bing, Luo Si

It works by adding lightweight adapter modules to a pretrained language model (PrLM) and updating only the adapter parameters when learning a downstream task.

Tasks: Language Modelling
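The adapter mechanism described above is commonly realized as a small bottleneck: down-project the hidden state, apply a nonlinearity, up-project, and add a residual connection, with only these small matrices trained. A NumPy sketch under that assumption; dimensions and initialization are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

class Adapter:
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual.

    Only these weights would be updated during downstream tuning;
    the surrounding PrLM layers stay frozen.
    """
    def __init__(self, hidden=768, bottleneck=64):
        self.w_down = rng.normal(0.0, 0.02, (hidden, bottleneck))
        self.w_up = rng.normal(0.0, 0.02, (bottleneck, hidden))

    def __call__(self, h):
        z = np.maximum(h @ self.w_down, 0.0)   # ReLU bottleneck
        return h + z @ self.w_up               # residual keeps PrLM output intact

adapter = Adapter()
h = rng.normal(size=(4, 768))                  # a batch of 4 token representations
out = adapter(h)
```

The appeal is the parameter count: two 768×64 matrices are far smaller than a single 768×768 layer, which is what makes adapter tuning cheap per task.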

Unsupervised Domain Adaptation of a Pretrained Cross-Lingual Language Model

1 code implementation · 23 Nov 2020 · Juntao Li, Ruidan He, Hai Ye, Hwee Tou Ng, Lidong Bing, Rui Yan

Experimental results show that our proposed method achieves significant performance improvements over the state-of-the-art pretrained cross-lingual language model in the CLCD setting.

Tasks: Language Modelling, Mutual Information Estimation, +1

An Unsupervised Sentence Embedding Method by Mutual Information Maximization

1 code implementation · EMNLP 2020 · Yan Zhang, Ruidan He, Zuozhu Liu, Kwan Hui Lim, Lidong Bing

However, SBERT is trained on corpora with high-quality labeled sentence pairs, which limits its application to tasks where labeled data is extremely scarce.

Tasks: Clustering, Self-Supervised Learning, +4

Feature Adaptation of Pre-Trained Language Models across Languages and Domains with Robust Self-Training

2 code implementations · EMNLP 2020 · Hai Ye, Qingyu Tan, Ruidan He, Juntao Li, Hwee Tou Ng, Lidong Bing

To improve the robustness of self-training, in this paper we present class-aware feature self-distillation (CFd) to learn discriminative features from PrLMs, in which PrLM features are self-distilled into a feature adaptation module and the features from the same class are more tightly clustered.

Tasks: Text Classification, Unsupervised Domain Adaptation
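The CFd objective described above combines two ideas: distill PrLM features into the adaptation module, and pull same-class features toward their class mean so clusters tighten. A rough NumPy sketch of such a combined loss; the weighting and exact terms are illustrative, not the paper's formulation:

```python
import numpy as np

def cfd_loss(adapted, prlm, labels, alpha=1.0):
    """Distill PrLM features and tighten same-class clusters.

    adapted : features from the feature-adaptation module, shape (n, d)
    prlm    : frozen PrLM features to self-distill from, shape (n, d)
    labels  : class index per example, shape (n,)
    """
    distill = np.mean((adapted - prlm) ** 2)         # feature self-distillation
    cluster = 0.0
    for c in np.unique(labels):
        members = adapted[labels == c]
        center = members.mean(axis=0)
        cluster += np.mean((members - center) ** 2)  # pull toward class mean
    return distill + alpha * cluster

x = np.array([[0.0, 0.0], [0.2, 0.0], [5.0, 5.0], [5.2, 5.0]])
loss = cfd_loss(x, x, np.array([0, 0, 1, 1]))
```

With `adapted` equal to `prlm`, only the clustering term contributes; moving the adapted features away from the PrLM features raises the distillation term.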

Adaptive Semi-supervised Learning for Cross-domain Sentiment Classification

1 code implementation · EMNLP 2018 · Ruidan He, Wee Sun Lee, Hwee Tou Ng, Daniel Dahlmeier

We consider the cross-domain sentiment classification problem, where a sentiment classifier learned on a source domain must generalize to a target domain.

Tasks: Classification, General Classification, +2

An Unsupervised Neural Attention Model for Aspect Extraction

3 code implementations · ACL 2017 · Ruidan He, Wee Sun Lee, Hwee Tou Ng, Daniel Dahlmeier

Unlike topic models which typically assume independently generated words, word embedding models encourage words that appear in similar contexts to be located close to each other in the embedding space.

Tasks: Aspect Extraction, Domain Adaptation, +2
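The property above (context-similar words sit close together in embedding space) is what lets an attention mechanism surface aspect words by weighting each word's vector by its relevance to the sentence. A simplified sketch that scores words by similarity to the average word vector; this stands in for the paper's learned attention and is not its actual model:

```python
import numpy as np

def attention_sentence_embedding(word_vecs):
    """Attention-weighted sentence embedding over word vectors.

    Words whose vectors align with the sentence's average context
    receive higher softmax weight, a crude proxy for learned attention.
    """
    avg = word_vecs.mean(axis=0)
    scores = word_vecs @ avg                 # unnormalized relevance
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                 # softmax attention
    return weights @ word_vecs               # weighted sentence embedding

# Two nearby vectors dominate the attention over the outlier.
vecs = np.array([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]])
emb = attention_sentence_embedding(vecs)
```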
