1 code implementation • EMNLP 2021 • Wenxuan Zhang, Ruidan He, Haiyun Peng, Lidong Bing, Wai Lam
Many efforts have been made to solve the aspect-based sentiment analysis (ABSA) task.
Aspect-Based Sentiment Analysis (ABSA)
1 code implementation • COLING 2022 • Qingyu Tan, Ruidan He, Lidong Bing, Hwee Tou Ng
While there is much research on cross-domain text classification, most existing approaches focus on one-to-one or many-to-one domain adaptation.
1 code implementation • ACL 2022 • Liying Cheng, Lidong Bing, Ruidan He, Qian Yu, Yan Zhang, Luo Si
Traditionally, a debate usually requires manual preparation, including reading many articles, selecting claims, identifying the stances of those claims, and seeking evidence for them.
Claim-Evidence Pair Extraction (CEPE)
Claim Extraction with Stance Classification (CESC)
1 code implementation • Findings (ACL) 2022 • Qingyu Tan, Ruidan He, Lidong Bing, Hwee Tou Ng
Our model consistently outperforms strong baselines, and its performance exceeds the previous SOTA by 1.36 F1 and 1.46 Ign_F1 on the DocRED leaderboard.
Ranked #2 on Relation Extraction on DocRED
Document-level Relation Extraction
Knowledge Distillation
1 code implementation • 22 Nov 2021 • Linlin Liu, Xin Li, Ruidan He, Lidong Bing, Shafiq Joty, Luo Si
In this work, we explore methods to make better use of the multilingual annotation and language agnostic property of KG triples, and present novel knowledge based multilingual language models (KMLMs) trained directly on the knowledge triples.
1 code implementation • ACL 2022 • Ran Zhou, Xin Li, Ruidan He, Lidong Bing, Erik Cambria, Luo Si, Chunyan Miao
Data augmentation is an effective solution to data scarcity in low-resource scenarios.
1 code implementation • ACL 2021 • Yan Zhang, Ruidan He, Zuozhu Liu, Lidong Bing, Haizhou Li
As high-quality labeled data is scarce, unsupervised sentence representation learning has attracted much attention.
no code implementations • ACL 2021 • Ruidan He, Linlin Liu, Hai Ye, Qingyu Tan, Bosheng Ding, Liying Cheng, Jia-Wei Low, Lidong Bing, Luo Si
It works by adding lightweight adapter modules to a pretrained language model (PrLM) and updating only the adapter parameters when learning a downstream task.
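The adapter idea described in this entry can be sketched as a small bottleneck module inserted on top of a frozen PrLM hidden state; only the bottleneck weights would be trained. This is a minimal hypothetical numpy sketch, not the paper's implementation — the hidden size, bottleneck size, and zero initialization of the up-projection are assumptions:

```python
import numpy as np

def adapter(h, W_down, W_up):
    """Bottleneck adapter: down-project, ReLU, up-project, residual add."""
    z = np.maximum(0.0, h @ W_down)  # non-linearity in the bottleneck
    return h + z @ W_up              # residual keeps the PrLM representation

rng = np.random.default_rng(0)
d, r = 8, 2                              # hidden / bottleneck sizes (assumed)
h = rng.normal(size=(1, d))              # a frozen PrLM hidden state
W_down = rng.normal(size=(d, r)) * 0.01  # trainable adapter weights
W_up = np.zeros((r, d))                  # zero init: adapter starts as identity

out = adapter(h, W_down, W_up)
print(np.allclose(out, h))  # True: with W_up at zero, the PrLM output passes through unchanged
```

The zero-initialized up-projection means the adapter initially computes the identity, so training starts from the pretrained model's behavior and only the small adapter matrices need gradient updates.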
1 code implementation • 23 Nov 2020 • Juntao Li, Ruidan He, Hai Ye, Hwee Tou Ng, Lidong Bing, Rui Yan
Experimental results show that our proposed method achieves significant performance improvements over the state-of-the-art pretrained cross-lingual language model in the CLCD setting.
1 code implementation • EMNLP 2020 • Yan Zhang, Ruidan He, Zuozhu Liu, Kwan Hui Lim, Lidong Bing
However, SBERT is trained on corpora with high-quality labeled sentence pairs, which limits its application to tasks where labeled data is extremely scarce.
Ranked #19 on Semantic Textual Similarity on STS15
2 code implementations • EMNLP 2020 • Hai Ye, Qingyu Tan, Ruidan He, Juntao Li, Hwee Tou Ng, Lidong Bing
To improve the robustness of self-training, we present class-aware feature self-distillation (CFd), which learns discriminative features from PrLMs: PrLM features are self-distilled into a feature adaptation module, and features from the same class are clustered more tightly.
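The CFd objective sketched in this entry combines two pulls: match the adaptation module's features to the fixed PrLM features, and tighten same-class clusters. The following is a hypothetical numpy sketch of that combination, not the paper's exact loss — the squared-error form and the `alpha` weighting are assumptions:

```python
import numpy as np

def cfd_loss(f_student, f_teacher, labels, alpha=1.0):
    """Feature self-distillation MSE plus a class-aware clustering penalty."""
    # pull adaptation-module features toward the (fixed) PrLM features
    distill = np.mean((f_student - f_teacher) ** 2)
    # pull same-class features toward their class mean (tighter clusters)
    cluster = 0.0
    for c in np.unique(labels):
        fc = f_student[labels == c]
        cluster += np.mean((fc - fc.mean(axis=0)) ** 2)
    return distill + alpha * cluster

feats = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
labels = np.array([0, 0, 1])
loss = cfd_loss(feats, feats, labels)  # features match the teacher and each class is already collapsed
print(loss)  # 0.0
```

With perfectly matched features and each class collapsed to a point, both terms vanish; any drift from the PrLM features or spread within a class raises the loss.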
3 code implementations • ACL 2019 • Ruidan He, Wee Sun Lee, Hwee Tou Ng, Daniel Dahlmeier
Aspect-based sentiment analysis produces a list of aspect terms and their corresponding sentiments for a natural language sentence.
Aspect-Based Sentiment Analysis (ABSA)
1 code implementation • EMNLP 2018 • Ruidan He, Wee Sun Lee, Hwee Tou Ng, Daniel Dahlmeier
We consider the cross-domain sentiment classification problem, where a sentiment classifier is to be learned from a source domain and to be generalized to a target domain.
no code implementations • COLING 2018 • Ruidan He, Wee Sun Lee, Hwee Tou Ng, Daniel Dahlmeier
First, we propose a method for target representation that better captures the semantic meaning of the opinion target.
1 code implementation • ACL 2018 • Ruidan He, Wee Sun Lee, Hwee Tou Ng, Daniel Dahlmeier
Attention-based long short-term memory (LSTM) networks have proven to be useful in aspect-level sentiment classification.
3 code implementations • ACL 2017 • Ruidan He, Wee Sun Lee, Hwee Tou Ng, Daniel Dahlmeier
Unlike topic models which typically assume independently generated words, word embedding models encourage words that appear in similar contexts to be located close to each other in the embedding space.
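The property this entry contrasts with topic models — words appearing in similar contexts end up close in the embedding space — can be illustrated with a toy count-based embedding. This is a hypothetical sketch using PPMI counts plus a truncated SVD, not the paper's model; the corpus and window size are made up:

```python
import numpy as np

corpus = ["i like cats", "i like dogs", "cats eat fish", "dogs eat meat"]
vocab = sorted({w for s in corpus for w in s.split()})
idx = {w: i for i, w in enumerate(vocab)}

# symmetric co-occurrence counts with a context window of 1
C = np.zeros((len(vocab), len(vocab)))
for s in corpus:
    toks = s.split()
    for a, b in zip(toks, toks[1:]):
        C[idx[a], idx[b]] += 1
        C[idx[b], idx[a]] += 1

# positive PMI, then a rank-2 SVD to get dense word vectors
pmi = np.log(C * C.sum() /
             (C.sum(1, keepdims=True) * C.sum(0, keepdims=True)) + 1e-12)
ppmi = np.maximum(pmi, 0.0)
U, S, _ = np.linalg.svd(ppmi)
E = U[:, :2] * S[:2]  # one 2-d embedding per word

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# "cats" and "dogs" share the contexts {like, eat}, so their vectors align
sim = cos(E[idx["cats"]], E[idx["dogs"]])
print(sim)
```

Because "cats" and "dogs" have identical co-occurrence rows in this toy corpus, their embeddings coincide and the cosine similarity is 1, which is exactly the distributional behavior the sentence describes.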