Many efforts have been made to solve the aspect-based sentiment analysis (ABSA) task.
While there is much research on cross-domain text classification, most existing approaches focus on one-to-one or many-to-one domain adaptation.
Traditionally, preparing for a debate requires a manual process that includes reading many articles, selecting claims, identifying the stances of those claims, and seeking evidence for them.
Our model consistently outperforms strong baselines, and its performance exceeds the previous SOTA by 1.36 F1 and 1.46 Ign_F1 on the DocRED leaderboard.
In this work, we explore methods to make better use of the multilingual annotation and language-agnostic property of KG triples, and present novel knowledge-based multilingual language models (KMLMs) trained directly on the knowledge triples.
Data augmentation is an effective solution to data scarcity in low-resource scenarios.
As high-quality labeled data is scarce, unsupervised sentence representation learning has attracted much attention.
It works by adding lightweight adapter modules to a pretrained language model (PrLM) and updating only the parameters of the adapter modules when learning a downstream task.
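As a minimal sketch of this idea, the bottleneck adapter below (in PyTorch, with a residual connection and a default bottleneck size that are common choices rather than details taken from the work above) contains the only parameters that would be updated during downstream training:

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual add."""
    def __init__(self, hidden_size: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The residual connection keeps the PrLM's representation intact
        # when the adapter is near-identity at initialization.
        return x + self.up(self.act(self.down(x)))
```

During fine-tuning, the PrLM's own parameters are frozen (e.g. setting `p.requires_grad = False` on each of them), so gradient updates flow only into modules like this one.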
Experimental results show that our proposed method achieves significant performance improvements over the state-of-the-art pretrained cross-lingual language model in the CLCD setting.
However, SBERT is trained on corpora of high-quality labeled sentence pairs, which limits its application to tasks where labeled data is extremely scarce.
To improve the robustness of self-training, in this paper we present class-aware feature self-distillation (CFd), which learns discriminative features from PrLMs: PrLM features are self-distilled into a feature adaptation module, and features from the same class are clustered more tightly.
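One plausible reading of these two objectives is sketched below, assuming a frozen PrLM encoder and a trainable feature adaptation module; the loss forms here are illustrative and not necessarily those used in the paper:

```python
import torch
import torch.nn.functional as F

def distill_loss(prlm_feat: torch.Tensor, adapted_feat: torch.Tensor) -> torch.Tensor:
    # Self-distillation: match the adaptation module's output to the
    # (detached) PrLM features.
    return F.mse_loss(adapted_feat, prlm_feat.detach())

def cluster_loss(feats: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    # Class-aware clustering: pull each feature toward its class centroid,
    # so same-class features end up more tightly grouped.
    loss = feats.new_zeros(())
    classes = labels.unique()
    for c in classes:
        cls_feats = feats[labels == c]
        centroid = cls_feats.mean(dim=0, keepdim=True)
        loss = loss + ((cls_feats - centroid) ** 2).sum(dim=1).mean()
    return loss / classes.numel()
```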
Aspect-based sentiment analysis produces a list of aspect terms and their corresponding sentiments for a natural language sentence.
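For illustration, a hypothetical input/output pair (the sentence and labels are invented for this example):

```python
# ABSA maps a sentence to a list of (aspect term, sentiment) pairs.
sentence = "The pasta was delicious but the service was slow."

aspects = [
    {"term": "pasta",   "sentiment": "positive"},
    {"term": "service", "sentiment": "negative"},
]
```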
We consider the cross-domain sentiment classification problem, where a sentiment classifier is learned from a source domain and generalized to a target domain.
First, we propose a method for target representation that better captures the semantic meaning of the opinion target.
Attention-based long short-term memory (LSTM) networks have proven to be useful in aspect-level sentiment classification.
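The sketch below illustrates the general pattern of attending over LSTM hidden states conditioned on an aspect embedding; the dimensions, the concatenation-based scoring function, and all module names are illustrative choices, not the architecture of any specific paper:

```python
import torch
import torch.nn as nn

class AspectAttentionLSTM(nn.Module):
    """Minimal attention-over-LSTM-states sketch for aspect-level sentiment."""
    def __init__(self, vocab_size, embed_dim=100, hidden_dim=128, num_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.score = nn.Linear(hidden_dim + embed_dim, 1)
        self.out = nn.Linear(hidden_dim, num_classes)

    def forward(self, tokens, aspect):
        h, _ = self.lstm(self.embed(tokens))              # (B, T, H)
        a = self.embed(aspect).mean(dim=1, keepdim=True)  # aspect embedding (B, 1, E)
        a = a.expand(-1, h.size(1), -1)                   # broadcast over time steps
        # Score each hidden state against the aspect, then normalize.
        attn = torch.softmax(self.score(torch.cat([h, a], dim=-1)).squeeze(-1), dim=1)
        ctx = (attn.unsqueeze(-1) * h).sum(dim=1)         # attention-weighted summary
        return self.out(ctx)
```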
Unlike topic models which typically assume independently generated words, word embedding models encourage words that appear in similar contexts to be located close to each other in the embedding space.
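This notion of proximity is typically measured with cosine similarity; the toy vectors below are invented solely to illustrate the contrast:

```python
import numpy as np

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Made-up embeddings: words that occur in similar contexts should
# receive a higher cosine similarity than unrelated words.
coffee = np.array([0.9, 0.8, 0.1])
tea    = np.array([0.85, 0.75, 0.2])
car    = np.array([0.1, 0.2, 0.9])

print(cosine(coffee, tea))  # high: similar contexts
print(cosine(coffee, car))  # low: dissimilar contexts
```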