1 code implementation • EMNLP 2021 • Wenxuan Zhang, Ruidan He, Haiyun Peng, Lidong Bing, Wai Lam
Many efforts have been made to solve the aspect-based sentiment analysis (ABSA) task.
1 code implementation • EMNLP 2020 • Liying Cheng, Lidong Bing, Qian Yu, Wei Lu, Luo Si
Peer review and rebuttal, with rich interactions and argumentative discussions in between, are naturally a good resource to mine arguments.
Ranked #3 on Argument Pair Extraction (APE) on RR
1 code implementation • Findings (EMNLP) 2021 • Wenxuan Zhang, Yang Deng, Xin Li, Lidong Bing, Wai Lam
This motivates us to investigate the task of ABSA on QA forums (ABSA-QA), aiming to jointly detect the discussed aspects and their sentiment polarities for a given QA pair.
1 code implementation • ACL 2022 • Liying Cheng, Lidong Bing, Ruidan He, Qian Yu, Yan Zhang, Luo Si
Traditionally, a debate usually requires a manual preparation process, including reading many articles, selecting the claims, identifying the stances of the claims, seeking the evidence for the claims, etc.
Claim-Evidence Pair Extraction (CEPE) • Claim Extraction with Stance Classification (CESC)
1 code implementation • Findings (ACL) 2022 • Qingyu Tan, Ruidan He, Lidong Bing, Hwee Tou Ng
Our model consistently outperforms strong baselines and its performance exceeds the previous SOTA by 1.36 F1 and 1.46 Ign_F1 score on the DocRED leaderboard.
Ranked #1 on Relation Extraction on DocRED
1 code implementation • Findings (ACL) 2022 • Yew Ken Chia, Lidong Bing, Soujanya Poria, Luo Si
We introduce the task setting of Zero-Shot Relation Triplet Extraction (ZeroRTE) to encourage further research in low-resource relation extraction methods.
Ranked #1 on Zero-shot Relation Triplet Extraction on FewRel
no code implementations • 2 Mar 2022 • Wenxuan Zhang, Xin Li, Yang Deng, Lidong Bing, Wai Lam
More specifically, we provide a new taxonomy for ABSA which organizes existing studies from the axes of concerned sentiment elements, with an emphasis on recent advances of compound ABSA tasks.
no code implementations • 15 Feb 2022 • Meng Zhou, Xin Li, Yue Jiang, Lidong Bing
Prompting shows promising results in few-shot scenarios.
no code implementations • 22 Nov 2021 • Linlin Liu, Xin Li, Ruidan He, Lidong Bing, Shafiq Joty, Luo Si
Knowledge enriched language representation learning has shown promising performance across various knowledge-intensive NLP tasks.
1 code implementation • Findings (ACL) 2022 • Chenhui Shen, Liying Cheng, Ran Zhou, Lidong Bing, Yang You, Luo Si
A more useful text generator should leverage both the input text and the control signal to guide the generation, which can only be built with a deep understanding of the domain knowledge.
no code implementations • ACL 2022 • Bosheng Ding, Junjie Hu, Lidong Bing, Sharifah Mahani Aljunied, Shafiq Joty, Luo Si, Chunyan Miao
Much recent progress in task-oriented dialogue (ToD) systems has been driven by available annotation data across multiple domains for training.
1 code implementation • EMNLP 2021 • Wenxuan Zhang, Yang Deng, Xin Li, Yifei Yuan, Lidong Bing, Wai Lam
Aspect-based sentiment analysis (ABSA), which has been extensively studied in recent years, typically involves four fundamental sentiment elements: the aspect category, aspect term, opinion term, and sentiment polarity.
1 code implementation • Findings (EMNLP) 2021 • Deng Cai, Xin Li, Jackie Chun-Sing Ho, Lidong Bing, Wai Lam
We study multilingual AMR parsing from the perspective of knowledge distillation, where the aim is to learn and improve a multilingual AMR parser by using an existing English parser as its teacher.
1 code implementation • ACL 2022 • Ran Zhou, Xin Li, Ruidan He, Lidong Bing, Erik Cambria, Luo Si, Chunyan Miao
Data augmentation is an effective solution to data scarcity in low-resource scenarios.
1 code implementation • ACL 2021 • Yan Zhang, Ruidan He, Zuozhu Liu, Lidong Bing, Haizhou Li
As high-quality labeled data is scarce, unsupervised sentence representation learning has attracted much attention.
1 code implementation • ACL 2021 • Liying Cheng, Tianyu Wu, Lidong Bing, Luo Si
Prior research treats this task as a sequence labeling problem and a binary classification problem on two directly concatenated passages, which fails to fully exploit the unique characteristics and inherent relations of the two different passages.
Ranked #2 on Argument Pair Extraction (APE) on RR
1 code implementation • ACL 2021 • Junhao Liu, Zhen Hai, Min Yang, Lidong Bing
In addition, we devise an intra-review coherent reasoning module to identify the coherence between the text content and images of the review, which is a piece of strong evidence for review helpfulness prediction.
no code implementations • ACL 2021 • Linlin Liu, Bosheng Ding, Lidong Bing, Shafiq Joty, Luo Si, Chunyan Miao
With the source-language data as well as the translated data, a generation-based multilingual data augmentation method is introduced to further increase diversity by generating synthetic labeled data in multiple languages.
1 code implementation • ACL 2021 • Wenxuan Zhang, Xin Li, Yang Deng, Lidong Bing, Wai Lam
Aspect-based sentiment analysis (ABSA) has received increasing attention recently.
Ranked #1 on Aspect Sentiment Triplet Extraction on ASTE-Data-V2
Aspect-Based Sentiment Analysis • Aspect Sentiment Triplet Extraction
2 code implementations • ACL 2021 • Lu Xu, Yew Ken Chia, Lidong Bing
Aspect Sentiment Triplet Extraction (ASTE) is the most recent subtask of ABSA which outputs triplets of an aspect target, its associated sentiment, and the corresponding opinion term.
Ranked #2 on Aspect Sentiment Triplet Extraction on ASTE-Data-V2
no code implementations • ACL 2021 • Ruidan He, Linlin Liu, Hai Ye, Qingyu Tan, Bosheng Ding, Liying Cheng, Jia-Wei Low, Lidong Bing, Luo Si
It works by adding light-weight adapter modules to a pretrained language model (PrLM) and only updating the parameters of adapter modules when learning on a downstream task.
1 code implementation • NAACL 2021 • Lu Xu, Zhanming Jie, Wei Lu, Lidong Bing
We believe this is because the two types of features, the contextual information captured by the linear sequences and the structured information captured by the dependency trees, may complement each other.
Ranked #6 on Named Entity Recognition on Ontonotes v5 (English)
no code implementations • 11 Mar 2021 • Linlin Liu, Thien Hai Nguyen, Shafiq Joty, Lidong Bing, Luo Si
We operationalize our framework by first proposing a novel sense-aware cross entropy loss to model word senses explicitly.
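The sense-aware cross entropy idea can be sketched as follows: the prediction head scores sense ids rather than surface-word ids, so occurrences of the same sense share one target class. This is a minimal numpy sketch under assumed dimensions; the paper's actual parameterization and sense inventory may differ.

```python
# Minimal sketch of a sense-aware cross entropy loss: the target is a
# sense id rather than a surface-word id.  All dimensions and the sense
# inventory below are made up for illustration.
import numpy as np

rng = np.random.default_rng(0)
num_senses = 100                             # hypothetical sense vocabulary size
hidden = rng.standard_normal((4, 32))        # contextual states for 4 positions
W = rng.standard_normal((32, num_senses))    # sense-prediction head

logits = hidden @ W
probs = np.exp(logits - logits.max(axis=1, keepdims=True))  # stable softmax
probs /= probs.sum(axis=1, keepdims=True)

sense_ids = np.array([3, 17, 3, 42])         # gold sense id per position
loss = -np.log(probs[np.arange(4), sense_ids]).mean()       # CE over senses
```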
no code implementations • AACL 2020 • Zihao Fu, Bei Shi, Lidong Bing, Wai Lam
In our architecture, we reconstruct KB triples or texts via a closed-loop framework via linking a generator and an extractor.
no code implementations • COLING 2020 • Zihao Fu, Lidong Bing, Wai Lam, Shoaib Jameel
Recently, many KB-to-text generation tasks have been proposed to bridge the gap between knowledge bases and natural language by directly converting a group of knowledge base triples into human-readable sentences.
1 code implementation • 23 Nov 2020 • Juntao Li, Ruidan He, Hai Ye, Hwee Tou Ng, Lidong Bing, Rui Yan
Experimental results show that our proposed method achieves significant performance improvements over the state-of-the-art pretrained cross-lingual language model in the CLCD setting.
no code implementations • EMNLP 2020 • Bosheng Ding, Linlin Liu, Lidong Bing, Canasai Kruengkrai, Thien Hai Nguyen, Shafiq Joty, Luo Si, Chunyan Miao
Data augmentation techniques have been widely used to improve machine learning performance as they enhance the generalization capability of models.
no code implementations • 23 Oct 2020 • Xin Li, Lidong Bing, Wenxuan Zhang, Zheng Li, Wai Lam
Cross-lingual adaptation with multilingual pre-trained language models (mPTLMs) mainly consists of two lines of works: zero-shot approach and translation-based approach, which have been studied extensively on the sequence-level tasks.
1 code implementation • EMNLP 2020 • Yan Zhang, Zhijiang Guo, Zhiyang Teng, Wei Lu, Shay B. Cohen, Zuozhu Liu, Lidong Bing
With the help of these strategies, we are able to train a model with fewer parameters while maintaining the model capacity.
4 code implementations • EMNLP 2020 • Lu Xu, Hao Li, Wei Lu, Lidong Bing
Our observation is that the three elements within a triplet are highly related to each other, and this motivates us to build a joint model to extract such triplets using a sequence tagging approach.
Ranked #4 on Aspect Sentiment Triplet Extraction on SemEval
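As a rough illustration of the sequence-tagging formulation, the toy decoder below reads triplet ingredients off a collapsed BIO tag sequence in which aspect B-tags carry a sentiment label. The tag scheme here is a hypothetical simplification, not the paper's exact position-aware scheme.

```python
# Toy decoder for a collapsed BIO tag scheme (hypothetical): aspect B-tags
# carry a sentiment label (e.g. B-ASP-POS), opinion spans use plain
# B-OPN/I-OPN tags, and spans are read off left to right.

def decode(tokens, tags):
    """Return (aspect_spans, opinion_spans); aspects pair text with sentiment."""
    aspects, opinions = [], []
    span, kind, sent = [], None, None

    def flush():
        if span and kind == "ASP":
            aspects.append((" ".join(span), sent))
        elif span and kind == "OPN":
            opinions.append(" ".join(span))

    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            flush()
            parts = tag.split("-")               # B-ASP-POS or B-OPN
            kind = parts[1]
            sent = parts[2] if len(parts) > 2 else None
            span = [tok]
        elif tag.startswith("I-") and span:
            span.append(tok)
        else:
            flush()
            span, kind, sent = [], None, None
    flush()
    return aspects, opinions

toks = ["The", "battery", "life", "is", "great"]
tags = ["O", "B-ASP-POS", "I-ASP", "O", "B-OPN"]
print(decode(toks, tags))
# → ([('battery life', 'POS')], ['great'])
```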
1 code implementation • EMNLP 2020 • Lu Xu, Lidong Bing, Wei Lu, Fei Huang
Such a design allows the model to extract aspect-specific opinion spans and then evaluate sentiment polarity by exploiting the extracted opinion features.
1 code implementation • EMNLP 2020 • Zihao Fu, Bei Shi, Wai Lam, Lidong Bing, Zhiyuan Liu
This kind of data is much easier to obtain since it can be produced automatically.
1 code implementation • EMNLP 2020 • Yan Zhang, Ruidan He, Zuozhu Liu, Kwan Hui Lim, Lidong Bing
However, SBERT is trained on corpus with high-quality labeled sentence pairs, which limits its application to tasks where labeled data is extremely scarce.
Ranked #14 on Semantic Textual Similarity on STS15
2 code implementations • EMNLP 2020 • Hai Ye, Qingyu Tan, Ruidan He, Juntao Li, Hwee Tou Ng, Lidong Bing
To improve the robustness of self-training, in this paper we present class-aware feature self-distillation (CFd) to learn discriminative features from PrLMs, in which PrLM features are self-distilled into a feature adaptation module and the features from the same class are more tightly clustered.
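The class-aware clustering objective can be approximated by a simple penalty that pulls each feature toward its class centroid. This is an assumed simplification: CFd's full objective also contains a self-distillation term between PrLM features and the feature adaptation module, omitted here.

```python
# Assumed simplification of the class-aware clustering idea: penalize the
# squared distance of each feature to its class centroid, so features from
# the same class cluster more tightly.  The self-distillation (MSE) term
# between PrLM features and the adaptation module is omitted.
import numpy as np

def class_clustering_loss(feats, labels):
    """Mean squared distance of each feature vector to its class centroid."""
    loss = 0.0
    for c in np.unique(labels):
        members = feats[labels == c]
        centroid = members.mean(axis=0)
        loss += ((members - centroid) ** 2).sum()
    return loss / len(feats)

rng = np.random.default_rng(0)
feats = rng.standard_normal((6, 8))          # toy feature vectors
labels = np.array([0, 0, 0, 1, 1, 1])        # toy class labels
print(class_clustering_loss(feats, labels))  # smaller = tighter clusters
```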
no code implementations • ACL 2020 • Canasai Kruengkrai, Thien Hai Nguyen, Sharifah Mahani Aljunied, Lidong Bing
Exploiting sentence-level labels, which are easy to obtain, is one of the plausible methods to improve low-resource named entity recognition (NER), where token-level labels are costly to annotate.
1 code implementation • 17 May 2020 • Juntao Li, Chang Liu, Jian Wang, Lidong Bing, Hongsong Li, Xiaozhong Liu, Dongyan Zhao, Rui Yan
We manually collect a new and high-quality paired dataset, where each pair contains an unordered product attribute set in the source language and an informative product description in the target language.
1 code implementation • EMNLP 2020 • Liying Cheng, Dekun Wu, Lidong Bing, Yan Zhang, Zhanming Jie, Wei Lu, Luo Si
Previous works on knowledge-to-text generation take as input a few RDF triples or key-value pairs conveying the knowledge of some entities to generate a natural language description.
no code implementations • 7 Apr 2020 • Piji Li, Lidong Bing, Zhongyu Wei, Wai Lam
Different from neural machine translation, in the task of text summarization, salience estimation for words, phrases or sentences is a critical component, since the output summary is a distillation of the input text.
no code implementations • 24 Feb 2020 • Rongxiang Weng, Hao-Ran Wei, Shu-Jian Huang, Heng Yu, Lidong Bing, Weihua Luo, Jia-Jun Chen
The encoder maps the words in the input sentence into a sequence of hidden states, which are then fed into the decoder to generate the output sentence.
no code implementations • ACL 2020 • Qian Yu, Lidong Bing, Qiong Zhang, Wai Lam, Luo Si
We propose an iterative learning framework for handling this challenge via adaptive transfer and augmentation of the training instances with the help of the available user-posed question-answer data.
6 code implementations • 5 Nov 2019 • Haiyun Peng, Lu Xu, Lidong Bing, Fei Huang, Wei Lu, Luo Si
In this paper, we introduce a new subtask under ABSA, named aspect sentiment triplet extraction (ASTE).
Ranked #5 on Aspect Sentiment Triplet Extraction on SemEval
no code implementations • IJCNLP 2019 • Chuang Fan, Hongyu Yan, Jiachen Du, Lin Gui, Lidong Bing, Min Yang, Ruifeng Xu, Ruibin Mao
Emotion cause analysis, which aims to identify the reasons behind emotions, is a key topic in sentiment analysis.
no code implementations • IJCNLP 2019 • Kaisong Song, Lidong Bing, Wei Gao, Jun Lin, Lujun Zhao, Jiancheng Wang, Changlong Sun, Xiaozhong Liu, Qiong Zhang
Customers ask questions and customer service staff answer them; this is the basic service model of multi-turn customer service (CS) dialogues on e-commerce platforms.
no code implementations • IJCNLP 2019 • Ran Le, Wenpeng Hu, Mingyue Shang, Zhenjun You, Lidong Bing, Dongyan Zhao, Rui Yan
Previous research on dialogue systems generally focuses on conversations between two participants, yet multi-party conversations, which involve more than two participants within one session, present a more complicated but realistic scenario.
1 code implementation • IJCNLP 2019 • Zheng Li, Xin Li, Ying Wei, Lidong Bing, Yu Zhang, Qiang Yang
Joint extraction of aspects and sentiments can be effectively formulated as a sequence labeling problem.
Aspect-Based Sentiment Analysis • Unsupervised Domain Adaptation
no code implementations • IJCNLP 2019 • Jingjing Li, Yifan Gao, Lidong Bing, Irwin King, Michael R. Lyu
Question generation (QG) is the task of generating a question from a reference sentence and a specified answer within the sentence.
1 code implementation • WS 2019 • Xin Li, Lidong Bing, Wenxuan Zhang, Wai Lam
In this paper, we investigate the modeling power of contextualized embeddings from pre-trained language models, e.g., BERT, on the E2E-ABSA task.
no code implementations • IJCNLP 2019 • Mingyue Shang, Piji Li, Zhenxin Fu, Lidong Bing, Dongyan Zhao, Shuming Shi, Rui Yan
The text style transfer task requires the model to transfer a sentence of one style to another style while retaining its original content meaning, a challenging problem that has long suffered from the shortage of parallel data.
no code implementations • IJCNLP 2019 • Zihao Wang, Kwun Ping Lai, Piji Li, Lidong Bing, Wai Lam
Therefore, we propose a meta-learning framework that aims at handling infrequent relations with few-shot learning and uncommon entities by using textual descriptions.
1 code implementation • IJCNLP 2019 • Linlin Liu, Xiang Lin, Shafiq Joty, Simeng Han, Lidong Bing
Transition-based top-down parsing with pointer networks has achieved state-of-the-art results in multiple parsing tasks, while having a linear time complexity.
1 code implementation • NAACL 2019 • Wang Chen, Hou Pong Chan, Piji Li, Lidong Bing, Irwin King
For further exploiting the power of extraction and retrieval, we propose a neural-based merging module to combine and re-rank the predicted keyphrases from the enhanced generative model, the extractive model, and the retrieved keyphrases.
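The merge-and-rerank step can be illustrated with a hand-written heuristic that unions candidates from the three sources and sums their scores; the paper's merging module is a learned neural component, not this heuristic, and all phrases and scores below are made up.

```python
# Toy merge-and-rerank over candidate keyphrases from three sources
# (generated, extracted, retrieved).  Phrases proposed by several sources
# accumulate score; the real system learns this merge, this is a sketch.
def merge_rerank(*candidate_lists):
    """Union (phrase, score) candidates, summing scores across sources."""
    pool = {}
    for cands in candidate_lists:
        for phrase, score in cands:
            pool[phrase] = pool.get(phrase, 0.0) + score
    return sorted(pool.items(), key=lambda kv: -kv[1])

generated = [("neural summarization", 0.5), ("attention", 0.25)]
extracted = [("attention", 0.5), ("encoder-decoder", 0.25)]
retrieved = [("neural summarization", 0.5)]
print(merge_rerank(generated, extracted, retrieved))
# → [('neural summarization', 1.0), ('attention', 0.75), ('encoder-decoder', 0.25)]
```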
no code implementations • 6 Mar 2019 • Piji Li, ZiHao Wang, Lidong Bing, Wai Lam
In order to exploit the persona information, we propose a framework based on adversarial variational auto-encoders (aVAE) for persona modeling from the historical tips and reviews of users and items.
no code implementations • 13 Dec 2018 • Shen Gao, Xiuying Chen, Piji Li, Zhaochun Ren, Lidong Bing, Dongyan Zhao, Rui Yan
To tackle this problem, we propose the task of reader-aware abstractive summary generation, which utilizes the reader comments to help the model produce better summary about the main aspect.
Ranked #1 on Reader-Aware Summarization on RASG
1 code implementation • 13 Nov 2018 • Xin Li, Lidong Bing, Piji Li, Wai Lam
Target-based sentiment analysis involves opinion target extraction and target sentiment classification.
no code implementations • EMNLP 2018 • Di Chen, Jiachen Du, Lidong Bing, Ruifeng Xu
Inferring the agreement/disagreement relation in debates, especially in online debates, is one of the fundamental tasks in argumentation mining.
no code implementations • EMNLP 2018 • Jiachen Du, Wenjie Li, Yulan He, Ruifeng Xu, Lidong Bing, Xuan Wang
Combining the virtues of probability graphic models and neural networks, Conditional Variational Auto-encoder (CVAE) has shown promising performance in applications such as response generation.
1 code implementation • EMNLP 2018 • Yi Liao, Lidong Bing, Piji Li, Shuming Shi, Wai Lam, Tong Zhang
For example, an input sequence could be a word sequence, such as a review sentence or advertisement text.
no code implementations • EMNLP 2018 • Thanapon Noraset, Doug Downey, Lidong Bing
Recurrent neural network language models (RNNLMs) are the current standard-bearer for statistical language modeling.
2 code implementations • 8 Sep 2018 • Yifan Gao, Lidong Bing, Piji Li, Irwin King, Michael R. Lyu
We investigate the task of distractor generation for multiple choice reading comprehension questions from examinations.
no code implementations • 10 Jul 2018 • Yifan Gao, Lidong Bing, Wang Chen, Michael R. Lyu, Irwin King
We investigate the difficulty levels of questions in reading comprehension datasets such as SQuAD, and propose a new question generation setting, named Difficulty-controllable Question Generation (DQG).
no code implementations • ACL 2018 • Bei Shi, Zihao Fu, Lidong Bing, Wai Lam
Given reviews from different domains, some existing methods for word embeddings exploit sentiment information, but they cannot produce domain-sensitive embeddings.
1 code implementation • ACL 2018 • Xin Li, Lidong Bing, Wai Lam, Bei Shi
Between the two layers, we propose a component to generate target-specific representations of words in the sentence, meanwhile incorporate a mechanism for preserving the original contextual information from the RNN layer.
Ranked #17 on Aspect-Based Sentiment Analysis on SemEval 2014 Task 4 Sub Task 2 (Laptop (Acc) metric)
1 code implementation • 2 May 2018 • Xin Li, Lidong Bing, Piji Li, Wai Lam, Zhimou Yang
Aspect Term Extraction (ATE), a key sub-task in Aspect-Based Sentiment Analysis, aims to extract explicit aspect expressions from online user reviews.
no code implementations • NeurIPS 2018 • Haitian Sun, William W. Cohen, Lidong Bing
We propose a technique for declaratively specifying strategies for semi-supervised learning (SSL).
no code implementations • 28 Mar 2018 • Piji Li, Lidong Bing, Wai Lam
For the critic, we combine the maximum likelihood estimator with a well designed global summary quality estimator which is a neural network based binary classifier aiming to make the generated summaries indistinguishable from the human-written ones.
2 code implementations • EMNLP 2017 • Peng Chen, Zhongqian Sun, Lidong Bing, Wei Yang
We propose a novel framework based on neural networks to identify the sentiment of opinion targets in a comment/review.
no code implementations • EMNLP 2017 • Piji Li, Wai Lam, Lidong Bing, Weiwei Guo, Hang Li
The attention weights are learned automatically by an unsupervised data reconstruction framework which can capture the sentence salience.
no code implementations • WS 2017 • Piji Li, Lidong Bing, Wai Lam
We investigate the problem of reader-aware multi-document summarization (RA-MDS) and introduce a new dataset for this problem.
1 code implementation • EMNLP 2017 • Piji Li, Wai Lam, Lidong Bing, ZiHao Wang
We propose a new framework for abstractive text summarization based on a sequence-to-sequence oriented encoder-decoder model equipped with a deep recurrent generative decoder (DRGN).
Ranked #5 on Text Summarization on DUC 2004 Task 1
no code implementations • 1 Aug 2017 • Piji Li, ZiHao Wang, Zhaochun Ren, Lidong Bing, Wai Lam
In essence, writing some tips and giving a numerical rating are two facets of a user's product assessment action, expressing the user experience and feelings.
no code implementations • 5 Mar 2017 • Lidong Bing, William W. Cohen, Bhuwan Dhingra
We propose a general approach to modeling semi-supervised learning (SSL) algorithms.
no code implementations • 10 Jun 2016 • Lidong Bing, Bhuwan Dhingra, Kathryn Mazaitis, Jong Hyuk Park, William W. Cohen
We propose a framework to improve performance of distantly-supervised relation extraction, by jointly learning to solve two related tasks: concept-instance extraction and relation extraction.
no code implementations • 4 Jan 2016 • Lidong Bing, Mingyang Ling, Richard C. Wang, William W. Cohen
Distant labeling for information extraction (IE) suffers from noisy training data.
no code implementations • IJCNLP 2015 • Lidong Bing, Piji Li, Yi Liao, Wai Lam, Weiwei Guo, Rebecca J. Passonneau
We propose an abstraction-based multi-document summarization framework that can construct new sentences by exploring more fine-grained syntactic units than sentences, namely, noun/verb phrases.
no code implementations • 28 Apr 2015 • Piji Li, Lidong Bing, Wai Lam, Hang Li, Yi Liao
We propose a new MDS paradigm called reader-aware multi-document summarization (RA-MDS).