1 code implementation • EMNLP 2021 • Wenxuan Zhang, Ruidan He, Haiyun Peng, Lidong Bing, Wai Lam
Many efforts have been made to solve the aspect-based sentiment analysis (ABSA) task.
Aspect-Based Sentiment Analysis (ABSA)
Cross-Lingual Transfer
1 code implementation • EMNLP 2020 • Liying Cheng, Lidong Bing, Qian Yu, Wei Lu, Luo Si
Peer review and rebuttal, with rich interactions and argumentative discussions in between, are naturally a good resource for mining arguments.
Ranked #3 on Argument Pair Extraction (APE) on RR
1 code implementation • COLING 2022 • Qingyu Tan, Ruidan He, Lidong Bing, Hwee Tou Ng
While there is much research on cross-domain text classification, most existing approaches focus on one-to-one or many-to-one domain adaptation.
1 code implementation • Findings (EMNLP) 2021 • Wenxuan Zhang, Yang Deng, Xin Li, Lidong Bing, Wai Lam
This motivates us to investigate the task of ABSA on QA forums (ABSA-QA), aiming to jointly detect the discussed aspects and their sentiment polarities for a given QA pair.
no code implementations • 20 Jun 2023 • Xuan-Phi Nguyen, Sharifah Mahani Aljunied, Shafiq Joty, Lidong Bing
Large language models (LLMs) are known to effectively perform tasks by simply observing a few exemplars.
1 code implementation • 16 Jun 2023 • Qingyu Tan, Lu Xu, Lidong Bing, Hwee Tou Ng
We conducted experiments on document-level and biomedical relation extraction datasets, and the results showed that our proposed self-training framework consistently outperforms existing competitive methods on the Re-DocRED and ChemDisgene datasets when the training data are incompletely annotated.
1 code implementation • 15 Jun 2023 • Qingyu Tan, Hwee Tou Ng, Lidong Bing
In this paper, we introduce TempReason, a comprehensive probing dataset to evaluate the temporal reasoning capability of large language models.
1 code implementation • 8 Jun 2023 • Wenxuan Zhang, Sharifah Mahani Aljunied, Chang Gao, Yew Ken Chia, Lidong Bing
M3Exam exhibits three unique characteristics: (1) multilingualism, encompassing questions from multiple countries that require strong multilingual proficiency and cultural knowledge; (2) multimodality, accounting for the multimodal nature of many exam questions to test the model's multimodal understanding capability; and (3) multilevel structure, featuring exams from three critical educational periods to comprehensively assess a model's proficiency at different levels.
2 code implementations • 7 Jun 2023 • Yew Ken Chia, Pengfei Hong, Lidong Bing, Soujanya Poria
Instruction-tuned large language models have revolutionized natural language processing and have shown great potential in applications such as conversational agents.
1 code implementation • 5 Jun 2023 • Hang Zhang, Xin Li, Lidong Bing
For the second challenge, we leverage ImageBind, a universal embedding model aligning multiple modalities as the pre-trained audio encoder, and introduce an Audio Q-former on top of ImageBind to learn reasonable auditory query embeddings for the LLM module.
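The Q-Former mechanism mentioned above can be pictured as a small set of learned queries cross-attending to frozen audio features. Below is a minimal sketch in PyTorch; the class name, dimensions, and layer layout are illustrative assumptions, not Video-LLaMA's actual code.

```python
import torch
import torch.nn as nn

class AudioQFormerSketch(nn.Module):
    """Illustrative Q-Former-style block: learned queries cross-attend
    to (frozen) audio features, e.g. from an ImageBind-like encoder."""

    def __init__(self, num_queries=8, d_model=768, n_heads=8):
        super().__init__()
        # Learnable query embeddings, shared across all inputs.
        self.queries = nn.Parameter(torch.randn(num_queries, d_model))
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model)
        )
        self.norm1 = nn.LayerNorm(d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, audio_feats):  # audio_feats: (B, T, d_model)
        B = audio_feats.size(0)
        q = self.queries.unsqueeze(0).expand(B, -1, -1)   # (B, num_queries, d_model)
        attn_out, _ = self.cross_attn(q, audio_feats, audio_feats)
        q = self.norm1(q + attn_out)
        q = self.norm2(q + self.ffn(q))
        return q  # fixed-length query embeddings, to be projected into the LLM's input space
```

Whatever the audio length, the output is a fixed, small number of query embeddings, which keeps the LLM's input budget bounded.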
1 code implementation • 31 May 2023 • Jia Guo, Liying Cheng, Wenxuan Zhang, Stanley Kok, Xin Li, Lidong Bing
In this work, we propose, for the first time, a challenging argument quadruplet extraction task (AQE), which can provide an all-in-one extraction of four argumentative components, i.e., claims, evidence, evidence types, and stances.
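The quadruplet output can be pictured as a simple record; the field values below are made up for illustration and are not taken from the paper's data.

```python
from dataclasses import dataclass

@dataclass
class ArgumentQuadruplet:
    claim: str          # the claim being argued
    evidence: str       # a piece of evidence related to the claim
    evidence_type: str  # label for the kind of evidence (illustrative label set)
    stance: str         # stance of the claim toward the topic, e.g. "support"

# A hand-written example, just to show the shape of one extraction:
quad = ArgumentQuadruplet(
    claim="School uniforms should be mandatory.",
    evidence="A 2019 district survey reported fewer dress-code incidents.",
    evidence_type="research",
    stance="support",
)
```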
1 code implementation • 24 May 2023 • Wenxuan Zhang, Yue Deng, Bing Liu, Sinno Jialin Pan, Lidong Bing
This paper aims to provide a comprehensive investigation into the capabilities of LLMs in performing various sentiment analysis tasks, from conventional sentiment classification to aspect-based sentiment analysis and multifaceted analysis of subjective texts.
1 code implementation • 24 May 2023 • Xingxuan Li, Liying Cheng, Qingyu Tan, Hwee Tou Ng, Shafiq Joty, Lidong Bing
Our preliminary experiments show that generating intermediate reasoning steps does not always boost the performance of complex temporal question-answering tasks.
1 code implementation • 24 May 2023 • Liying Cheng, Xingxuan Li, Lidong Bing
As large language models (LLMs) have demonstrated powerful capabilities across many domains and tasks, including context understanding, code generation, language generation, and data storytelling, many data analysts may wonder whether their jobs will be replaced by AI.
no code implementations • 23 May 2023 • Yew Ken Chia, Hui Chen, Wei Han, Guizhen Chen, Sharifah Mahani Aljunied, Soujanya Poria, Lidong Bing
Aspect Sentiment Triplet Extraction (ASTE) is a subtask of Aspect-Based Sentiment Analysis (ABSA) that considers each opinion term, its expressed sentiment, and the corresponding aspect target.
Aspect-Based Sentiment Analysis (ABSA)
Aspect Sentiment Triplet Extraction
1 code implementation • 23 May 2023 • Weiwen Xu, Xin Li, Wai Lam, Lidong Bing
mPMR aims to guide multilingual pre-trained language models (mPLMs) to perform natural language understanding (NLU) including both sequence classification and span extraction in multiple languages.
1 code implementation • 23 May 2023 • Ran Zhou, Xin Li, Lidong Bing, Erik Cambria, Chunyan Miao
In cross-lingual named entity recognition (NER), self-training is commonly used to bridge the linguistic gap by training on pseudo-labeled target-language data.
1 code implementation • 22 May 2023 • Lu Xu, Lidong Bing, Wei Lu
Distantly supervised named entity recognition (DS-NER) has been proposed to exploit the automatically labeled training data instead of human annotations.
1 code implementation • 22 May 2023 • Thong Nguyen, Xiaobao Wu, Xinshuai Dong, Anh Tuan Luu, Cong-Duy Nguyen, Zhen Hai, Lidong Bing
Multimodal Review Helpfulness Prediction (MRHP) aims to rank product reviews based on predicted helpfulness scores and has been widely applied in e-commerce via presenting customers with useful reviews.
no code implementations • 22 May 2023 • Chenhui Shen, Liying Cheng, Yang You, Lidong Bing
We also draw attention to LLMs' deteriorating evaluation capability as the quality of summaries rises.
no code implementations • 22 May 2023 • Xingxuan Li, Ruochen Zhao, Yew Ken Chia, Bosheng Ding, Lidong Bing, Shafiq Joty, Soujanya Poria
We introduce Chain of Knowledge (CoK), a framework that augments large language models with structured knowledge bases to improve factual correctness and reduce hallucination.
no code implementations • 19 May 2023 • Huiming Wang, Liying Cheng, Wenxuan Zhang, De Wen Soh, Lidong Bing
Recently, data augmentation (DA) methods have been proven to be effective for pre-trained language models (PLMs) in low-resource settings, including few-shot named entity recognition (NER).
1 code implementation • 19 May 2023 • Shengqiong Wu, Hao Fei, Yixin Cao, Lidong Bing, Tat-Seng Chua
First, we represent the fine-grained semantic structures of the input image and text with the visual and textual scene graphs, which are further fused into a unified cross-modal graph (CMG).
1 code implementation • 19 May 2023 • Chaoqun Liu, Wenxuan Zhang, Guizhen Chen, Xiaobao Wu, Anh Tuan Luu, Chip Hong Chang, Lidong Bing
In this work, we propose a new paradigm based on self-supervised learning to solve zero-shot text classification tasks by tuning the language models with unlabeled data, called self-supervised tuning.
1 code implementation • 18 May 2023 • Hao Fei, Bobo Li, Qian Liu, Lidong Bing, Fei Li, Tat-Seng Chua
While sentiment analysis systems try to determine the sentiment polarities of given targets based on the key opinion expressions in input texts, in implicit sentiment analysis (ISA) the opinion cues come in an implicit and obscure manner.
1 code implementation • 16 May 2023 • Chang Gao, Wenxuan Zhang, Wai Lam, Lidong Bing
Information extraction (IE) systems aim to automatically extract structured information, such as named entities, relations between entities, and events, from unstructured texts.
1 code implementation • 16 May 2023 • Yue Deng, Wenxuan Zhang, Sinno Jialin Pan, Lidong Bing
Cross-domain aspect-based sentiment analysis (ABSA) aims to perform various fine-grained sentiment analysis tasks on a target domain by transferring knowledge from a source domain.
no code implementations • 15 May 2023 • Chenhui Shen, Liying Cheng, Xuan-Phi Nguyen, Yang You, Lidong Bing
Pre-trained language models (PLMs) have accomplished impressive achievements in abstractive single-document summarization (SDS).
1 code implementation • 5 May 2023 • Ruochen Zhao, Xingxuan Li, Shafiq Joty, Chengwei Qin, Lidong Bing
As large language models (LLMs) have become the norm in NLP, demonstrating good performance in generation and reasoning tasks, one of their most fatal disadvantages is the lack of factual correctness.
2 code implementations • 4 Apr 2023 • Zhiqiang Hu, Yihuai Lan, Lei Wang, Wanyu Xu, Ee-Peng Lim, Roy Ka-Wei Lee, Lidong Bing, Xing Xu, Soujanya Poria
To enable further research on PEFT methods of LLMs, this paper presents LLM-Adapters, an easy-to-use framework that integrates various adapters into LLMs and can execute these adapter-based PEFT methods of LLMs for different tasks.
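As a rough picture of what one such adapter-based PEFT method looks like, here is a generic series bottleneck adapter; this is a sketch of the technique, not LLM-Adapters' actual API.

```python
import torch
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Generic series adapter: down-project, nonlinearity, up-project,
    plus a residual connection. Only these few parameters are trained;
    the surrounding transformer stays frozen."""

    def __init__(self, d_model=4096, bottleneck=64):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)
        self.up = nn.Linear(bottleneck, d_model)
        self.act = nn.ReLU()
        nn.init.zeros_(self.up.weight)  # start as an identity mapping
        nn.init.zeros_(self.up.bias)

    def forward(self, hidden_states):
        return hidden_states + self.up(self.act(self.down(hidden_states)))
```

In practice such a module is inserted after the attention or feed-forward sublayer of each frozen transformer block, which is what makes the method parameter-efficient.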
1 code implementation • 3 Apr 2023 • Jia Guo, Stanley Kok, Lidong Bing
In addition, we introduce two new data regimes to mimic more realistic scenarios with annotation errors and evaluate our sampling strategy.
no code implementations • 20 Dec 2022 • Bosheng Ding, Chengwei Qin, Linlin Liu, Yew Ken Chia, Shafiq Joty, Boyang Li, Lidong Bing
In this paper, we evaluate the performance of GPT-3 as a data annotator by comparing it with traditional data annotation methods and analyzing its output on a range of tasks.
no code implementations • 20 Dec 2022 • Xingxuan Li, Yutong Li, Shafiq Joty, Linlin Liu, Fei Huang, Lin Qiu, Lidong Bing
Based on these findings, we recommend applying more systematic and comprehensive psychological metrics to further evaluate and improve the safety of LLMs.
1 code implementation • 9 Dec 2022 • Weiwen Xu, Xin Li, Wenxuan Zhang, Meng Zhou, Wai Lam, Luo Si, Lidong Bing
We present Pre-trained Machine Reader (PMR), a novel method for retrofitting pre-trained masked language models (MLMs) to pre-trained machine reading comprehension (MRC) models without acquiring labeled data.
1 code implementation • 28 Nov 2022 • Zihao Fu, Haoran Yang, Anthony Man-Cho So, Wai Lam, Lidong Bing, Nigel Collier
How to choose the tunable parameters?
1 code implementation • 18 Nov 2022 • Yew Ken Chia, Lidong Bing, Sharifah Mahani Aljunied, Luo Si, Soujanya Poria
Hence, we propose CubeRE, a cube-filling model inspired by table-filling approaches that explicitly considers the interaction between relation triplets and qualifiers (see the sketch below).
Ranked #1 on Hyper-Relational Extraction on HyperRED
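Hyper-relational extraction attaches qualifier (key, value) pairs to a base relation triplet. A sketch of the target structure, with made-up values that are not taken from HyperRED:

```python
from dataclasses import dataclass, field

@dataclass
class HyperRelationalFact:
    head: str
    relation: str
    tail: str
    qualifiers: dict = field(default_factory=dict)  # qualifier relation -> value entity

# Illustrative example only:
fact = HyperRelationalFact(
    head="Marie Curie",
    relation="educated at",
    tail="University of Paris",
    qualifiers={"academic degree": "doctorate", "end time": "1903"},
)
```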
1 code implementation • 17 Nov 2022 • Ran Zhou, Xin Li, Lidong Bing, Erik Cambria, Luo Si, Chunyan Miao
We propose ConNER, a novel consistency training framework for cross-lingual NER, which comprises: (1) translation-based consistency training on unlabeled target-language data, and (2) dropout-based consistency training on labeled source-language data.
1 code implementation • 16 Nov 2022 • Linlin Liu, Xingxuan Li, Megh Thakkar, Xin Li, Shafiq Joty, Luo Si, Lidong Bing
Due to their huge number of parameters, fine-tuning of pretrained language models (PLMs) is prone to overfitting in low-resource scenarios.
1 code implementation • 7 Nov 2022 • Thong Nguyen, Xiaobao Wu, Anh-Tuan Luu, Cong-Duy Nguyen, Zhen Hai, Lidong Bing
To overcome the aforementioned issues, we propose Multimodal Contrastive Learning for the Multimodal Review Helpfulness Prediction (MRHP) problem, concentrating on mutual information between input modalities to explicitly capture cross-modal relations.
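A mutual-information objective between modalities is commonly approximated with an InfoNCE-style contrastive loss. The sketch below is a generic formulation under that assumption, not the paper's exact loss.

```python
import torch
import torch.nn.functional as F

def cross_modal_info_nce(text_emb, image_emb, temperature=0.07):
    """Contrast each text embedding against all image embeddings in the
    batch; matched (text, image) pairs sit on the diagonal."""
    text_emb = F.normalize(text_emb, dim=-1)    # (B, d)
    image_emb = F.normalize(image_emb, dim=-1)  # (B, d)
    logits = text_emb @ image_emb.t() / temperature  # (B, B) similarity matrix
    targets = torch.arange(text_emb.size(0), device=text_emb.device)
    # Symmetric loss: text->image and image->text directions.
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2
```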
1 code implementation • 26 Oct 2022 • Chenhui Shen, Liying Cheng, Lidong Bing, Yang You, Luo Si
A wide range of control perspectives have been explored in controllable text generation.
1 code implementation • 18 Oct 2022 • Deng Cai, Xin Li, Jackie Chun-Sing Ho, Lidong Bing, Wai Lam
Unlike most prior work that only evaluates the ability to measure semantic similarity, we present a thorough evaluation of existing multilingual sentence embeddings and our improved versions, which include a collection of five transfer tasks in different downstream applications.
1 code implementation • 17 Oct 2022 • Weiwen Xu, Xin Li, Yang Deng, Wai Lam, Lidong Bing
Specifically, a novel Peer Data Augmentation (PeerDA) approach is proposed which employs span pairs with the PR relation as the augmentation data for training.
no code implementations • 26 Sep 2022 • Zihao Fu, Yijiang River Dong, Lidong Bing, Wai Lam
With the development of the encoder-decoder architecture, researchers are able to study text generation tasks with broader types of data.
1 code implementation • COLING 2022 • Wei Han, Hui Chen, Zhen Hai, Soujanya Poria, Lidong Bing
With the boom of e-commerce, Multimodal Review Helpfulness Prediction (MRHP), which aims to sort product reviews according to predicted helpfulness scores, has become a research hotspot.
2 code implementations • 25 May 2022 • Qingyu Tan, Lu Xu, Lidong Bing, Hwee Tou Ng, Sharifah Mahani Aljunied
We analyze the causes and effects of the overwhelming false negative problem in the DocRED dataset.
1 code implementation • ACL 2022 • Liying Cheng, Lidong Bing, Ruidan He, Qian Yu, Yan Zhang, Luo Si
Traditionally, a debate usually requires a manual preparation process, including reading plenty of articles, selecting the claims, identifying the stances of the claims, seeking the evidence for the claims, etc.
Claim-Evidence Pair Extraction (CEPE)
Claim Extraction with Stance Classification (CESC)
1 code implementation • Findings (ACL) 2022 • Qingyu Tan, Ruidan He, Lidong Bing, Hwee Tou Ng
Our model consistently outperforms strong baselines and its performance exceeds the previous SOTA by 1.36 F1 and 1.46 Ign_F1 score on the DocRED leaderboard.
Ranked #2 on Relation Extraction on DocRED
2 code implementations • Findings (ACL) 2022 • Yew Ken Chia, Lidong Bing, Soujanya Poria, Luo Si
We introduce the task setting of Zero-Shot Relation Triplet Extraction (ZeroRTE) to encourage further research in low-resource relation extraction methods.
Ranked #1 on Zero-shot Relation Triplet Extraction on Wiki-ZSL
1 code implementation • 2 Mar 2022 • Wenxuan Zhang, Xin Li, Yang Deng, Lidong Bing, Wai Lam
More specifically, we provide a new taxonomy for ABSA which organizes existing studies from the axes of concerned sentiment elements, with an emphasis on recent advances of compound ABSA tasks.
1 code implementation • 15 Feb 2022 • Meng Zhou, Xin Li, Yue Jiang, Lidong Bing
Prompting shows promising results in few-shot scenarios.
1 code implementation • 22 Nov 2021 • Linlin Liu, Xin Li, Ruidan He, Lidong Bing, Shafiq Joty, Luo Si
In this work, we explore methods to make better use of the multilingual annotation and language agnostic property of KG triples, and present novel knowledge based multilingual language models (KMLMs) trained directly on the knowledge triples.
1 code implementation • Findings (ACL) 2022 • Chenhui Shen, Liying Cheng, Ran Zhou, Lidong Bing, Yang You, Luo Si
A more useful text generator should leverage both the input text and the control signal to guide the generation, which can only be built with a deep understanding of the domain knowledge.
1 code implementation • ACL 2022 • Bosheng Ding, Junjie Hu, Lidong Bing, Sharifah Mahani Aljunied, Shafiq Joty, Luo Si, Chunyan Miao
Much recent progress in task-oriented dialogue (ToD) systems has been driven by available annotation data across multiple domains for training.
1 code implementation • EMNLP 2021 • Wenxuan Zhang, Yang Deng, Xin Li, Yifei Yuan, Lidong Bing, Wai Lam
Aspect-based sentiment analysis (ABSA) has been extensively studied in recent years, which typically involves four fundamental sentiment elements, including the aspect category, aspect term, opinion term, and sentiment polarity.
Ranked #3 on Aspect-Based Sentiment Analysis (ABSA) on TASD
Aspect-Based Sentiment Analysis (ABSA)
Paraphrase Generation
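Given the four sentiment elements above and the paraphrase-generation formulation this entry is tagged with, one common recipe is to linearize each quadruple into a natural-language target sentence for a seq2seq model. The template wording below is an illustrative assumption, not necessarily the paper's exact template.

```python
def quad_to_paraphrase(category, aspect, opinion, polarity):
    """Linearize one sentiment quadruple into a target sentence for
    seq2seq training. Template wording is an illustrative assumption."""
    return f"{category} is {polarity} because {aspect} is {opinion}"

# Hypothetical example for the review "The pasta was great":
print(quad_to_paraphrase("food quality", "pasta", "great", "positive"))
# -> "food quality is positive because pasta is great"
```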
1 code implementation • Findings (EMNLP) 2021 • Deng Cai, Xin Li, Jackie Chun-Sing Ho, Lidong Bing, Wai Lam
We study multilingual AMR parsing from the perspective of knowledge distillation, where the aim is to learn and improve a multilingual AMR parser by using an existing English parser as its teacher.
1 code implementation • ACL 2022 • Ran Zhou, Xin Li, Ruidan He, Lidong Bing, Erik Cambria, Luo Si, Chunyan Miao
Data augmentation is an effective solution to data scarcity in low-resource scenarios.
1 code implementation • ACL 2021 • Yan Zhang, Ruidan He, Zuozhu Liu, Lidong Bing, Haizhou Li
As high-quality labeled data is scarce, unsupervised sentence representation learning has attracted much attention.
1 code implementation • ACL 2021 • Wenxuan Zhang, Xin Li, Yang Deng, Lidong Bing, Wai Lam
Aspect-based sentiment analysis (ABSA) has received increasing attention recently.
Ranked #4 on Aspect Sentiment Triplet Extraction on ASTE-Data-V2
Aspect-Based Sentiment Analysis (ABSA)
Aspect Sentiment Triplet Extraction
no code implementations • ACL 2021 • Linlin Liu, Bosheng Ding, Lidong Bing, Shafiq Joty, Luo Si, Chunyan Miao
With the source-language data as well as the translated data, a generation-based multilingual data augmentation method is introduced to further increase diversity by generating synthetic labeled data in multiple languages.
1 code implementation • ACL 2021 • Junhao Liu, Zhen Hai, Min Yang, Lidong Bing
In addition, we also devise an intra-review coherent reasoning module to identify the coherence between the text content and images of the review, which is a piece of strong evidence for review helpfulness prediction.
1 code implementation • ACL 2021 • Liying Cheng, Tianyu Wu, Lidong Bing, Luo Si
Prior research treats this task as a sequence labeling problem and a binary classification problem on two directly concatenated passages, which fails to fully utilize the unique characteristics and inherent relations of the two passages.
Ranked #2 on Argument Pair Extraction (APE) on RR
2 code implementations • ACL 2021 • Lu Xu, Yew Ken Chia, Lidong Bing
Aspect Sentiment Triplet Extraction (ASTE) is the most recent subtask of ABSA which outputs triplets of an aspect target, its associated sentiment, and the corresponding opinion term.
Ranked #5 on Aspect Sentiment Triplet Extraction on ASTE-Data-V2
no code implementations • ACL 2021 • Ruidan He, Linlin Liu, Hai Ye, Qingyu Tan, Bosheng Ding, Liying Cheng, Jia-Wei Low, Lidong Bing, Luo Si
It works by adding light-weight adapter modules to a pretrained language model (PrLM) and only updating the parameters of adapter modules when learning on a downstream task.
1 code implementation • NAACL 2021 • Lu Xu, Zhanming Jie, Wei Lu, Lidong Bing
We believe this is because the two types of features, the contextual information captured by the linear sequences and the structured information captured by the dependency trees, may complement each other.
1 code implementation • COLING 2022 • Linlin Liu, Thien Hai Nguyen, Shafiq Joty, Lidong Bing, Luo Si
We operationalize our framework by first proposing a novel sense-aware cross entropy loss to model word senses explicitly.
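As a rough illustration of what "modeling word senses explicitly" with a cross entropy loss can look like, here is a generic sketch under assumed inputs; it is not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def sense_aware_cross_entropy(hidden, sense_embeddings, sense_ids):
    """Score each candidate sense of the target word against the model's
    hidden state and apply cross entropy over senses (a generic sketch).

    hidden:           (B, d)  contextual representation of the target word
    sense_embeddings: (S, d)  one embedding per sense in the inventory
    sense_ids:        (B,)    gold sense index for each example
    """
    logits = hidden @ sense_embeddings.t()  # (B, S) sense scores
    return F.cross_entropy(logits, sense_ids)
```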
no code implementations • COLING 2020 • Zihao Fu, Lidong Bing, Wai Lam, Shoaib Jameel
Recently, many KB-to-text generation tasks have been proposed to bridge the gap between knowledge bases and natural language by directly converting a group of knowledge base triples into human-readable sentences.
no code implementations • AACL 2020 • Zihao Fu, Bei Shi, Lidong Bing, Wai Lam
In our architecture, we reconstruct KB triples or texts via a closed-loop framework that links a generator and an extractor.
1 code implementation • 23 Nov 2020 • Juntao Li, Ruidan He, Hai Ye, Hwee Tou Ng, Lidong Bing, Rui Yan
Experimental results show that our proposed method achieves significant performance improvements over the state-of-the-art pretrained cross-lingual language model in the CLCD setting.
no code implementations • EMNLP 2020 • Bosheng Ding, Linlin Liu, Lidong Bing, Canasai Kruengkrai, Thien Hai Nguyen, Shafiq Joty, Luo Si, Chunyan Miao
Data augmentation techniques have been widely used to improve machine learning performance as they enhance the generalization capability of models.
no code implementations • 23 Oct 2020 • Xin Li, Lidong Bing, Wenxuan Zhang, Zheng Li, Wai Lam
Cross-lingual adaptation with multilingual pre-trained language models (mPTLMs) mainly consists of two lines of work: the zero-shot approach and the translation-based approach, which have been studied extensively on sequence-level tasks.
1 code implementation • EMNLP 2020 • Yan Zhang, Zhijiang Guo, Zhiyang Teng, Wei Lu, Shay B. Cohen, Zuozhu Liu, Lidong Bing
With the help of these strategies, we are able to train a model with fewer parameters while maintaining the model capacity.
1 code implementation • EMNLP 2020 • Lu Xu, Lidong Bing, Wei Lu, Fei Huang
Such a design allows the model to extract aspect-specific opinion spans and then evaluate sentiment polarity by exploiting the extracted opinion features.
4 code implementations • EMNLP 2020 • Lu Xu, Hao Li, Wei Lu, Lidong Bing
Our observation is that the three elements within a triplet are highly related to each other, and this motivates us to build a joint model to extract such triplets using a sequence tagging approach.
Ranked #3 on Aspect Sentiment Triplet Extraction on SemEval
1 code implementation • EMNLP 2020 • Zihao Fu, Bei Shi, Wai Lam, Lidong Bing, Zhiyuan Liu
This kind of data is much easier to obtain since it can be produced automatically.
1 code implementation • EMNLP 2020 • Yan Zhang, Ruidan He, Zuozhu Liu, Kwan Hui Lim, Lidong Bing
However, SBERT is trained on corpora with high-quality labeled sentence pairs, which limits its application to tasks where labeled data is extremely scarce.
Ranked #16 on Semantic Textual Similarity on STS16
2 code implementations • EMNLP 2020 • Hai Ye, Qingyu Tan, Ruidan He, Juntao Li, Hwee Tou Ng, Lidong Bing
To improve the robustness of self-training, in this paper we present class-aware feature self-distillation (CFd) to learn discriminative features from PrLMs, in which PrLM features are self-distilled into a feature adaptation module and the features from the same class are more tightly clustered.
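One illustrative reading of the two ingredients named above, distilling frozen PrLM features into the adaptation module and clustering same-class features, is sketched below; this is a heavily simplified assumption, not the paper's exact objective.

```python
import torch
import torch.nn.functional as F

def cfd_style_loss(prlm_feats, adapted_feats, labels):
    """Sketch of class-aware feature self-distillation:
    (1) match the adaptation module's output to the frozen PrLM features;
    (2) pull features of the same class toward their class mean."""
    distill = F.mse_loss(adapted_feats, prlm_feats.detach())
    cluster = adapted_feats.new_zeros(())
    for c in labels.unique():
        members = adapted_feats[labels == c]
        cluster = cluster + ((members - members.mean(0)) ** 2).sum(-1).mean()
    return distill + cluster / labels.unique().numel()
```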
no code implementations • ACL 2020 • Canasai Kruengkrai, Thien Hai Nguyen, Sharifah Mahani Aljunied, Lidong Bing
Exploiting sentence-level labels, which are easy to obtain, is one of the plausible methods to improve low-resource named entity recognition (NER), where token-level labels are costly to annotate.
1 code implementation • 17 May 2020 • Juntao Li, Chang Liu, Jian Wang, Lidong Bing, Hongsong Li, Xiaozhong Liu, Dongyan Zhao, Rui Yan
We manually collect a new and high-quality paired dataset, where each pair contains an unordered product attribute set in the source language and an informative product description in the target language.
1 code implementation • EMNLP 2020 • Liying Cheng, Dekun Wu, Lidong Bing, Yan Zhang, Zhanming Jie, Wei Lu, Luo Si
Previous works on knowledge-to-text generation take as input a few RDF triples or key-value pairs conveying the knowledge of some entities to generate a natural language description.
Ranked #1 on KG-to-Text Generation on ENT-DESC
no code implementations • 7 Apr 2020 • Piji Li, Lidong Bing, Zhongyu Wei, Wai Lam
Different from neural machine translation, in the task of text summarization, salience estimation for words, phrases or sentences is a critical component, since the output summary is a distillation of the input text.
no code implementations • 24 Feb 2020 • Rongxiang Weng, Hao-Ran Wei, Shu-Jian Huang, Heng Yu, Lidong Bing, Weihua Luo, Jia-Jun Chen
The encoder maps the words in the input sentence into a sequence of hidden states, which are then fed into the decoder to generate the output sentence.
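The encoder-decoder flow described above can be made concrete with a minimal sketch: a generic recurrent seq2seq in PyTorch, with attention omitted for brevity; all names and sizes are illustrative.

```python
import torch
import torch.nn as nn

class TinySeq2Seq(nn.Module):
    """Minimal encoder-decoder: the encoder turns the input words into a
    sequence of hidden states; the decoder consumes them (here only via
    the final state) to generate the output sentence step by step."""

    def __init__(self, vocab_size=10000, d_model=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.encoder = nn.GRU(d_model, d_model, batch_first=True)
        self.decoder = nn.GRU(d_model, d_model, batch_first=True)
        self.out = nn.Linear(d_model, vocab_size)

    def forward(self, src_ids, tgt_ids):
        enc_states, last_state = self.encoder(self.embed(src_ids))
        dec_states, _ = self.decoder(self.embed(tgt_ids), last_state)
        return self.out(dec_states)  # next-token logits at each position
```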
no code implementations • ACL 2020 • Qian Yu, Lidong Bing, Qiong Zhang, Wai Lam, Luo Si
We propose an iterative learning framework for handling this challenge via adaptive transfer and augmentation of the training instances with the help of the available user-posed question-answer data.
6 code implementations • 5 Nov 2019 • Haiyun Peng, Lu Xu, Lidong Bing, Fei Huang, Wei Lu, Luo Si
In this paper, we introduce a new subtask under ABSA, named aspect sentiment triplet extraction (ASTE).
Ranked #5 on Aspect Sentiment Triplet Extraction on SemEval
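ASTE systems output (aspect, opinion, sentiment) triplets; a minimal illustration of that output format, with hand-written example values:

```python
from typing import NamedTuple

class ASTETriplet(NamedTuple):
    aspect: str     # the opinion target, e.g. "battery life"
    opinion: str    # the opinion expression, e.g. "amazing"
    sentiment: str  # polarity: "positive" / "negative" / "neutral"

# For the review "The battery life is amazing but the screen scratches easily",
# an ASTE system should output something like:
triplets = [
    ASTETriplet("battery life", "amazing", "positive"),
    ASTETriplet("screen", "scratches easily", "negative"),
]
```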
no code implementations • IJCNLP 2019 • Kaisong Song, Lidong Bing, Wei Gao, Jun Lin, Lujun Zhao, Jiancheng Wang, Changlong Sun, Xiaozhong Liu, Qiong Zhang
Customers ask questions and customer service staff answer them; this is the basic service model of multi-turn customer service (CS) dialogues on e-commerce platforms.
no code implementations • IJCNLP 2019 • Chuang Fan, Hongyu Yan, Jiachen Du, Lin Gui, Lidong Bing, Min Yang, Ruifeng Xu, Ruibin Mao
Emotion cause analysis, which aims to identify the reasons behind emotions, is a key topic in sentiment analysis.
Ranked #2 on Emotion Cause Extraction on ECE
no code implementations • IJCNLP 2019 • Ran Le, Wenpeng Hu, Mingyue Shang, Zhenjun You, Lidong Bing, Dongyan Zhao, Rui Yan
Previous research on dialogue systems generally focuses on conversations between two participants, yet multi-party conversations, which involve more than two participants within one session, present a more complicated but realistic scenario.
1 code implementation • IJCNLP 2019 • Zheng Li, Xin Li, Ying Wei, Lidong Bing, Yu Zhang, Qiang Yang
Joint extraction of aspects and sentiments can be effectively formulated as a sequence labeling problem.
Aspect-Based Sentiment Analysis (ABSA)
Unsupervised Domain Adaptation
no code implementations • IJCNLP 2019 • Jingjing Li, Yifan Gao, Lidong Bing, Irwin King, Michael R. Lyu
Question generation (QG) is the task of generating a question from a reference sentence and a specified answer within the sentence.
1 code implementation • WS 2019 • Xin Li, Lidong Bing, Wenxuan Zhang, Wai Lam
In this paper, we investigate the modeling power of contextualized embeddings from pre-trained language models, e.g., BERT, on the E2E-ABSA task.
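A BERT-based tagger of this kind can be assembled with the Hugging Face transformers library; the unified label set below, which jointly encodes aspect boundaries and sentiment, is one common E2E-ABSA choice and is shown for illustration.

```python
# pip install transformers torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Unified tagging scheme for E2E-ABSA: each token is labeled with the
# aspect boundary and its sentiment jointly (label set is illustrative).
labels = ["O", "B-POS", "I-POS", "B-NEG", "I-NEG", "B-NEU", "I-NEU"]

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-uncased",
    num_labels=len(labels),
    id2label=dict(enumerate(labels)),
)

enc = tokenizer("The battery life is amazing", return_tensors="pt")
logits = model(**enc).logits  # (1, seq_len, num_labels)
pred = logits.argmax(-1)      # per-token tag ids (untrained head here)
```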
no code implementations • IJCNLP 2019 • Mingyue Shang, Piji Li, Zhenxin Fu, Lidong Bing, Dongyan Zhao, Shuming Shi, Rui Yan
Text style transfer task requires the model to transfer a sentence of one style to another style while retaining its original content meaning, which is a challenging problem that has long suffered from the shortage of parallel data.
no code implementations • IJCNLP 2019 • Zihao Wang, Kwun Ping Lai, Piji Li, Lidong Bing, Wai Lam
Therefore, we propose a meta-learning framework that aims at handling infrequent relations with few-shot learning and uncommon entities by using textual descriptions.
1 code implementation • IJCNLP 2019 • Linlin Liu, Xiang Lin, Shafiq Joty, Simeng Han, Lidong Bing
Transition-based top-down parsing with pointer networks has achieved state-of-the-art results in multiple parsing tasks, while having a linear time complexity.
1 code implementation • NAACL 2019 • Wang Chen, Hou Pong Chan, Piji Li, Lidong Bing, Irwin King
For further exploiting the power of extraction and retrieval, we propose a neural-based merging module to combine and re-rank the predicted keyphrases from the enhanced generative model, the extractive model, and the retrieved keyphrases.
no code implementations • 6 Mar 2019 • Piji Li, Zihao Wang, Lidong Bing, Wai Lam
In order to exploit the persona information, we propose a framework based on adversarial variational auto-encoders (aVAE) for persona modeling from the historical tips and reviews of users and items.
no code implementations • 13 Dec 2018 • Shen Gao, Xiuying Chen, Piji Li, Zhaochun Ren, Lidong Bing, Dongyan Zhao, Rui Yan
To tackle this problem, we propose the task of reader-aware abstractive summary generation, which utilizes the reader comments to help the model produce better summary about the main aspect.
Ranked #1 on Reader-Aware Summarization on RASG
1 code implementation • 13 Nov 2018 • Xin Li, Lidong Bing, Piji Li, Wai Lam
Target-based sentiment analysis involves opinion target extraction and target sentiment classification.
Aspect-Based Sentiment Analysis (ABSA)
Sentiment Classification
1 code implementation • EMNLP 2018 • Yi Liao, Lidong Bing, Piji Li, Shuming Shi, Wai Lam, Tong Zhang
For example, an input sequence could be a word sequence, such as a review sentence or advertisement text.
no code implementations • EMNLP 2018 • Di Chen, Jiachen Du, Lidong Bing, Ruifeng Xu
Inferring the agreement/disagreement relation in debates, especially in online debates, is one of the fundamental tasks in argumentation mining.
no code implementations • EMNLP 2018 • Thanapon Noraset, Doug Downey, Lidong Bing
Recurrent neural network language models (RNNLMs) are the current standard-bearer for statistical language modeling.
no code implementations • EMNLP 2018 • Jiachen Du, Wenjie Li, Yulan He, Ruifeng Xu, Lidong Bing, Xuan Wang
Combining the virtues of probabilistic graphical models and neural networks, the Conditional Variational Auto-encoder (CVAE) has shown promising performance in applications such as response generation.
2 code implementations • 8 Sep 2018 • Yifan Gao, Lidong Bing, Piji Li, Irwin King, Michael R. Lyu
We investigate the task of distractor generation for multiple choice reading comprehension questions from examinations.
no code implementations • 10 Jul 2018 • Yifan Gao, Lidong Bing, Wang Chen, Michael R. Lyu, Irwin King
We investigate the difficulty levels of questions in reading comprehension datasets such as SQuAD, and propose a new question generation setting, named Difficulty-controllable Question Generation (DQG).
no code implementations • ACL 2018 • Bei Shi, Zihao Fu, Lidong Bing, Wai Lam
Given reviews from different domains, some existing methods for word embeddings exploit sentiment information, but they cannot produce domain-sensitive embeddings.
2 code implementations • ACL 2018 • Xin Li, Lidong Bing, Wai Lam, Bei Shi
Between the two layers, we propose a component that generates target-specific representations of words in the sentence and incorporates a mechanism for preserving the original contextual information from the RNN layer.
Ranked #19 on Aspect-Based Sentiment Analysis (ABSA) on SemEval 2014 Task 4 Sub Task 2 (Laptop (Acc) metric)
1 code implementation • 2 May 2018 • Xin Li, Lidong Bing, Piji Li, Wai Lam, Zhimou Yang
Aspect Term Extraction (ATE), a key sub-task in Aspect-Based Sentiment Analysis, aims to extract explicit aspect expressions from online user reviews.
no code implementations • NeurIPS 2018 • Haitian Sun, William W. Cohen, Lidong Bing
We propose a technique for declaratively specifying strategies for semi-supervised learning (SSL).
no code implementations • 28 Mar 2018 • Piji Li, Lidong Bing, Wai Lam
For the critic, we combine the maximum likelihood estimator with a well-designed global summary quality estimator, a neural-network-based binary classifier that aims to make the generated summaries indistinguishable from human-written ones.
3 code implementations • EMNLP 2017 • Peng Chen, Zhongqian Sun, Lidong Bing, Wei Yang
We propose a novel framework based on neural networks to identify the sentiment of opinion targets in a comment/review.
no code implementations • EMNLP 2017 • Piji Li, Wai Lam, Lidong Bing, Weiwei Guo, Hang Li
The attention weights are learned automatically by an unsupervised data reconstruction framework which can capture the sentence salience.
no code implementations • WS 2017 • Piji Li, Lidong Bing, Wai Lam
We investigate the problem of reader-aware multi-document summarization (RA-MDS) and introduce a new dataset for this problem.
1 code implementation • EMNLP 2017 • Piji Li, Wai Lam, Lidong Bing, Zihao Wang
We propose a new framework for abstractive text summarization based on a sequence-to-sequence oriented encoder-decoder model equipped with a deep recurrent generative decoder (DRGN).
Ranked #5 on Text Summarization on DUC 2004 Task 1
no code implementations • 1 Aug 2017 • Piji Li, Zihao Wang, Zhaochun Ren, Lidong Bing, Wai Lam
In essence, writing some tips and giving a numerical rating are two facets of a user's product assessment action, expressing the user experience and feelings.
no code implementations • 5 Mar 2017 • Lidong Bing, William W. Cohen, Bhuwan Dhingra
We propose a general approach to modeling semi-supervised learning (SSL) algorithms.
no code implementations • 10 Jun 2016 • Lidong Bing, Bhuwan Dhingra, Kathryn Mazaitis, Jong Hyuk Park, William W. Cohen
We propose a framework to improve performance of distantly-supervised relation extraction, by jointly learning to solve two related tasks: concept-instance extraction and relation extraction.
no code implementations • 4 Jan 2016 • Lidong Bing, Mingyang Ling, Richard C. Wang, William W. Cohen
Distant labeling for information extraction (IE) suffers from noisy training data.
no code implementations • IJCNLP 2015 • Lidong Bing, Piji Li, Yi Liao, Wai Lam, Weiwei Guo, Rebecca J. Passonneau
We propose an abstraction-based multi-document summarization framework that can construct new sentences by exploring more fine-grained syntactic units than sentences, namely, noun/verb phrases.
no code implementations • 28 Apr 2015 • Piji Li, Lidong Bing, Wai Lam, Hang Li, Yi Liao
We propose a new MDS paradigm called reader-aware multi-document summarization (RA-MDS).