Search Results for author: Lidong Bing

Found 121 papers, 76 papers with code

Aspect-based Sentiment Analysis in Question Answering Forums

1 code implementation Findings (EMNLP) 2021 Wenxuan Zhang, Yang Deng, Xin Li, Lidong Bing, Wai Lam

This motivates us to investigate the task of ABSA on QA forums (ABSA-QA), aiming to jointly detect the discussed aspects and their sentiment polarities for a given QA pair.

Aspect-Based Sentiment Analysis (ABSA) Question Answering

Class-Adaptive Self-Training for Relation Extraction with Incompletely Annotated Training Data

1 code implementation 16 Jun 2023 Qingyu Tan, Lu Xu, Lidong Bing, Hwee Tou Ng

We conducted experiments on document-level and biomedical relation extraction datasets, and the results showed that our proposed self-training framework consistently outperforms existing competitive methods on the Re-DocRED and ChemDisgene datasets when the training data are incompletely annotated.

Relation Extraction
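The abstract above relies on the standard self-training recipe: fit a model on labeled data, pseudo-label the unlabeled data, keep only confident predictions, and retrain. A minimal generic sketch of that loop, using a toy nearest-centroid classifier purely for illustration (the function names and the classifier are hypothetical; this is not the authors' class-adaptive method):

```python
# Generic confidence-thresholded self-training loop (illustrative sketch only).
import numpy as np

def centroid_predict(centroids, x):
    """Return (label, confidence); confidence is a softmax over negative distances."""
    labels = list(centroids.keys())
    dists = np.array([np.linalg.norm(x - c) for c in centroids.values()])
    scores = np.exp(-dists) / np.exp(-dists).sum()
    i = int(np.argmax(scores))
    return labels[i], float(scores[i])

def self_train(labeled, unlabeled, threshold=0.7, rounds=3):
    """labeled: list of (vector, label) pairs; unlabeled: list of vectors."""
    labeled = list(labeled)
    pool = list(unlabeled)
    for _ in range(rounds):
        # 1) fit the "teacher": one centroid per class on the current labeled set
        centroids = {}
        for lab in {l for _, l in labeled}:
            pts = [x for x, l in labeled if l == lab]
            centroids[lab] = np.mean(pts, axis=0)
        # 2) pseudo-label confident unlabeled points and move them to the labeled set
        keep = []
        for x in pool:
            lab, conf = centroid_predict(centroids, x)
            if conf >= threshold:
                labeled.append((x, lab))
            else:
                keep.append(x)
        pool = keep
    return labeled, pool
```

The class-adaptive idea in the paper goes further by adjusting the acceptance criterion per class, which a single global threshold like the one above cannot capture.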

Towards Benchmarking and Improving the Temporal Reasoning Capability of Large Language Models

1 code implementation 15 Jun 2023 Qingyu Tan, Hwee Tou Ng, Lidong Bing

In this paper, we introduce a comprehensive probing dataset, TempReason, to evaluate the temporal reasoning capability of large language models.

Benchmarking Question Answering

M3Exam: A Multilingual, Multimodal, Multilevel Benchmark for Examining Large Language Models

1 code implementation 8 Jun 2023 Wenxuan Zhang, Sharifah Mahani Aljunied, Chang Gao, Yew Ken Chia, Lidong Bing

M3Exam exhibits three unique characteristics: (1) multilingualism, encompassing questions from multiple countries that require strong multilingual proficiency and cultural knowledge; (2) multimodality, accounting for the multimodal nature of many exam questions to test the model's multimodal understanding capability; and (3) multilevel structure, featuring exams from three critical educational periods to comprehensively assess a model's proficiency at different levels.

INSTRUCTEVAL: Towards Holistic Evaluation of Instruction-Tuned Large Language Models

2 code implementations 7 Jun 2023 Yew Ken Chia, Pengfei Hong, Lidong Bing, Soujanya Poria

Instruction-tuned large language models have revolutionized natural language processing and have shown great potential in applications such as conversational agents.

Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding

1 code implementation 5 Jun 2023 Hang Zhang, Xin Li, Lidong Bing

For the second challenge, we leverage ImageBind, a universal embedding model aligning multiple modalities as the pre-trained audio encoder, and introduce an Audio Q-former on top of ImageBind to learn reasonable auditory query embeddings for the LLM module.

Language Modelling Text Generation +7

AQE: Argument Quadruplet Extraction via a Quad-Tagging Augmented Generative Approach

1 code implementation 31 May 2023 Jia Guo, Liying Cheng, Wenxuan Zhang, Stanley Kok, Xin Li, Lidong Bing

In this work, we propose, for the first time, a challenging argument quadruplet extraction task (AQE), which provides an all-in-one extraction of four argumentative components, i.e., claims, evidence, evidence types, and stances.

Argument Mining Stance Classification

Sentiment Analysis in the Era of Large Language Models: A Reality Check

1 code implementation 24 May 2023 Wenxuan Zhang, Yue Deng, Bing Liu, Sinno Jialin Pan, Lidong Bing

This paper aims to provide a comprehensive investigation into the capabilities of LLMs in performing various sentiment analysis tasks, from conventional sentiment classification to aspect-based sentiment analysis and multifaceted analysis of subjective texts.

Few-Shot Learning Sentiment Analysis +1

Unlocking Temporal Question Answering for Large Language Models Using Code Execution

1 code implementation 24 May 2023 Xingxuan Li, Liying Cheng, Qingyu Tan, Hwee Tou Ng, Shafiq Joty, Lidong Bing

Our preliminary experiments show that generating intermediate reasoning steps does not always boost the performance of complex temporal question-answering tasks.

Logical Reasoning Question Answering

Is GPT-4 a Good Data Analyst?

1 code implementation 24 May 2023 Liying Cheng, Xingxuan Li, Lidong Bing

As large language models (LLMs) have demonstrated powerful capabilities across many domains and tasks, including context understanding, code generation, language generation, and data storytelling, many data analysts may be concerned that their jobs will be replaced by AI.

Code Generation Text Generation

Domain-Expanded ASTE: Rethinking Generalization in Aspect Sentiment Triplet Extraction

no code implementations 23 May 2023 Yew Ken Chia, Hui Chen, Wei Han, Guizhen Chen, Sharifah Mahani Aljunied, Soujanya Poria, Lidong Bing

Aspect Sentiment Triplet Extraction (ASTE) is a subtask of Aspect-Based Sentiment Analysis (ABSA) that considers each opinion term, their expressed sentiment, and the corresponding aspect targets.

Aspect-Based Sentiment Analysis (ABSA) Aspect Sentiment Triplet Extraction +1

mPMR: A Multilingual Pre-trained Machine Reader at Scale

1 code implementation 23 May 2023 Weiwen Xu, Xin Li, Wai Lam, Lidong Bing

mPMR aims to guide multilingual pre-trained language models (mPLMs) to perform natural language understanding (NLU) including both sequence classification and span extraction in multiple languages.

Classification Machine Reading Comprehension +2

Improving Self-training for Cross-lingual Named Entity Recognition with Contrastive and Prototype Learning

1 code implementation 23 May 2023 Ran Zhou, Xin Li, Lidong Bing, Erik Cambria, Chunyan Miao

In cross-lingual named entity recognition (NER), self-training is commonly used to bridge the linguistic gap by training on pseudo-labeled target-language data.

Cross-Lingual NER named-entity-recognition +4

Better Sampling of Negatives for Distantly Supervised Named Entity Recognition

1 code implementation 22 May 2023 Lu Xu, Lidong Bing, Wei Lu

Distantly supervised named entity recognition (DS-NER) has been proposed to exploit the automatically labeled training data instead of human annotations.

named-entity-recognition Named Entity Recognition +1

Gradient-Boosted Decision Tree for Listwise Context Model in Multimodal Review Helpfulness Prediction

1 code implementation 22 May 2023 Thong Nguyen, Xiaobao Wu, Xinshuai Dong, Anh Tuan Luu, Cong-Duy Nguyen, Zhen Hai, Lidong Bing

Multimodal Review Helpfulness Prediction (MRHP) aims to rank product reviews based on predicted helpfulness scores and has been widely applied in e-commerce by presenting customers with useful reviews.

Are Large Language Models Good Evaluators for Abstractive Summarization?

no code implementations 22 May 2023 Chenhui Shen, Liying Cheng, Yang You, Lidong Bing

We also draw attention to LLMs' deteriorating evaluation capability as the quality of summaries rises.

Abstractive Text Summarization

Chain of Knowledge: A Framework for Grounding Large Language Models with Structured Knowledge Bases

no code implementations 22 May 2023 Xingxuan Li, Ruochen Zhao, Yew Ken Chia, Bosheng Ding, Lidong Bing, Shafiq Joty, Soujanya Poria

We introduce Chain of Knowledge (CoK), a framework that augments large language models with structured knowledge bases to improve factual correctness and reduce hallucination.

Language Modelling Large Language Model

Enhancing Few-shot NER with Prompt Ordering based Data Augmentation

no code implementations 19 May 2023 Huiming Wang, Liying Cheng, Wenxuan Zhang, De Wen Soh, Lidong Bing

Recently, data augmentation (DA) methods have been proven to be effective for pre-trained language models (PLMs) in low-resource settings, including few-shot named entity recognition (NER).

Data Augmentation few-shot-ner +4

Information Screening whilst Exploiting! Multimodal Relation Extraction with Feature Denoising and Multimodal Topic Modeling

1 code implementation 19 May 2023 Shengqiong Wu, Hao Fei, Yixin Cao, Lidong Bing, Tat-Seng Chua

First, we represent the fine-grained semantic structures of the input image and text with the visual and textual scene graphs, which are further fused into a unified cross-modal graph (CMG).

Denoising Relation Extraction

Zero-Shot Text Classification via Self-Supervised Tuning

1 code implementation 19 May 2023 Chaoqun Liu, Wenxuan Zhang, Guizhen Chen, Xiaobao Wu, Anh Tuan Luu, Chip Hong Chang, Lidong Bing

In this work, we propose a new paradigm based on self-supervised learning to solve zero-shot text classification tasks by tuning the language models with unlabeled data, called self-supervised tuning.

Self-Supervised Learning Sentiment Analysis +4

Reasoning Implicit Sentiment with Chain-of-Thought Prompting

1 code implementation 18 May 2023 Hao Fei, Bobo Li, Qian Liu, Lidong Bing, Fei Li, Tat-Seng Chua

While sentiment analysis systems try to determine the sentiment polarities of given targets based on the key opinion expressions in input texts, in implicit sentiment analysis (ISA) the opinion cues come in an implicit and obscure manner.

Common Sense Reasoning Sentiment Analysis

Easy-to-Hard Learning for Information Extraction

1 code implementation 16 May 2023 Chang Gao, Wenxuan Zhang, Wai Lam, Lidong Bing

Information extraction (IE) systems aim to automatically extract structured information, such as named entities, relations between entities, and events, from unstructured texts.

Bidirectional Generative Framework for Cross-domain Aspect-based Sentiment Analysis

1 code implementation 16 May 2023 Yue Deng, Wenxuan Zhang, Sinno Jialin Pan, Lidong Bing

Cross-domain aspect-based sentiment analysis (ABSA) aims to perform various fine-grained sentiment analysis tasks on a target domain by transferring knowledge from a source domain.

Aspect-Based Sentiment Analysis (ABSA) Data Augmentation +1

A Hierarchical Encoding-Decoding Scheme for Abstractive Multi-document Summarization

no code implementations 15 May 2023 Chenhui Shen, Liying Cheng, Xuan-Phi Nguyen, Yang You, Lidong Bing

Pre-trained language models (PLMs) have accomplished impressive achievements in abstractive single-document summarization (SDS).

Document Summarization Multi-Document Summarization

Verify-and-Edit: A Knowledge-Enhanced Chain-of-Thought Framework

1 code implementation 5 May 2023 Ruochen Zhao, Xingxuan Li, Shafiq Joty, Chengwei Qin, Lidong Bing

As large language models (LLMs) have become the norm in NLP, demonstrating good performance in generation and reasoning tasks, one of their most serious disadvantages is the lack of factual correctness.

Open-Domain Question Answering

LLM-Adapters: An Adapter Family for Parameter-Efficient Fine-Tuning of Large Language Models

2 code implementations 4 Apr 2023 Zhiqiang Hu, Yihuai Lan, Lei Wang, Wanyu Xu, Ee-Peng Lim, Roy Ka-Wei Lee, Lidong Bing, Xing Xu, Soujanya Poria

To enable further research on PEFT methods of LLMs, this paper presents LLM-Adapters, an easy-to-use framework that integrates various adapters into LLMs and can execute these adapter-based PEFT methods of LLMs for different tasks.

Language Modelling
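Adapter-based PEFT methods of the kind LLM-Adapters integrates typically insert small bottleneck modules into a frozen transformer and train only those. A minimal NumPy sketch of one such bottleneck adapter (hypothetical class name and dimensions, not the framework's actual API):

```python
# Bottleneck adapter sketch: down-project, nonlinearity, up-project, residual.
import numpy as np

class BottleneckAdapter:
    def __init__(self, d_model, d_bottleneck, rng=None):
        rng = rng or np.random.default_rng(0)
        self.W_down = rng.normal(0, 0.02, (d_model, d_bottleneck))
        # Zero-initialized up-projection: the adapter starts as an identity map,
        # so inserting it does not perturb the frozen pretrained model.
        self.W_up = np.zeros((d_bottleneck, d_model))

    def __call__(self, h):
        # h: (seq_len, d_model) hidden states from the frozen layer
        z = np.maximum(h @ self.W_down, 0.0)  # down-project + ReLU
        return h + z @ self.W_up              # up-project + residual connection
```

Only `W_down` and `W_up` would be updated during fine-tuning; the transformer weights stay fixed, which is what makes the method parameter-efficient.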

Towards Integration of Discriminability and Robustness for Document-Level Relation Extraction

1 code implementation 3 Apr 2023 Jia Guo, Stanley Kok, Lidong Bing

In addition, we introduce two new data regimes to mimic more realistic scenarios with annotation errors and evaluate our sampling strategy.

Contrastive Learning Document-level Relation Extraction +1

Is GPT-3 a Good Data Annotator?

no code implementations 20 Dec 2022 Bosheng Ding, Chengwei Qin, Linlin Liu, Yew Ken Chia, Shafiq Joty, Boyang Li, Lidong Bing

In this paper, we evaluate the performance of GPT-3 as a data annotator by comparing it with traditional data annotation methods and analyzing its output on a range of tasks.

Language Modelling

Does GPT-3 Demonstrate Psychopathy? Evaluating Large Language Models from a Psychological Perspective

no code implementations 20 Dec 2022 Xingxuan Li, Yutong Li, Shafiq Joty, Linlin Liu, Fei Huang, Lin Qiu, Lidong Bing

On the basis of these findings, we recommend applying more systematic and comprehensive psychological metrics to further evaluate and improve the safety of LLMs.

From Clozing to Comprehending: Retrofitting Pre-trained Masked Language Model to Pre-trained Machine Reader

1 code implementation 9 Dec 2022 Weiwen Xu, Xin Li, Wenxuan Zhang, Meng Zhou, Wai Lam, Luo Si, Lidong Bing

We present Pre-trained Machine Reader (PMR), a novel method for retrofitting pre-trained masked language models (MLMs) to pre-trained machine reading comprehension (MRC) models without acquiring labeled data.

Classification Extractive Question-Answering +6

A Dataset for Hyper-Relational Extraction and a Cube-Filling Approach

1 code implementation 18 Nov 2022 Yew Ken Chia, Lidong Bing, Sharifah Mahani Aljunied, Luo Si, Soujanya Poria

Hence, we propose CubeRE, a cube-filling model that is inspired by table-filling approaches and explicitly considers the interaction between relation triplets and qualifiers.

graph construction Hyper-Relational Extraction

ConNER: Consistency Training for Cross-lingual Named Entity Recognition

1 code implementation 17 Nov 2022 Ran Zhou, Xin Li, Lidong Bing, Erik Cambria, Luo Si, Chunyan Miao

We propose ConNER as a novel consistency training framework for cross-lingual NER, which comprises: (1) translation-based consistency training on unlabeled target-language data, and (2) dropout-based consistency training on labeled source-language data.

Cross-Lingual NER Knowledge Distillation +3

Towards Robust Low-Resource Fine-Tuning with Multi-View Compressed Representations

1 code implementation 16 Nov 2022 Linlin Liu, Xingxuan Li, Megh Thakkar, Xin Li, Shafiq Joty, Luo Si, Lidong Bing

Due to their huge number of parameters, fine-tuning of pretrained language models (PLMs) is prone to overfitting in low-resource scenarios.

Adaptive Contrastive Learning on Multimodal Transformer for Review Helpfulness Predictions

1 code implementation 7 Nov 2022 Thong Nguyen, Xiaobao Wu, Anh-Tuan Luu, Cong-Duy Nguyen, Zhen Hai, Lidong Bing

To overcome the aforementioned issues, we propose Multimodal Contrastive Learning for the Multimodal Review Helpfulness Prediction (MRHP) problem, concentrating on the mutual information between input modalities to explicitly model cross-modal relations.

Contrastive Learning
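The contrastive objective referred to above is commonly instantiated as an InfoNCE-style loss over a batch of aligned pairs: each matched pair is pulled together while the other in-batch pairings serve as negatives. A generic NumPy sketch (not the paper's adaptive variant):

```python
# InfoNCE-style contrastive loss over a batch of aligned embedding pairs.
import numpy as np

def info_nce(text_emb, image_emb, temperature=0.1):
    """text_emb, image_emb: (batch, dim) arrays; row i of each is a matched pair."""
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    v = image_emb / np.linalg.norm(image_emb, axis=1, keepdims=True)
    logits = t @ v.T / temperature                 # cosine similarity matrix
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # The i-th text should match the i-th image, so the diagonal is the positive.
    return float(-np.mean(np.diag(log_probs)))
```

Correctly matched batches yield a near-zero loss, while permuted (mismatched) pairings are penalized, which is the mutual-information-maximizing behavior the abstract describes.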

SentBS: Sentence-level Beam Search for Controllable Summarization

1 code implementation 26 Oct 2022 Chenhui Shen, Liying Cheng, Lidong Bing, Yang You, Luo Si

A wide range of control perspectives have been explored in controllable text generation.

Text Generation

Retrofitting Multilingual Sentence Embeddings with Abstract Meaning Representation

1 code implementation 18 Oct 2022 Deng Cai, Xin Li, Jackie Chun-Sing Ho, Lidong Bing, Wai Lam

Unlike most prior work that only evaluates the ability to measure semantic similarity, we present a thorough evaluation of existing multilingual sentence embeddings and our improved versions, which include a collection of five transfer tasks in different downstream applications.

Semantic Similarity Semantic Textual Similarity +1

PeerDA: Data Augmentation via Modeling Peer Relation for Span Identification Tasks

1 code implementation 17 Oct 2022 Weiwen Xu, Xin Li, Yang Deng, Wai Lam, Lidong Bing

Specifically, a novel Peer Data Augmentation (PeerDA) approach is proposed which employs span pairs with the PR relation as the augmentation data for training.

Data Augmentation

Informative Text Generation from Knowledge Triples

no code implementations 26 Sep 2022 Zihao Fu, Yijiang River Dong, Lidong Bing, Wai Lam

With the development of the encoder-decoder architecture, researchers are able to study text generation tasks with broader types of data.

Text Generation

SANCL: Multimodal Review Helpfulness Prediction with Selective Attention and Natural Contrastive Learning

1 code implementation COLING 2022 Wei Han, Hui Chen, Zhen Hai, Soujanya Poria, Lidong Bing

With the boom of e-commerce, Multimodal Review Helpfulness Prediction (MRHP), which aims to sort product reviews according to predicted helpfulness scores, has become a research hotspot.

Contrastive Learning

IAM: A Comprehensive and Large-Scale Dataset for Integrated Argument Mining Tasks

1 code implementation ACL 2022 Liying Cheng, Lidong Bing, Ruidan He, Qian Yu, Yan Zhang, Luo Si

Traditionally, a debate usually requires a manual preparation process, including reading plenty of articles, selecting the claims, identifying the stances of the claims, seeking the evidence for the claims, etc.

Claim-Evidence Pair Extraction (CEPE) Claim Extraction with Stance Classification (CESC) +1

Document-Level Relation Extraction with Adaptive Focal Loss and Knowledge Distillation

1 code implementation Findings (ACL) 2022 Qingyu Tan, Ruidan He, Lidong Bing, Hwee Tou Ng

Our model consistently outperforms strong baselines and its performance exceeds the previous SOTA by 1.36 F1 and 1.46 Ign_F1 score on the DocRED leaderboard.

Document-level Relation Extraction Knowledge Distillation

A Survey on Aspect-Based Sentiment Analysis: Tasks, Methods, and Challenges

1 code implementation 2 Mar 2022 Wenxuan Zhang, Xin Li, Yang Deng, Lidong Bing, Wai Lam

More specifically, we provide a new taxonomy for ABSA which organizes existing studies from the axes of concerned sentiment elements, with an emphasis on recent advances of compound ABSA tasks.

Aspect-Based Sentiment Analysis (ABSA)

Enhancing Multilingual Language Model with Massive Multilingual Knowledge Triples

1 code implementation 22 Nov 2021 Linlin Liu, Xin Li, Ruidan He, Lidong Bing, Shafiq Joty, Luo Si

In this work, we explore methods to make better use of the multilingual annotation and language agnostic property of KG triples, and present novel knowledge based multilingual language models (KMLMs) trained directly on the knowledge triples.

Knowledge Graphs Language Modelling +7

MReD: A Meta-Review Dataset for Structure-Controllable Text Generation

1 code implementation Findings (ACL) 2022 Chenhui Shen, Liying Cheng, Ran Zhou, Lidong Bing, Yang You, Luo Si

A more useful text generator should leverage both the input text and the control signal to guide the generation, which can only be built with a deep understanding of the domain knowledge.

Text Generation Text Summarization

Aspect Sentiment Quad Prediction as Paraphrase Generation

1 code implementation EMNLP 2021 Wenxuan Zhang, Yang Deng, Xin Li, Yifei Yuan, Lidong Bing, Wai Lam

Aspect-based sentiment analysis (ABSA) has been extensively studied in recent years, which typically involves four fundamental sentiment elements, including the aspect category, aspect term, opinion term, and sentiment polarity.

Aspect-Based Sentiment Analysis (ABSA) Paraphrase Generation

Multilingual AMR Parsing with Noisy Knowledge Distillation

1 code implementation Findings (EMNLP) 2021 Deng Cai, Xin Li, Jackie Chun-Sing Ho, Lidong Bing, Wai Lam

We study multilingual AMR parsing from the perspective of knowledge distillation, where the aim is to learn and improve a multilingual AMR parser by using an existing English parser as its teacher.

AMR Parsing Knowledge Distillation
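Knowledge distillation in this teacher-student setting generally means training the student against the teacher's temperature-softened output distribution rather than hard labels. A generic sketch of that loss (illustrative only, not the authors' noisy multilingual setup):

```python
# Distillation loss: KL divergence between temperature-softened distributions.
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on softened distributions, scaled by T^2
    (the usual correction so gradient magnitudes stay comparable across T)."""
    p = softmax(teacher_logits / T)
    q = softmax(student_logits / T)
    return float(np.sum(p * (np.log(p) - np.log(q))) * T * T)
```

The loss is zero when the student exactly reproduces the teacher's distribution and grows as the two diverge; the temperature exposes the teacher's "dark knowledge" about relative probabilities of non-argmax outputs.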

MulDA: A Multilingual Data Augmentation Framework for Low-Resource Cross-Lingual NER

no code implementations ACL 2021 Linlin Liu, Bosheng Ding, Lidong Bing, Shafiq Joty, Luo Si, Chunyan Miao

With the source-language data as well as the translated data, a generation-based multilingual data augmentation method is introduced to further increase diversity by generating synthetic labeled data in multiple languages.

Cross-Lingual NER Data Augmentation +5

Multi-perspective Coherent Reasoning for Helpfulness Prediction of Multimodal Reviews

1 code implementation ACL 2021 Junhao Liu, Zhen Hai, Min Yang, Lidong Bing

In addition, we also devise an intra-review coherent reasoning module to identify the coherence between the text content and images of the review, which is a piece of strong evidence for review helpfulness prediction.

Argument Pair Extraction via Attention-guided Multi-Layer Multi-Cross Encoding

1 code implementation ACL 2021 Liying Cheng, Tianyu Wu, Lidong Bing, Luo Si

Prior research treats this task as a sequence labeling problem and a binary classification problem on two directly concatenated passages, which fails to fully utilize the unique characteristics and inherent relations of the two passages.

Argument Pair Extraction (APE) Binary Classification

Learning Span-Level Interactions for Aspect Sentiment Triplet Extraction

2 code implementations ACL 2021 Lu Xu, Yew Ken Chia, Lidong Bing

Aspect Sentiment Triplet Extraction (ASTE) is the most recent subtask of ABSA which outputs triplets of an aspect target, its associated sentiment, and the corresponding opinion term.

Aspect Sentiment Triplet Extraction Term Extraction

On the Effectiveness of Adapter-based Tuning for Pretrained Language Model Adaptation

no code implementations ACL 2021 Ruidan He, Linlin Liu, Hai Ye, Qingyu Tan, Bosheng Ding, Liying Cheng, Jia-Wei Low, Lidong Bing, Luo Si

It works by adding light-weight adapter modules to a pretrained language model (PrLM) and only updating the parameters of adapter modules when learning on a downstream task.

Language Modelling

Better Feature Integration for Named Entity Recognition

1 code implementation NAACL 2021 Lu Xu, Zhanming Jie, Wei Lu, Lidong Bing

We believe this is because the two types of features, the contextual information captured by the linear sequences and the structured information captured by the dependency trees, may complement each other.

named-entity-recognition Named Entity Recognition +1

Dynamic Topic Tracker for KB-to-Text Generation

no code implementations COLING 2020 Zihao Fu, Lidong Bing, Wai Lam, Shoaib Jameel

Recently, many KB-to-text generation tasks have been proposed to bridge the gap between knowledge bases and natural language by directly converting a group of knowledge base triples into human-readable sentences.

Text Generation

Unsupervised Domain Adaptation of a Pretrained Cross-Lingual Language Model

1 code implementation 23 Nov 2020 Juntao Li, Ruidan He, Hai Ye, Hwee Tou Ng, Lidong Bing, Rui Yan

Experimental results show that our proposed method achieves significant performance improvements over the state-of-the-art pretrained cross-lingual language model in the CLCD setting.

Language Modelling Mutual Information Estimation +1

Unsupervised Cross-lingual Adaptation for Sequence Tagging and Beyond

no code implementations 23 Oct 2020 Xin Li, Lidong Bing, Wenxuan Zhang, Zheng Li, Wai Lam

Cross-lingual adaptation with multilingual pre-trained language models (mPTLMs) mainly consists of two lines of works: zero-shot approach and translation-based approach, which have been studied extensively on the sequence-level tasks.

Cross-Lingual Transfer Translation

Aspect Based Sentiment Analysis with Aspect-Specific Opinion Spans

1 code implementation EMNLP 2020 Lu Xu, Lidong Bing, Wei Lu, Fei Huang

Such a design allows the model to extract aspect-specific opinion spans and then evaluate sentiment polarity by exploiting the extracted opinion features.

Extract Aspect

Position-Aware Tagging for Aspect Sentiment Triplet Extraction

4 code implementations EMNLP 2020 Lu Xu, Hao Li, Wei Lu, Lidong Bing

Our observation is that the three elements within a triplet are highly related to each other, and this motivates us to build a joint model to extract such triplets using a sequence tagging approach.

Aspect Sentiment Triplet Extraction

An Unsupervised Sentence Embedding Method by Mutual Information Maximization

1 code implementation EMNLP 2020 Yan Zhang, Ruidan He, Zuozhu Liu, Kwan Hui Lim, Lidong Bing

However, SBERT is trained on corpus with high-quality labeled sentence pairs, which limits its application to tasks where labeled data is extremely scarce.

Clustering Self-Supervised Learning +4

Feature Adaptation of Pre-Trained Language Models across Languages and Domains with Robust Self-Training

2 code implementations EMNLP 2020 Hai Ye, Qingyu Tan, Ruidan He, Juntao Li, Hwee Tou Ng, Lidong Bing

To improve the robustness of self-training, in this paper we present class-aware feature self-distillation (CFd) to learn discriminative features from PrLMs, in which PrLM features are self-distilled into a feature adaptation module and the features from the same class are more tightly clustered.

Text Classification Unsupervised Domain Adaptation

Improving Low-Resource Named Entity Recognition using Joint Sentence and Token Labeling

no code implementations ACL 2020 Canasai Kruengkrai, Thien Hai Nguyen, Sharifah Mahani Aljunied, Lidong Bing

Exploiting sentence-level labels, which are easy to obtain, is one of the plausible methods to improve low-resource named entity recognition (NER), where token-level labels are costly to annotate.

Binary Classification Classification +6

Cross-Lingual Low-Resource Set-to-Description Retrieval for Global E-Commerce

1 code implementation 17 May 2020 Juntao Li, Chang Liu, Jian Wang, Lidong Bing, Hongsong Li, Xiaozhong Liu, Dongyan Zhao, Rui Yan

We manually collect a new and high-quality paired dataset, where each pair contains an unordered product attribute set in the source language and an informative product description in the target language.

Cross-Lingual Information Retrieval Retrieval

ENT-DESC: Entity Description Generation by Exploring Knowledge Graph

1 code implementation EMNLP 2020 Liying Cheng, Dekun Wu, Lidong Bing, Yan Zhang, Zhanming Jie, Wei Lu, Luo Si

Previous works on knowledge-to-text generation take as input a few RDF triples or key-value pairs conveying the knowledge of some entities to generate a natural language description.

Graph-to-Sequence KG-to-Text Generation +2

Salience Estimation with Multi-Attention Learning for Abstractive Text Summarization

no code implementations 7 Apr 2020 Piji Li, Lidong Bing, Zhongyu Wei, Wai Lam

Different from neural machine translation, in the task of text summarization, salience estimation for words, phrases or sentences is a critical component, since the output summary is a distillation of the input text.

Abstractive Text Summarization Machine Translation +1

GRET: Global Representation Enhanced Transformer

no code implementations 24 Feb 2020 Rongxiang Weng, Hao-Ran Wei, Shu-Jian Huang, Heng Yu, Lidong Bing, Weihua Luo, Jia-Jun Chen

The encoder maps the words in the input sentence into a sequence of hidden states, which are then fed into the decoder to generate the output sentence.

Machine Translation Text Generation +2

Review-based Question Generation with Adaptive Instance Transfer and Augmentation

no code implementations ACL 2020 Qian Yu, Lidong Bing, Qiong Zhang, Wai Lam, Luo Si

We propose an iterative learning framework for handling this challenge via adaptive transfer and augmentation of the training instances with the help of the available user-posed question-answer data.

Question Generation Question-Generation

Using Customer Service Dialogues for Satisfaction Analysis with Context-Assisted Multiple Instance Learning

no code implementations IJCNLP 2019 Kaisong Song, Lidong Bing, Wei Gao, Jun Lin, Lujun Zhao, Jiancheng Wang, Changlong Sun, Xiaozhong Liu, Qiong Zhang

Customers ask questions and customer service staff answer them; this is the basic service model of multi-turn customer service (CS) dialogues on e-commerce platforms.

Multiple Instance Learning

Who Is Speaking to Whom? Learning to Identify Utterance Addressee in Multi-Party Conversations

no code implementations IJCNLP 2019 Ran Le, Wenpeng Hu, Mingyue Shang, Zhenjun You, Lidong Bing, Dongyan Zhao, Rui Yan

Previous research on dialogue systems generally focuses on the conversation between two participants, yet multi-party conversations which involve more than two participants within one session bring up a more complicated but realistic scenario.

Improving Question Generation With to the Point Context

no code implementations IJCNLP 2019 Jingjing Li, Yifan Gao, Lidong Bing, Irwin King, Michael R. Lyu

Question generation (QG) is the task of generating a question from a reference sentence and a specified answer within the sentence.

Question Generation Question-Generation

Semi-supervised Text Style Transfer: Cross Projection in Latent Space

no code implementations IJCNLP 2019 Mingyue Shang, Piji Li, Zhenxin Fu, Lidong Bing, Dongyan Zhao, Shuming Shi, Rui Yan

Text style transfer task requires the model to transfer a sentence of one style to another style while retaining its original content meaning, which is a challenging problem that has long suffered from the shortage of parallel data.

Style Transfer Text Style Transfer

Tackling Long-Tailed Relations and Uncommon Entities in Knowledge Graph Completion

no code implementations IJCNLP 2019 Zihao Wang, Kwun Ping Lai, Piji Li, Lidong Bing, Wai Lam

Therefore, we propose a meta-learning framework that aims at handling infrequent relations with few-shot learning and uncommon entities by using textual descriptions.

Few-Shot Learning

Hierarchical Pointer Net Parsing

1 code implementation IJCNLP 2019 Linlin Liu, Xiang Lin, Shafiq Joty, Simeng Han, Lidong Bing

Transition-based top-down parsing with pointer networks has achieved state-of-the-art results in multiple parsing tasks, while having a linear time complexity.

Discourse Parsing Inductive Bias

An Integrated Approach for Keyphrase Generation via Exploring the Power of Retrieval and Extraction

1 code implementation NAACL 2019 Wang Chen, Hou Pong Chan, Piji Li, Lidong Bing, Irwin King

For further exploiting the power of extraction and retrieval, we propose a neural-based merging module to combine and re-rank the predicted keyphrases from the enhanced generative model, the extractive model, and the retrieved keyphrases.

Keyphrase Generation Multi-Task Learning +1

Persona-Aware Tips Generation

no code implementations 6 Mar 2019 Piji Li, ZiHao Wang, Lidong Bing, Wai Lam

In order to exploit the persona information, we propose a framework based on adversarial variational auto-encoders (aVAE) for persona modeling from the historical tips and reviews of users and items.

Abstractive Text Summarization by Incorporating Reader Comments

no code implementations 13 Dec 2018 Shen Gao, Xiuying Chen, Piji Li, Zhaochun Ren, Lidong Bing, Dongyan Zhao, Rui Yan

To tackle this problem, we propose the task of reader-aware abstractive summary generation, which utilizes reader comments to help the model produce a better summary of the main aspect.

Reader-Aware Summarization

Hybrid Neural Attention for Agreement/Disagreement Inference in Online Debates

no code implementations EMNLP 2018 Di Chen, Jiachen Du, Lidong Bing, Ruifeng Xu

Inferring the agreement/disagreement relation in debates, especially in online debates, is one of the fundamental tasks in argumentation mining.

Natural Language Inference Sentiment Analysis

Variational Autoregressive Decoder for Neural Response Generation

no code implementations EMNLP 2018 Jiachen Du, Wenjie Li, Yulan He, Ruifeng Xu, Lidong Bing, Xuan Wang

Combining the virtues of probabilistic graphical models and neural networks, the Conditional Variational Auto-encoder (CVAE) has shown promising performance in applications such as response generation.

Response Generation

Generating Distractors for Reading Comprehension Questions from Real Examinations

2 code implementations 8 Sep 2018 Yifan Gao, Lidong Bing, Piji Li, Irwin King, Michael R. Lyu

We investigate the task of distractor generation for multiple choice reading comprehension questions from examinations.

Distractor Generation Multiple-choice +1

Difficulty Controllable Generation of Reading Comprehension Questions

no code implementations 10 Jul 2018 Yifan Gao, Lidong Bing, Wang Chen, Michael R. Lyu, Irwin King

We investigate the difficulty levels of questions in reading comprehension datasets such as SQuAD, and propose a new question generation setting, named Difficulty-controllable Question Generation (DQG).

Question Generation Question-Generation +1

Learning Domain-Sensitive and Sentiment-Aware Word Embeddings

no code implementations ACL 2018 Bei Shi, Zihao Fu, Lidong Bing, Wai Lam

Given reviews from different domains, some existing methods for word embeddings exploit sentiment information, but they cannot produce domain-sensitive embeddings.

Data Augmentation General Classification +3

Transformation Networks for Target-Oriented Sentiment Classification

2 code implementations ACL 2018 Xin Li, Lidong Bing, Wai Lam, Bei Shi

Between the two layers, we propose a component to generate target-specific representations of words in the sentence, and incorporate a mechanism for preserving the original contextual information from the RNN layer.

Aspect-Based Sentiment Analysis (ABSA) Classification +2

Aspect Term Extraction with History Attention and Selective Transformation

1 code implementation 2 May 2018 Xin Li, Lidong Bing, Piji Li, Wai Lam, Zhimou Yang

Aspect Term Extraction (ATE), a key sub-task in Aspect-Based Sentiment Analysis, aims to extract explicit aspect expressions from online user reviews.

Aspect-Based Sentiment Analysis (ABSA) Term Extraction

Actor-Critic based Training Framework for Abstractive Summarization

no code implementations 28 Mar 2018 Piji Li, Lidong Bing, Wai Lam

For the critic, we combine the maximum likelihood estimator with a well-designed global summary quality estimator, a neural-network-based binary classifier that aims to make the generated summaries indistinguishable from human-written ones.

Abstractive Text Summarization

Deep Recurrent Generative Decoder for Abstractive Text Summarization

1 code implementation EMNLP 2017 Piji Li, Wai Lam, Lidong Bing, ZiHao Wang

We propose a new framework for abstractive text summarization based on a sequence-to-sequence oriented encoder-decoder model equipped with a deep recurrent generative decoder (DRGN).

Abstractive Text Summarization Variational Inference

Neural Rating Regression with Abstractive Tips Generation for Recommendation

no code implementations 1 Aug 2017 Piji Li, ZiHao Wang, Zhaochun Ren, Lidong Bing, Wai Lam

In essence, writing some tips and giving a numerical rating are two facets of a user's product assessment action, expressing the user experience and feelings.


Bootstrapping Distantly Supervised IE using Joint Learning and Small Well-structured Corpora

no code implementations 10 Jun 2016 Lidong Bing, Bhuwan Dhingra, Kathryn Mazaitis, Jong Hyuk Park, William W. Cohen

We propose a framework to improve performance of distantly-supervised relation extraction, by jointly learning to solve two related tasks: concept-instance extraction and relation extraction.

Relation Extraction

Distant IE by Bootstrapping Using Lists and Document Structure

no code implementations 4 Jan 2016 Lidong Bing, Mingyang Ling, Richard C. Wang, William W. Cohen

Distant labeling for information extraction (IE) suffers from noisy training data.

Abstractive Multi-Document Summarization via Phrase Selection and Merging

no code implementations IJCNLP 2015 Lidong Bing, Piji Li, Yi Liao, Wai Lam, Weiwei Guo, Rebecca J. Passonneau

We propose an abstraction-based multi-document summarization framework that can construct new sentences by exploring more fine-grained syntactic units than sentences, namely, noun/verb phrases.

Document Summarization Multi-Document Summarization
