Dialogue-Based Relation Extraction

ACL 2020 · Dian Yu, Kai Sun, Claire Cardie, Dong Yu

We present the first human-annotated dialogue-based relation extraction (RE) dataset DialogRE, aiming to support the prediction of relation(s) between two arguments that appear in a dialogue. We further offer DialogRE as a platform for studying cross-sentence RE as most facts span multiple sentences. We argue that speaker-related information plays a critical role in the proposed task, based on an analysis of similarities and differences between dialogue-based and traditional RE tasks. Considering the timeliness of communication in a dialogue, we design a new metric to evaluate the performance of RE methods in a conversational setting and investigate the performance of several representative RE methods on DialogRE. Experimental results demonstrate that a speaker-aware extension on the best-performing model leads to gains in both the standard and conversational evaluation settings. DialogRE is available at https://dataset.org/dialogre/.
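
The abstract describes a speaker-aware extension of the best-performing encoder. Below is a minimal sketch of how such a speaker-aware input might be constructed, assuming dialogue turns are strings of the form "Speaker N: utterance" and that speaker arguments are tied to their utterances via reserved marker tokens. The marker token names and the helper function are illustrative assumptions, not the authors' exact implementation.

```python
# Sketch: replace speaker-name arguments with marker tokens throughout the dialogue
# so the encoder can link argument mentions to the corresponding utterances.
# Assumption: turns look like "Speaker 1: ...", markers "[unused1]"/"[unused2]" are illustrative.

from typing import List, Tuple

def build_speaker_aware_input(turns: List[str], arg_x: str, arg_y: str) -> Tuple[str, str, str]:
    """Return the rewritten dialogue and the (possibly rewritten) argument strings."""
    markers = {arg_x: "[unused1]", arg_y: "[unused2]"}
    new_turns = []
    for turn in turns:
        for name, marker in markers.items():
            if name.startswith("Speaker"):  # only speaker arguments are rewritten
                turn = turn.replace(name, marker)
        new_turns.append(turn)
    dialogue = " ".join(new_turns)
    x = markers[arg_x] if arg_x.startswith("Speaker") else arg_x
    y = markers[arg_y] if arg_y.startswith("Speaker") else arg_y
    # A typical encoder input would then be "[CLS] dialogue [SEP] x [SEP] y [SEP]".
    return dialogue, x, y

# Example:
# build_speaker_aware_input(["Speaker 1: Hi Pheebs!", "Speaker 2: Hey!"], "Speaker 2", "Pheebs")
```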


Datasets


Introduced in the Paper:

DialogRE

Used in the Paper:

DocRED, KnowledgeNet

Results from the Paper


Task                         Dataset    Model    Metric     Value   Global Rank
Dialog Relation Extraction   DialogRE   BERTS    F1 (v1)    61.2    # 8
Dialog Relation Extraction   DialogRE   BERTS    F1c (v1)   55.4    # 6
Dialog Relation Extraction   DialogRE   BiLSTM   F1 (v1)    48.6    # 10
Dialog Relation Extraction   DialogRE   BiLSTM   F1c (v1)   45.0    # 8
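
The F1 (v1) column is the standard micro-averaged F1 over the relation label sets predicted for each argument pair; F1c (v1) is the paper's conversational variant, which scores predictions made from the dialogue prefix available at evaluation time. The sketch below shows the standard micro-F1 computation; the per-pair label-set layout is an assumption for illustration.

```python
# Sketch: micro-averaged F1 over per-argument-pair relation label sets.
from typing import List, Set

def micro_f1(gold: List[Set[str]], pred: List[Set[str]]) -> float:
    tp = sum(len(g & p) for g, p in zip(gold, pred))   # correctly predicted relations
    n_pred = sum(len(p) for p in pred)                 # all predicted relations
    n_gold = sum(len(g) for g in gold)                 # all gold relations
    precision = tp / n_pred if n_pred else 0.0
    recall = tp / n_gold if n_gold else 0.0
    return 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0

# Example: two argument pairs, one of two gold relations recovered -> F1 ~ 0.67
print(micro_f1([{"per:friends"}, {"per:siblings"}], [{"per:friends"}, set()]))
```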

Methods


No methods listed for this paper.