DiaASQ: A Benchmark of Conversational Aspect-based Sentiment Quadruple Analysis

10 Nov 2022 · Bobo Li, Hao Fei, Fei Li, Yuhan Wu, Jinsong Zhang, Shengqiong Wu, Jingye Li, Yijiang Liu, Lizi Liao, Tat-Seng Chua, Donghong Ji

The rapid development of aspect-based sentiment analysis (ABSA) in recent decades shows great potential for real-world applications. Current ABSA work, however, is mostly limited to the scenario of a single text piece, leaving sentiment analysis in dialogue contexts largely unexplored. To bridge the gap between fine-grained sentiment analysis and conversational opinion mining, we introduce a novel task of conversational aspect-based sentiment quadruple analysis, namely DiaASQ, which aims to detect target-aspect-opinion-sentiment quadruples in a dialogue. We manually construct a large-scale, high-quality DiaASQ dataset in both Chinese and English. We also develop a neural model to benchmark the task, which performs end-to-end quadruple prediction and incorporates rich dialogue-specific and discourse feature representations for better cross-utterance quadruple extraction. We hope the new benchmark will spur further advances in the sentiment analysis community.
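To make the task output concrete, here is a minimal Python sketch of what target-aspect-opinion-sentiment quadruples could look like for a short multi-speaker dialogue. The dialogue, the spans, and the polarity label names are illustrative assumptions, not taken from the DiaASQ dataset.

```python
from typing import List, NamedTuple

class SentimentQuad(NamedTuple):
    """One conversational sentiment quadruple; every span is text drawn from the dialogue."""
    target: str     # entity under discussion (e.g., a phone model)
    aspect: str     # attribute of the target (e.g., battery, camera)
    opinion: str    # opinion expression about that aspect
    sentiment: str  # polarity label (placeholder names: "pos", "neg", "neu")

# Illustrative dialogue (hypothetical, not from the dataset).
dialogue: List[str] = [
    "A: I just switched to the Pixel 7.",
    "B: How is the battery holding up?",
    "A: Battery life is honestly impressive, but the camera app feels sluggish.",
]

# Quadruples a DiaASQ-style system should recover. Note that the target span
# ("Pixel 7") and the aspect/opinion spans occur in different utterances,
# which is exactly the cross-utterance setting the task targets.
gold_quads: List[SentimentQuad] = [
    SentimentQuad("Pixel 7", "Battery life", "impressive", "pos"),
    SentimentQuad("Pixel 7", "camera app", "sluggish", "neg"),
]
```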

Datasets

Introduced in the Paper: DiaASQ
Used in the Paper: ASTE, MAMS

Results

Task: Conversational Sentiment Quadruple Extraction    Model: E2E-DiaASQ
All scores below hold global rank #1 on the corresponding benchmark.

Metric                       DiaASQ (EN)   DiaASQ (ZH)
Span F1 (target)             88.62         90.23
Span F1 (aspect)             74.71         76.94
Span F1 (opinion)            60.22         59.35
Pair F1 (target-aspect)      47.91         48.61
Pair F1 (target-opinion)     45.58         43.31
Pair F1 (aspect-opinion)     44.27         45.44
Quad F1 (micro)              33.31         34.94
Quad F1 (identification)     36.80         37.51
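The scores above are exact-match micro F1 values computed over predicted versus gold items, where an item is a single span, a pair, or a full quadruple depending on the row. The sketch below illustrates that computation under the assumption of exact span matching; the official DiaASQ evaluation script may differ in details (for example, the identification variant of Quad F1 is typically reported while ignoring the polarity label).

```python
from typing import Iterable, Tuple

def micro_f1(pred: Iterable[Tuple], gold: Iterable[Tuple]) -> float:
    """Exact-match micro F1 between predicted and gold item sets.

    Items can be spans, (target, aspect) pairs, or full quadruples, so the
    same routine covers the Span/Pair/Quad rows above under the assumption
    of exact matching.
    """
    pred_set, gold_set = set(pred), set(gold)
    if not pred_set or not gold_set:
        return 0.0
    tp = len(pred_set & gold_set)            # true positives: exact matches
    precision = tp / len(pred_set)
    recall = tp / len(gold_set)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Example: one of two gold quadruples is recovered exactly.
pred = [("Pixel 7", "battery life", "impressive", "pos")]
gold = [("Pixel 7", "battery life", "impressive", "pos"),
        ("Pixel 7", "camera app", "sluggish", "neg")]
print(round(micro_f1(pred, gold), 2))  # 0.67
```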

Methods

No methods listed for this paper.