Multiview Contextual Commonsense Inference: A New Dataset and Task

6 Oct 2022  ·  Siqi Shen, Deepanway Ghosal, Navonil Majumder, Henry Lim, Rada Mihalcea, Soujanya Poria

Contextual commonsense inference is the task of generating various types of explanations around the events in a dyadic dialogue, including cause, motivation, emotional reaction, and others. Producing a coherent and non-trivial explanation requires awareness of the dialogue's structure and of how an event is grounded in the context. In this work, we create CICEROv2, a dataset consisting of 8,351 instances from 2,379 dialogues, containing multiple human-written answers for each contextual commonsense inference question, where each question targets one type of explanation: cause, subsequent event, motivation, or emotional reaction. We show that the inferences in CICEROv2 are more semantically diverse than those in other contextual commonsense inference datasets. To solve the inference task, we propose a collection of pre-training objectives, including concept denoising and utterance sorting, to prepare a pre-trained model for the downstream contextual commonsense inference task. Our results show that the proposed pre-training objectives are effective at adapting the pre-trained T5-Large model to the contextual commonsense inference task.
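
The two pre-training objectives named in the abstract, concept denoising and utterance sorting, fit naturally into T5's text-to-text format. Below is a minimal sketch of how such objectives might be framed; the prompt formats, masking scheme, and function names are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of the two pre-training objectives named in the abstract,
# framed as text-to-text tasks for T5. The prompt formats, masking scheme,
# and function names are illustrative assumptions, not the authors' exact setup.
import random
from transformers import T5TokenizerFast, T5ForConditionalGeneration

tokenizer = T5TokenizerFast.from_pretrained("t5-large")
model = T5ForConditionalGeneration.from_pretrained("t5-large")

def concept_denoising_example(utterances, concepts):
    """Mask concept spans in the dialogue; the target restores them."""
    source = " </s> ".join(utterances)
    for i, concept in enumerate(concepts):
        source = source.replace(concept, f"<extra_id_{i}>", 1)
    target = " ".join(f"<extra_id_{i}> {c}" for i, c in enumerate(concepts))
    return source, target

def utterance_sorting_example(utterances):
    """Shuffle the utterances; the target lists each one's original position."""
    order = list(range(len(utterances)))
    random.shuffle(order)
    source = " </s> ".join(utterances[i] for i in order)
    target = " ".join(str(i) for i in order)
    return source, target

dialogue = [
    "A: I missed my flight this morning.",
    "B: Oh no, what happened?",
    "A: The taxi got stuck in traffic.",
]

src, tgt = utterance_sorting_example(dialogue)
inputs = tokenizer(src, return_tensors="pt")
labels = tokenizer(tgt, return_tensors="pt").input_ids
loss = model(**inputs, labels=labels).loss  # standard seq2seq training loss
```

In a full pipeline, examples like these would be generated over many dialogues and used to adapt the model before fine-tuning on the downstream inference questions.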


Datasets


Introduced in the Paper:

CICEROv2

Used in the Paper:

DailyDialog, DREAM, MuTual, CICERO
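
Based only on the abstract's description, a CICEROv2 instance pairs a dialogue with one inference question and several human-written answers. A hypothetical shape of such an instance is sketched below; the field names and values are illustrative, not the released schema.

```python
# Hypothetical instance layout, inferred from the abstract; field names
# and example values are illustrative, not the released schema.
example = {
    "dialogue": [
        "A: I missed my flight this morning.",
        "B: Oh no, what happened?",
        "A: The taxi got stuck in traffic.",
    ],
    "target_utterance": "A: I missed my flight this morning.",
    # One of: cause, subsequent event, motivation, emotional reaction
    "question": "What is the cause of the target utterance?",
    "answers": [  # multiple human-written answers per question
        "The speaker's taxi was delayed by heavy traffic.",
        "Traffic congestion made the speaker late to the airport.",
    ],
}
```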

Results from the Paper


 Ranked #1 on Multiview Contextual Commonsense Inference on CICERO (using extra training data)
Task                                         Dataset   Model     Metric    Value   Global Rank
Multiview Contextual Commonsense Inference   CICERO    DIALECT   Accuracy  27.54   #1
Multiview Contextual Commonsense Inference   CICERO    T5-Large  Accuracy  25.66   #2
Multiview Contextual Commonsense Inference   CICEROv2  DIALECT   Accuracy  73.80   #1
Multiview Contextual Commonsense Inference   CICEROv2  T5-Large  Accuracy  71.95   #2

Methods


No methods listed for this paper.