Learning to Embed Multi-Modal Contexts for Situated Conversational Agents

The Situated Interactive Multi-Modal Conversations (SIMMC) 2.0 challenge aims to create virtual shopping assistants that can handle complex multi-modal inputs, i.e., the visual appearances of objects together with user utterances. It comprises four subtasks: multi-modal disambiguation (MM-Disamb), multi-modal coreference resolution (MM-Coref), multi-modal dialog state tracking (MM-DST), and response retrieval and generation. While most task-oriented dialog systems tackle each subtask separately, we propose a jointly learned multi-modal encoder-decoder that incorporates visual inputs and performs all four subtasks at once for efficiency. With a single unified model, this approach won the MM-Coref and response retrieval subtasks and was runner-up for the remaining subtasks at the 10th Dialog Systems Technology Challenge (DSTC10), setting a high bar for the novel task of multi-modal task-oriented dialog systems.
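
To make the "all four subtasks at once" idea concrete, a common way to train one encoder-decoder jointly is to flatten the multi-modal context into a single source sequence and serialize all subtask outputs into a single target sequence. The sketch below illustrates that serialization only; the special tokens (`<SOM>`, `<DISAMB>`, etc.), field names, and belief-state format are illustrative assumptions, not the paper's exact schema:

```python
def build_example(dialog_history, object_descriptions, belief_state,
                  response, ambiguous, coref_ids):
    """Serialize one SIMMC-style turn into (source, target) strings
    for a single jointly trained encoder-decoder.

    NOTE: token names and layout here are a hypothetical sketch,
    not the exact format used in the paper.
    """
    # Flatten the multi-modal context: dialog turns plus textual
    # descriptions of the visible objects, wrapped in scene markers.
    source = (
        " ".join(dialog_history)
        + " <SOM> " + " ".join(object_descriptions) + " <EOM>"
    )
    # Encode all four subtask outputs in one target sequence, so one
    # decoder learns disambiguation, coreference, state tracking, and
    # response generation jointly.
    target = (
        f"<DISAMB> {int(ambiguous)} "
        f"<COREF> {' '.join(str(i) for i in coref_ids)} "
        f"<BELIEF> {belief_state} "
        f"<RESPONSE> {response}"
    )
    return source, target


src, tgt = build_example(
    dialog_history=["User : Do you have that jacket in small?"],
    object_descriptions=["<OBJ-57> black jacket on the left rack"],
    belief_state="REQUEST:GET [ type = jacket, size = S ]",
    response="Yes, the black one on the left rack comes in small.",
    ambiguous=False,
    coref_ids=[57],
)
```

Under this kind of scheme, a BART-style model is fine-tuned on the (source, target) pairs, and each subtask's prediction is recovered at inference time by splitting the generated sequence at the special tokens.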


Datasets

SIMMC2.0
Results from the Paper


Task | Dataset | Model | Metric Name | Metric Value | Global Rank
Response Generation | SIMMC2.0 | BART-large | BLEU | 33.1 | # 2
Dialogue State Tracking | SIMMC2.0 | BART-base | Slot F1 | 82.0 | # 3
Dialogue State Tracking | SIMMC2.0 | BART-base | Act F1 | 95.2 | # 3
Dialogue State Tracking | SIMMC2.0 | BART-large | Slot F1 | 88.3 | # 1
Dialogue State Tracking | SIMMC2.0 | BART-large | Act F1 | 96.3 | # 2
