FlowQA: Grasping Flow in History for Conversational Machine Comprehension

ICLR 2019 · Hsin-Yuan Huang, Eunsol Choi, Wen-tau Yih

Conversational machine comprehension requires the understanding of the conversation history, such as previous question/answer pairs, the document context, and the current question. To enable traditional, single-turn models to encode the history comprehensively, we introduce Flow, a mechanism that can incorporate intermediate representations generated during the process of answering previous questions, through an alternating parallel processing structure. Compared to approaches that concatenate previous questions/answers as input, Flow integrates the latent semantics of the conversation history more deeply. Our model, FlowQA, shows superior performance on two recently proposed conversational challenges (+7.2% F1 on CoQA and +4.0% on QuAC). The effectiveness of Flow also shows in other tasks. By reducing sequential instruction understanding to conversational machine comprehension, FlowQA outperforms the best models on all three domains in SCONE, with +1.8% to +4.4% improvement in accuracy.
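The core idea described above is that Flow runs a recurrent unit along the dialog-turn axis, independently for each context token position, so intermediate representations from answering earlier questions can inform later turns. A minimal NumPy sketch of that idea follows; the GRU cell, random initialization, and tensor shapes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

class GRUCell:
    """A plain NumPy GRU cell (randomly initialized; hypothetical stand-in
    for the recurrent unit used by the Flow mechanism)."""
    def __init__(self, dim, seed=0):
        rng = np.random.default_rng(seed)
        s = 0.1
        self.Wz, self.Uz = rng.normal(0, s, (dim, dim)), rng.normal(0, s, (dim, dim))
        self.Wr, self.Ur = rng.normal(0, s, (dim, dim)), rng.normal(0, s, (dim, dim))
        self.Wh, self.Uh = rng.normal(0, s, (dim, dim)), rng.normal(0, s, (dim, dim))

    def step(self, x, h):
        sig = lambda a: 1.0 / (1.0 + np.exp(-a))
        z = sig(x @ self.Wz + h @ self.Uz)              # update gate
        r = sig(x @ self.Wr + h @ self.Ur)              # reset gate
        h_tilde = np.tanh(x @ self.Wh + (r * h) @ self.Uh)
        return (1 - z) * h + z * h_tilde

def flow(h, cell):
    """h: (num_turns, num_tokens, dim) hidden states from per-turn reading.
    Recur over the *turn* axis, treating each token position as its own
    sequence, so turn t sees the accumulated state from turns 0..t-1."""
    turns, tokens, dim = h.shape
    state = np.zeros((tokens, dim))
    out = np.empty_like(h)
    for t in range(turns):
        state = cell.step(h[t], state)  # one recurrent step per dialog turn
        out[t] = state
    return out

# Example: 3 dialog turns over a 5-token context with hidden size 8.
h = np.random.default_rng(1).normal(size=(3, 5, 8))
f = flow(h, GRUCell(8))
print(f.shape)
```

In the full model this turn-axis recurrence alternates with the usual within-turn context encoding layers, which is the "alternating parallel processing structure" the abstract refers to: all token positions advance through turns in parallel.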



Results from the Paper

Task                Dataset  Model                  Metric         Value  Global Rank
Question Answering  CoQA     FlowQA (single model)  In-domain      76.3   #5
Question Answering  CoQA     FlowQA (single model)  Out-of-domain  71.8   #5
Question Answering  CoQA     FlowQA (single model)  Overall        75.0   #6
Question Answering  QuAC     FlowQA (single model)  F1             64.1   #1
Question Answering  QuAC     FlowQA (single model)  HEQQ           59.6   #1
Question Answering  QuAC     FlowQA (single model)  HEQD           5.8    #1

