DCN+: Mixed Objective and Deep Residual Coattention for Question Answering

ICLR 2018  ·  Caiming Xiong, Victor Zhong, Richard Socher

Traditional models for question answering optimize using cross entropy loss, which encourages exact answers at the cost of penalizing nearby or overlapping answers that are sometimes equally accurate. We propose a mixed objective that combines cross entropy loss with self-critical policy learning. The objective uses rewards derived from word overlap to address the misalignment between the evaluation metric and the optimization objective. In addition to the mixed objective, we improve dynamic coattention networks (DCN) with a deep residual coattention encoder that is inspired by recent work in deep self-attention and residual networks. Our proposals improve model performance across question types and input lengths, especially for long questions that require the ability to capture long-term dependencies. On the Stanford Question Answering Dataset, our model achieves state-of-the-art results with 75.1% exact match accuracy and 83.1% F1, while the ensemble obtains 78.9% exact match accuracy and 86.0% F1.
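The mixed objective described above can be sketched in a few lines: a token-overlap F1 serves as the reward, and a self-critical policy-gradient term (sampled answer's reward minus a greedy-decode baseline) is blended with the usual cross entropy loss. This is a minimal pure-Python illustration, not the paper's implementation; the function names, the blending weight `lam`, and the scalar `logp_sampled` stand in for quantities a real model would produce.

```python
import math
from collections import Counter

def f1_reward(pred, gold):
    """Word-overlap F1 between predicted and gold answer tokens."""
    common = Counter(pred) & Counter(gold)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred)
    recall = overlap / len(gold)
    return 2 * precision * recall / (precision + recall)

def mixed_objective(ce_loss, logp_sampled, sampled, greedy, gold, lam=0.5):
    """Blend cross entropy with a self-critical policy-gradient term.

    The greedy decode acts as the baseline, so only samples that beat
    the model's own greedy answer receive a positive learning signal.
    (lam is an illustrative mixing weight, not a value from the paper.)
    """
    advantage = f1_reward(sampled, gold) - f1_reward(greedy, gold)
    rl_loss = -advantage * logp_sampled  # REINFORCE with baseline
    return lam * ce_loss + (1 - lam) * rl_loss

# A sampled span that overlaps the gold answer earns partial reward,
# which cross entropy alone would score as simply wrong.
gold = ["new", "york", "city"]
print(f1_reward(["new", "york"], gold))  # 0.8
print(mixed_objective(1.0, math.log(0.4), ["new", "york"], ["york"], gold))
```

Note how `f1_reward(["new", "york"], gold)` returns 0.8 rather than 0: this is the sense in which the word-overlap reward credits nearby or overlapping answers that exact-match cross entropy penalizes.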


Datasets

SQuAD1.1
Results from the Paper


Task               | Dataset      | Model               | Metric | Value  | Global Rank
Question Answering | SQuAD1.1     | DCN+ (ensemble)     | EM     | 78.852 | #81
Question Answering | SQuAD1.1     | DCN+ (ensemble)     | F1     | 85.996 | #86
Question Answering | SQuAD1.1     | DCN+ (single model) | EM     | 74.866 | #125
Question Answering | SQuAD1.1     | DCN+ (single model) | F1     | 82.806 | #127
Question Answering | SQuAD1.1 dev | DCN+ (single)       | EM     | 74.5   | #28
Question Answering | SQuAD1.1 dev | DCN+ (single)       | F1     | 83.1   | #32
