One direction is to define a richer composite representation for questions and answers by combining a convolutional neural network with the basic framework.
We validate this framework on two very different text-modelling applications: generative document modelling and supervised question answering.
Recently, end-to-end memory networks have shown promising results on question answering tasks; they encode past facts into an explicit memory and perform reasoning by making multiple computational steps (hops) over that memory.
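The hop mechanism described above can be sketched in a few lines: attend over memory slots with the current query, retrieve a weighted sum, and add it back to the query for the next step. This is a minimal numpy sketch; the names, dimensions, and random inputs are illustrative, not taken from any specific paper.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def memory_hop(query, memory_in, memory_out):
    # attention over memory slots, scored by match with the query
    p = softmax(memory_in @ query)
    # retrieved evidence: attention-weighted sum of output memory vectors
    o = memory_out.T @ p
    # next query state carries the retrieved evidence forward
    return query + o

rng = np.random.default_rng(0)
d, n_facts = 8, 5
u = rng.normal(size=d)                 # encoded question
m_in = rng.normal(size=(n_facts, d))   # input memory (encoded facts)
m_out = rng.normal(size=(n_facts, d))  # output memory

for _ in range(3):  # three reasoning hops
    u = memory_hop(u, m_in, m_out)
```

Each hop can attend to a different fact, which is what allows multi-step reasoning chains over the stored facts.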
(ii) We propose three attention schemes that integrate the mutual influence between sentences into the CNN; thus, the representation of each sentence takes its counterpart into consideration.
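One common way to make each sentence's representation aware of its counterpart is a word-pair attention matrix between the two sentences, with row- and column-wise softmax producing cross-sentence context vectors. The sketch below is a generic illustration of that idea, not the specific schemes proposed in the paper; all names are hypothetical.

```python
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(s1, s2):
    # s1: (len1, d), s2: (len2, d) word-level features of the two sentences
    A = s1 @ s2.T                       # (len1, len2) word-pair match scores
    # each word in s1 attends over s2, and vice versa
    s1_ctx = softmax(A, axis=1) @ s2    # (len1, d) counterpart-aware features
    s2_ctx = softmax(A, axis=0).T @ s1  # (len2, d)
    return s1_ctx, s2_ctx
```

The counterpart-aware features can then be fed into (or combined with) the convolution and pooling layers so the CNN sees both sentences jointly.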
In this paper, we focus on this answer extraction task, presenting a novel model architecture that efficiently builds fixed-length representations of all spans in the evidence document with a recurrent network.
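A simple way to get a fixed-length vector for every span from a recurrent network is to run the RNN once over the document and build each span's representation from its endpoint hidden states, so enumeration is cheap. This is a hedged sketch of that general idea (endpoint concatenation, a plain tanh RNN, and a `max_len` cap are my assumptions, not the paper's exact architecture).

```python
import numpy as np

def rnn_states(x, W, U):
    # plain tanh RNN over document token vectors x: (n_tokens, d_in)
    h = np.zeros(U.shape[0])
    states = []
    for t in range(x.shape[0]):
        h = np.tanh(W @ x[t] + U @ h)
        states.append(h)
    return np.stack(states)           # (n_tokens, d)

def span_reprs(states, max_len):
    # fixed-length vector for every span (i, j): concatenated endpoint states;
    # the single RNN pass is shared across all spans
    n, d = states.shape
    spans = {}
    for i in range(n):
        for j in range(i, min(n, i + max_len)):
            spans[(i, j)] = np.concatenate([states[i], states[j]])
    return spans
```

Because every span vector has the same dimensionality, a downstream scorer can rank all candidate answer spans directly.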
Second, these two tasks can benefit each other: answer selection can incorporate external knowledge from a knowledge base (KB), while KBQA can be improved by learning contextual information from answer selection.