Answer Selection

11 papers with code · Natural Language Processing
Subtask of Question Answering

State-of-the-art leaderboards

No evaluation results yet. Help compare methods by submitting evaluation metrics.

Greatest papers with code

LSTM-based Deep Learning Models for Non-factoid Answer Selection

12 Nov 2015 · deepmipt/DeepPavlov

One direction is to define a more composite representation for questions and answers by combining a convolutional neural network with the basic framework. Several variations of the model are provided.

ANSWER SELECTION

Neural Variational Inference for Text Processing

19 Nov 2015 · carpedm20/variational-text-tensorflow

We validate this framework on two very different text modelling applications, generative document modelling and supervised question answering. The neural answer selection model employs a stochastic representation layer within an attention mechanism to extract the semantics between a question and answer pair.

ANSWER SELECTION

A Compare-Aggregate Model for Matching Text Sequences

6 Nov 2016 · shuohangwang/SeqMatchSeq

Many NLP tasks including machine comprehension, answer selection and text entailment require the comparison between sequences. We particularly focus on the different comparison functions we can use to match two vectors.

ANSWER SELECTION READING COMPREHENSION
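
Two of the comparison functions studied in compare-aggregate models can be sketched as element-wise operations on a pair of vectors. This is an illustrative sketch (the function names follow the paper's SUB/MULT shorthand; the dimensions are arbitrary), not the authors' implementation:

```python
import numpy as np

def comp_sub(a, h):
    """SUB: squared element-wise difference of the two vectors."""
    return (a - h) * (a - h)

def comp_mult(a, h):
    """MULT: element-wise product of the two vectors."""
    return a * h

a = np.array([1.0, 2.0, 3.0])
h = np.array([0.5, 2.0, 1.0])
print(comp_sub(a, h))   # [0.25 0.   4.  ]
print(comp_mult(a, h))  # [0.5 4.  3. ]
```

In the full model, a comparison vector like this is computed per token position and then aggregated (e.g. by a CNN) into a single matching score.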

Gated-Attention Readers for Text Comprehension

ACL 2017 · bdhingra/ga-reader

In this paper we study the problem of answering cloze-style questions over documents. Our model, the Gated-Attention (GA) Reader, integrates a multi-hop architecture with a novel attention mechanism, which is based on multiplicative interactions between the query embedding and the intermediate states of a recurrent neural network document reader.

ANSWER SELECTION OPEN-DOMAIN QUESTION ANSWERING READING COMPREHENSION
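
The multiplicative interaction at the heart of the GA Reader can be sketched as follows. This is a minimal illustration, not the authors' code: each document token attends over the query states, and the resulting per-token query summary gates the token state via an element-wise product (shapes and names here are assumptions):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def gated_attention(doc, query):
    """doc: (T_d, h) document token states; query: (T_q, h) query states."""
    scores = doc @ query.T             # (T_d, T_q) token-to-token interactions
    alpha = softmax(scores, axis=1)    # each doc token attends over the query
    q_summary = alpha @ query          # (T_d, h) per-token query summary
    return doc * q_summary             # multiplicative gate

doc = np.random.randn(5, 4)
query = np.random.randn(3, 4)
out = gated_attention(doc, query)
assert out.shape == (5, 4)
```

In the multi-hop architecture, this gating layer is applied between successive recurrent layers over the document.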

Hierarchical Memory Networks for Answer Selection on Unknown Words

COLING 2016 · jacoxu/HMN4QA

Recently, end-to-end memory networks have shown promising results on the Question Answering task: they encode past facts into an explicit memory and perform reasoning by making multiple computational steps over it. However, memory networks reason over sentence-level memory, producing coarse semantic vectors, and apply no attention mechanism at the word level, which may cause the model to lose detailed information, especially when the answers are rare or unknown words.

ANSWER SELECTION

Learning Recurrent Span Representations for Extractive Question Answering

4 Nov 2016 · shimisalant/RaSoR

Rajpurkar et al. (2016) recently released the SQuAD dataset, in which the answers can be arbitrary strings from the supplied text. In this paper, we focus on this answer extraction task, presenting a novel model architecture that efficiently builds fixed-length representations of all spans in the evidence document with a recurrent network.

ANSWER SELECTION READING COMPREHENSION
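
The candidate set an extractive model like this scores can be sketched by enumerating every span up to a maximum length (a common practical cap; the cap and names here are illustrative, not taken from the paper):

```python
def enumerate_spans(n_tokens, max_len):
    """Return all (start, end) index pairs, inclusive, of length <= max_len."""
    return [(i, j)
            for i in range(n_tokens)
            for j in range(i, min(i + max_len, n_tokens))]

spans = enumerate_spans(4, 2)
# 4 tokens, spans of length 1 or 2 -> 7 candidate spans
assert len(spans) == 7
```

The model's contribution is building a fixed-length representation for each such span efficiently with a recurrent network, rather than encoding each span from scratch.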

ABCNN: Attention-Based Convolutional Neural Network for Modeling Sentence Pairs

TACL 2016 · Leputa/CIKM-AnalytiCup-2018

How to model a pair of sentences is a critical issue in many NLP tasks such as answer selection (AS), paraphrase identification (PI) and textual entailment (TE). We propose three attention schemes that integrate mutual influence between sentences into the CNN, so that the representation of each sentence takes its counterpart into consideration.

ANSWER SELECTION NATURAL LANGUAGE INFERENCE PARAPHRASE IDENTIFICATION
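
The sentence-pair attention in ABCNN is built from a matrix of token-level match scores. A minimal sketch in that spirit, using 1 / (1 + Euclidean distance) as the match score as described in the paper (the shapes and variable names are illustrative):

```python
import numpy as np

def attention_matrix(s0, s1):
    """s0: (n0, d), s1: (n1, d) token feature maps.
    A[i, j] = match score between token i of s0 and token j of s1."""
    diff = s0[:, None, :] - s1[None, :, :]   # (n0, n1, d) pairwise differences
    dist = np.linalg.norm(diff, axis=-1)     # (n0, n1) euclidean distances
    return 1.0 / (1.0 + dist)                # in (0, 1]; higher = closer match

s0 = np.random.randn(5, 3)
s1 = np.random.randn(4, 3)
A = attention_matrix(s0, s1)
assert A.shape == (5, 4)
```

In the full model, this matrix (or its row/column sums) reweights the convolutional input or pooling of each sentence, so each representation reflects its counterpart.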

Applying Deep Learning to Answer Selection: A Study and An Open Task

7 Aug 2015 · lixingqiancs/Fintech-s-Reasearch-Group-ComputerScience

We apply a general deep learning framework to address the non-factoid question answering task. Our approach does not rely on any linguistic tools and can be applied to different languages or domains.

ANSWER SELECTION

Multi-Task Learning with Multi-View Attention for Answer Selection and Knowledge Base Question Answering

6 Dec 2018 · dengyang17/MTQA

These two tasks can benefit each other: answer selection can incorporate external knowledge from a knowledge base (KB), while KBQA can be improved by learning contextual information from answer selection. To jointly learn the two tasks, we propose a novel multi-task learning scheme that uses multi-view attention, learned from various perspectives, to let the tasks interact with each other and to learn more comprehensive sentence representations.

ANSWER SELECTION KNOWLEDGE BASE QUESTION ANSWERING MULTI-TASK LEARNING