Answer Selection
47 papers with code • 6 benchmarks • 10 datasets
Answer Selection is the task of identifying the correct answer to a question from a pool of candidate answers. This task can be formulated as a classification or a ranking problem.
Source: Learning Analogy-Preserving Sentence Embeddings for Answer Selection
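The ranking formulation mentioned above can be sketched in a few lines: score every candidate answer against the question and return the top-ranked one. The scorer below is a toy bag-of-words cosine similarity standing in for a learned model; the function names and the example question are illustrative, not from any of the listed papers.

```python
# Toy sketch of answer selection framed as ranking: score each candidate
# answer against the question and pick the highest-scoring one. A real
# system would replace `cosine` with a trained matching model.
from collections import Counter
import math

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_answer(question: str, candidates: list[str]) -> str:
    q = Counter(question.lower().split())
    ranked = sorted(
        candidates,
        key=lambda c: cosine(q, Counter(c.lower().split())),
        reverse=True,
    )
    return ranked[0]

question = "what is the capital of france"
candidates = [
    "berlin is the capital of germany",
    "paris is the capital of france",
    "france is in western europe",
]
print(select_answer(question, candidates))
# → paris is the capital of france
```

The classification formulation differs only in the output: instead of ranking the pool, each (question, candidate) pair is scored independently and thresholded as correct/incorrect.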
Latest papers with no code
Improving Question Answering Model Robustness with Synthetic Adversarial Data Generation
We further conduct a novel human-in-the-loop evaluation to show that our models are considerably more robust to new human-written adversarial examples: crowdworkers can fool our model only 8.8% of the time on average, compared to 17.6% for a model trained without synthetic data.
ASBERT: Siamese and Triplet network embedding for open question answering
Answer selection (AS) is an essential subtask of natural language processing whose objective is to identify the most likely answer to a given question from a corpus of candidate answer sentences.
Contextualized Knowledge-aware Attentive Neural Network: Enhancing Answer Selection with Knowledge
Answer selection, which underpins many natural language processing applications such as dialog systems and question answering (QA), is an important yet challenging task in practice, since conventional methods typically ignore diverse real-world background knowledge.
Keep Learning: Self-supervised Meta-learning for Learning from Inference
A common approach in many machine learning algorithms involves self-supervised learning on large unlabeled data before fine-tuning on downstream tasks to further improve performance.
Hierarchical Ranking for Answer Selection
Answer selection is a task to choose the positive answers from a pool of candidate answers for a given question.
Exploiting WordNet Synset and Hypernym Representations for Answer Selection
Answer selection (AS) is an important subtask of document-based question answering (DQA).
ExplanationLP: Abductive Reasoning for Explainable Science Question Answering
We propose a novel approach for answering and explaining multiple-choice science questions by reasoning on grounding and abstract inference chains.
MS-Ranker: Accumulating Evidence from Potentially Correct Candidates for Answer Selection
As conventional answer selection (AS) methods generally match the question with each candidate answer independently, they suffer from the lack of matching information between the question and the candidate.
Beyond [CLS] through Ranking by Generation
Generative models for Information Retrieval, where ranking of documents is viewed as the task of generating a query from a document's language model, were very successful in various IR tasks in the past.
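The ranking-by-generation idea described above is classically instantiated as query-likelihood retrieval: each document is modeled by a smoothed unigram language model, and documents are ranked by the log-probability of generating the query. The paper uses large neural generative models; the sketch below, with an assumed vocabulary size and Laplace smoothing, only illustrates the scoring principle.

```python
# Minimal query-likelihood sketch: rank documents by how likely each
# document's unigram language model is to "generate" the query.
from collections import Counter
import math

def query_log_likelihood(query: str, doc: str, vocab_size: int = 10_000) -> float:
    counts = Counter(doc.lower().split())
    total = sum(counts.values())
    # Add-one (Laplace) smoothing so unseen query terms keep nonzero mass.
    return sum(
        math.log((counts[t] + 1) / (total + vocab_size))
        for t in query.lower().split()
    )

def rank_by_generation(query: str, docs: list[str]) -> list[str]:
    # Higher query log-likelihood means a better-ranked document.
    return sorted(docs, key=lambda d: query_log_likelihood(query, d), reverse=True)

docs = [
    "the eiffel tower is a landmark in paris",
    "machine learning models require training data",
]
print(rank_by_generation("paris landmark", docs)[0])
# → the eiffel tower is a landmark in paris
```

In the neural variant, the smoothed unigram probability is simply replaced by a sequence model's conditional probability of the query given the document.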
Knowledgeable Dialogue Reading Comprehension on Key Turns
In this paper, the relevance of each turn to the question is calculated in order to choose key turns.