Answer Selection
47 papers with code • 6 benchmarks • 10 datasets
Answer Selection is the task of identifying the correct answer to a question from a pool of candidate answers. This task can be formulated as a classification or a ranking problem.
Source: Learning Analogy-Preserving Sentence Embeddings for Answer Selection
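The ranking formulation can be made concrete with a minimal sketch: score every candidate answer against the question and return the top-ranked one. The toy lexical-overlap scorer, example question, and candidate pool below are illustrative stand-ins for a learned relevance model, not taken from the cited source.

    # Illustrative sketch: answer selection as candidate ranking.
    # score() is a stand-in for a learned relevance model (e.g. a cross-encoder).

    def score(question: str, candidate: str) -> float:
        # Toy relevance signal: fraction of question tokens that also appear in the candidate.
        q_tokens = set(question.lower().split())
        c_tokens = set(candidate.lower().split())
        return len(q_tokens & c_tokens) / max(len(q_tokens), 1)

    def select_answer(question: str, candidates: list[str]) -> str:
        # Ranking formulation: score every candidate and return the top-ranked one.
        ranked = sorted(candidates, key=lambda c: score(question, c), reverse=True)
        return ranked[0]

    question = "What year did the Apollo 11 mission land on the Moon?"
    candidates = [
        "Apollo 11 landed on the Moon in 1969.",
        "The Saturn V rocket was used for several missions.",
        "Neil Armstrong was born in Ohio.",
    ]
    print(select_answer(question, candidates))  # -> "Apollo 11 landed on the Moon in 1969."

In the classification formulation, the same scorer would instead be thresholded to label each (question, candidate) pair as correct or incorrect.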
Latest papers
Ada-LEval: Evaluating long-context LLMs with length-adaptable benchmarks
Recently, the large language model (LLM) community has shown increasing interest in enhancing LLMs' capability to handle extremely long documents.
HGOT: Hierarchical Graph of Thoughts for Retrieval-Augmented In-Context Learning in Factuality Evaluation
With the widespread adoption of large language models (LLMs) in numerous applications, the challenge of factuality and the propensity for hallucinations raise significant concerns.
Solving Math Word Problem with Problem Type Classification
Firstly, we propose a problem type classifier that combines the strengths of the tree-based solver and the LLM solver.
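The routing idea can be sketched roughly as: classify the problem type, then dispatch to a symbolic (tree-based) solver or an LLM-based solver. The keyword-rule classifier and solver stubs below are hypothetical placeholders, not the paper's actual components.

    # Illustrative routing sketch (hypothetical components, not the paper's implementation).

    def classify_problem_type(problem: str) -> str:
        # Placeholder classifier: in the paper this is a learned model;
        # here a trivial keyword rule stands in for it.
        return "arithmetic" if any(ch.isdigit() for ch in problem) else "commonsense"

    def tree_based_solver(problem: str) -> str:
        return "answer from symbolic expression-tree solver"

    def llm_solver(problem: str) -> str:
        return "answer generated by the LLM solver"

    def solve(problem: str) -> str:
        # Route each problem to the solver suited to its type.
        solver = tree_based_solver if classify_problem_type(problem) == "arithmetic" else llm_solver
        return solver(problem)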
Abstracting Concept-Changing Rules for Solving Raven's Progressive Matrix Problems
Finally, we conduct experiments to illustrate the interpretability of CRAB in concept learning, answer selection, and global rule abstraction.
Realistic Conversational Question Answering with Answer Selection based on Calibrated Confidence and Uncertainty Measurement
Conversational Question Answering (ConvQA) models aim to answer a question given its relevant paragraph and the question-answer pairs from previous turns of the conversation.
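As a rough illustration of confidence-based answer selection, the sketch below picks the candidate with the highest temperature-scaled softmax confidence and abstains below a threshold; the calibration method, temperature, and threshold are generic assumptions, not necessarily those used in the paper.

    import math

    # Illustrative sketch: select the candidate answer with the highest calibrated
    # confidence, and abstain when even the best confidence falls below a threshold.

    def calibrated_confidences(logits: list[float], temperature: float = 1.5) -> list[float]:
        scaled = [z / temperature for z in logits]   # temperature scaling
        m = max(scaled)
        exps = [math.exp(z - m) for z in scaled]     # numerically stable softmax
        total = sum(exps)
        return [e / total for e in exps]

    def select_with_confidence(candidates: list[str], logits: list[float], threshold: float = 0.5):
        probs = calibrated_confidences(logits)
        best = max(range(len(candidates)), key=lambda i: probs[i])
        return (candidates[best], probs[best]) if probs[best] >= threshold else (None, probs[best])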
Leveraging Large Language Models for Multiple Choice Question Answering
A more natural prompting approach is to present the question and answer options to the LLM jointly and have it output the symbol (e.g., "A") associated with its chosen answer option.
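A small sketch of that joint prompting format follows; the prompt template and the stub LLM call are illustrative assumptions, not the paper's exact setup.

    # Illustrative multiple-choice prompt: question and all options presented jointly,
    # with the model asked to output only the letter of its chosen option.

    def build_mcqa_prompt(question: str, options: list[str]) -> str:
        letters = "ABCDEFGH"
        lines = [f"Question: {question}"] + [
            f"{letters[i]}. {opt}" for i, opt in enumerate(options)
        ]
        lines.append("Answer with the letter of the correct option only.")
        return "\n".join(lines)

    def ask_llm(prompt: str) -> str:
        # Stand-in for a real LLM call; returns a fixed letter here.
        return "B"

    prompt = build_mcqa_prompt(
        "Which planet is known as the Red Planet?",
        ["Venus", "Mars", "Jupiter", "Saturn"],
    )
    predicted_letter = ask_llm(prompt)  # e.g. "B" -> "Mars"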
Once is Enough: A Light-Weight Cross-Attention for Fast Sentence Pair Modeling
Transformer-based models have achieved great success on sentence pair modeling tasks, such as answer selection and natural language inference (NLI).
Paragraph-based Transformer Pre-training for Multi-Sentence Inference
Our evaluation on three Answer Sentence Selection (AS2) datasets and one fact verification dataset demonstrates that our pre-training technique outperforms traditional ones, both when transformers are used as joint models for multi-candidate inference tasks and when they are used as cross-encoders for sentence-pair formulations of these tasks.
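To make the two formulations concrete, here is a hedged sketch of how the same example could be serialized for a pairwise cross-encoder versus a joint multi-candidate model; the separator string is an illustrative assumption, since the actual separators depend on the tokenizer and model.

    # Illustrative input construction only; real separators/tokenization are model-specific.

    SEP = " [SEP] "

    def cross_encoder_inputs(question: str, candidates: list[str]) -> list[str]:
        # Sentence-pair formulation: one (question, candidate) input per candidate,
        # each scored independently.
        return [question + SEP + c for c in candidates]

    def joint_model_input(question: str, candidates: list[str]) -> str:
        # Joint multi-candidate formulation: the question and all candidates are
        # encoded together in one input, so candidates can be compared directly.
        return question + SEP + SEP.join(candidates)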
Solution of DeBERTaV3 on CommonsenseQA
This report presents the performance of DeBERTaV3 on CommonsenseQA.
CICERO: A Dataset for Contextualized Commonsense Inference in Dialogues
This paper addresses the problem of dialogue reasoning with contextualized commonsense inference.