Evidence Selection

9 papers with code • 1 benchmark • 1 dataset


Most implemented papers

AmbiFC: Fact-Checking Ambiguous Claims with Evidence

CambridgeNLIP/verification-real-world-info-needs 1 Apr 2021

Automated fact-checking systems verify claims against evidence to predict their veracity.

Chain-of-Discussion: A Multi-Model Framework for Complex Evidence-Based Question Answering

kobayashikanna01/chain-of-discussion 26 Feb 2024

Open-ended question answering requires models to find appropriate evidence to form well-reasoned, comprehensive and helpful answers.

MeLU: Meta-Learned User Preference Estimator for Cold-Start Recommendation

hoyeoplee/MeLU 31 Jul 2019

This paper proposes a recommender system to alleviate the cold-start problem that can estimate user preferences based on only a small number of items.

Unsupervised Alignment-based Iterative Evidence Retrieval for Multi-hop Question Answering

vikas95/AIR-retriever ACL 2020

Evidence retrieval is a critical stage of question answering (QA), necessary not only to improve performance, but also to explain the decisions of the corresponding QA method.
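The iterative idea behind AIR can be sketched in a few lines: pick the sentence that best aligns with the question, remove the question terms it covers, and repeat with the uncovered remainder. This is a minimal sketch only — AIR aligns terms with word embeddings, whereas this stand-in uses plain lexical overlap, and all names here are illustrative rather than taken from the vikas95/AIR-retriever code.

```python
# Simplified, unsupervised iterative evidence retrieval in the spirit of
# AIR (ACL 2020). Assumption: embedding-based alignment is approximated
# by lexical overlap; the real method uses GloVe alignment scores.

def iterative_retrieve(question, sentences, max_hops=3):
    """Greedily select evidence sentences, re-focusing each hop on the
    question terms not yet covered by already-selected evidence."""
    remaining = set(question.lower().split())  # uncovered question terms
    chosen = []
    for _ in range(max_hops):
        if not remaining:
            break  # every question term is covered
        best_i, best_score = None, 0
        for i, sent in enumerate(sentences):
            if i in chosen:
                continue
            score = len(remaining & set(sent.lower().split()))
            if score > best_score:
                best_i, best_score = i, score
        if best_i is None:
            break  # no sentence covers any remaining term
        chosen.append(best_i)
        remaining -= set(sentences[best_i].lower().split())
    return [sentences[i] for i in chosen]
```

On a two-hop question, the first hop tends to pick the bridging sentence and the second hop picks the sentence answering the leftover terms, which is the behavior the paper exploits for multi-hop QA.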

A Dataset of Information-Seeking Questions and Answers Anchored in Research Papers

allenai/qasper-led-baseline NAACL 2021

Readers of academic research papers often read with the goal of answering specific questions.

Capturing Global Structural Information in Long Document Question Answering with Compressive Graph Selector Network

jerrrynie/cgsn 11 Oct 2022

The proposed model mainly focuses on the evidence selection phase of long document question answering.
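The evidence-selection phase this entry refers to can be illustrated with a toy pipeline: chunk the long document, score each chunk against the question, and pass only the top-k chunks to the reader. Note this greedy term-coverage scorer is a simplified stand-in for illustration — CGSN itself builds a compressive graph over segments — and the function and parameter names below are assumptions, not the paper's API.

```python
# Hedged sketch of the evidence-selection stage in long-document QA:
# split the document into fixed-size chunks, score each chunk against
# the question, keep the top-k for the downstream reader. CGSN uses a
# compressive graph selector; this top-k lexical scorer is a toy proxy.

def select_evidence(question, document, chunk_size=40, top_k=2):
    words = document.split()
    chunks = [" ".join(words[i:i + chunk_size])
              for i in range(0, len(words), chunk_size)]
    q_terms = set(question.lower().split())
    # Rank chunks by question-term coverage (simplified relevance score;
    # sorted() is stable, so ties keep document order).
    ranked = sorted(
        chunks,
        key=lambda c: len(q_terms & set(c.lower().split())),
        reverse=True,
    )
    return ranked[:top_k]
```

Keeping only a few high-scoring chunks is what lets a fixed-budget reader handle documents far longer than its input window.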

SlideVQA: A Dataset for Document Visual Question Answering on Multiple Images

nttmdlab-nlp/slidevqa 12 Jan 2023

Visual question answering on document images that contain textual, visual, and layout information, called document VQA, has received much attention recently.

Are Large Language Models Really Good Logical Reasoners? A Comprehensive Evaluation and Beyond

deepreasoning/neulr 16 Jun 2023

Firstly, to offer systematic evaluations, we select fifteen typical logical reasoning datasets and organize them into deductive, inductive, abductive and mixed-form reasoning settings.