Fact Verification
124 papers with code • 3 benchmarks • 17 datasets
Fact verification, also known as fact checking, is the task of verifying factual claims made in natural language text against a database of facts or another evidence source.
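In the common benchmark formulation (e.g. FEVER), a system retrieves evidence for a claim and then classifies the claim as SUPPORTED, REFUTED, or NOT ENOUGH INFO. Below is a minimal sketch of that pipeline; the retriever and entailment scorer are hypothetical placeholders, not any particular paper's system.

```python
from typing import List

def retrieve_evidence(claim: str, k: int = 5) -> List[str]:
    """Placeholder retriever: in practice a document/sentence retriever over e.g. Wikipedia."""
    return []  # stub

def entailment_score(claim: str, evidence: str) -> dict:
    """Placeholder NLI-style model returning scores for entail/contradict/neutral."""
    return {"entail": 0.0, "contradict": 0.0, "neutral": 1.0}  # stub

def verify(claim: str) -> str:
    evidence = retrieve_evidence(claim)
    if not evidence:
        return "NOT ENOUGH INFO"
    # Aggregate per-sentence verdicts; here simply by taking the max score per class.
    scores = {"entail": 0.0, "contradict": 0.0, "neutral": 0.0}
    for sentence in evidence:
        s = entailment_score(claim, sentence)
        for key in scores:
            scores[key] = max(scores[key], s[key])
    best = max(scores, key=scores.get)
    return {"entail": "SUPPORTED", "contradict": "REFUTED", "neutral": "NOT ENOUGH INFO"}[best]

print(verify("The Eiffel Tower is located in Berlin."))
```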
Most implemented papers
Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks
Large pre-trained language models have been shown to store factual knowledge in their parameters, and achieve state-of-the-art results when fine-tuned on downstream NLP tasks.
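RAG pairs a parametric generator with a non-parametric retriever: passages retrieved for the input are fed to the generator alongside the query. The sketch below is a simplified, hedged illustration of that retrieve-then-generate pattern with placeholder functions; the actual paper marginalizes over retrieved documents rather than concatenating them, and uses DPR plus BART rather than these stubs.

```python
from typing import List

def retrieve(query: str, k: int = 3) -> List[str]:
    """Placeholder dense retriever (DPR over a Wikipedia index in the paper)."""
    return ["passage 1 ...", "passage 2 ...", "passage 3 ..."][:k]

def generate(prompt: str) -> str:
    """Placeholder seq2seq generator (BART in the paper)."""
    return "generated answer"  # stub

def rag_answer(query: str) -> str:
    passages = retrieve(query)
    # Condition generation on the query plus the retrieved (non-parametric) evidence.
    context = "\n".join(passages)
    prompt = f"question: {query}\ncontext: {context}\nanswer:"
    return generate(prompt)

print(rag_answer("Who introduced the FEVER dataset?"))
```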
ReAct: Synergizing Reasoning and Acting in Language Models
While large language models (LLMs) have demonstrated impressive capabilities across tasks in language understanding and interactive decision making, their abilities for reasoning (e.g. chain-of-thought prompting) and acting (e.g. action plan generation) have primarily been studied as separate topics.
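ReAct interleaves free-form reasoning traces with tool-using actions: the model emits a thought, then an action (e.g. a Wikipedia search), observes the result, and repeats until it emits a final answer. A minimal sketch of that loop follows; `llm_step` and `run_tool` are hypothetical placeholders, and the prompt format is only illustrative.

```python
def llm_step(transcript: str) -> str:
    """Placeholder LLM call returning the next 'Thought: ... / Action: ...' block."""
    return "Thought: I have enough information.\nAction: Finish[example answer]"  # stub

def run_tool(action_block: str) -> str:
    """Placeholder tool executor, e.g. Search[query] or Lookup[term] over Wikipedia."""
    return "observation text"  # stub

def react(question: str, max_steps: int = 5) -> str:
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = llm_step(transcript)
        transcript += step + "\n"
        # A Finish[...] action signals the final answer.
        if "Action: Finish[" in step:
            return step.split("Action: Finish[", 1)[1].rstrip("]")
        observation = run_tool(step)
        transcript += f"Observation: {observation}\n"
    return "no answer within step budget"

print(react("Is the claim 'Paris is the capital of France' supported?"))
```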
Self-RAG: Learning to Retrieve, Generate, and Critique through Self-Reflection
Our framework trains a single arbitrary LM that adaptively retrieves passages on-demand, and generates and reflects on retrieved passages and its own generations using special tokens, called reflection tokens.
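In Self-RAG, the reflection tokens are emitted by the model itself and control when to retrieve and how to critique candidate outputs. The sketch below only mimics that control flow with stub functions and uses the paper's token names (Retrieve, ISREL, ISSUP, ISUSE) as plain strings; in the real system these are special tokens decoded inside the model's output.

```python
def predict_retrieve(prompt: str) -> bool:
    """Placeholder: would the model emit a Retrieve reflection token for this prompt?"""
    return True  # stub

def retrieve(prompt: str, k: int = 3):
    return ["passage a", "passage b", "passage c"][:k]  # stub retriever

def generate_with_critique(prompt: str, passage: str) -> dict:
    """Placeholder: generate a segment plus critique scores (ISREL/ISSUP/ISUSE)."""
    return {"text": "candidate segment", "ISREL": 0.9, "ISSUP": 0.8, "ISUSE": 0.7}  # stub

def self_rag(prompt: str) -> str:
    if not predict_retrieve(prompt):
        # Retrieval judged unnecessary: generate directly.
        return generate_with_critique(prompt, passage="")["text"]
    # Generate one candidate per retrieved passage and rank them by critique scores.
    candidates = [generate_with_critique(prompt, p) for p in retrieve(prompt)]
    best = max(candidates, key=lambda c: c["ISREL"] + c["ISSUP"] + c["ISUSE"])
    return best["text"]

print(self_rag("Verify: 'The FEVER task has three labels.'"))
```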
Towards Debiasing Fact Verification Models
Fact verification requires validating a claim in the context of evidence.
KILT: a Benchmark for Knowledge Intensive Language Tasks
We test both task-specific and general baselines, evaluating downstream performance in addition to the ability of the models to provide provenance.
Evidence-based Factual Error Correction
This paper introduces the task of factual error correction: performing edits to a claim so that the generated rewrite is better supported by evidence.
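Concretely, the task takes a (possibly wrong) claim plus retrieved evidence and produces a minimally edited rewrite that the evidence supports. A schematic sketch is below; `rewrite_model` is a hypothetical placeholder standing in for a conditioned seq2seq rewriter, not the paper's system.

```python
from typing import List

def retrieve_evidence(claim: str, k: int = 3) -> List[str]:
    return ["evidence sentence 1", "evidence sentence 2"][:k]  # stub retriever

def rewrite_model(claim: str, evidence: List[str]) -> str:
    """Placeholder rewriter conditioned on the claim and its evidence."""
    return claim  # stub: a real model would return a corrected claim

def correct_claim(claim: str) -> str:
    evidence = retrieve_evidence(claim)
    # The rewrite should be supported by the evidence while staying close
    # to the original wording of the claim.
    return rewrite_model(claim, evidence)

print(correct_claim("Sunlight takes about 8 minutes to reach Mars."))
```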
Combining Fact Extraction and Verification with Neural Semantic Matching Networks
The increasing concern with misinformation has stimulated research efforts on automatic fact checking.
GEAR: Graph-based Evidence Aggregating and Reasoning for Fact Verification
Fact verification (FV) is a challenging task that requires retrieving relevant evidence from plain text and using that evidence to verify given claims.
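GEAR encodes claim–evidence pairs as nodes of an evidence graph, lets the nodes exchange information, and aggregates them for the final verdict. The sketch below is a strong simplification with toy vectors: the paper's evidence-propagation layers are omitted and only an attention-style aggregation over evidence nodes is shown; `encode` is a placeholder for the paper's BERT encoder.

```python
import math
from typing import List

def encode(claim: str, evidence: str) -> List[float]:
    """Placeholder encoder for a claim-evidence pair (BERT in the paper)."""
    return [0.1, 0.2, 0.3]  # stub fixed-size vector

def attention_aggregate(nodes: List[List[float]], query: List[float]) -> List[float]:
    """Dot-product attention over evidence node vectors."""
    scores = [sum(q * n for q, n in zip(query, node)) for node in nodes]
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(nodes[0])
    return [sum(w * node[i] for w, node in zip(weights, nodes)) for i in range(dim)]

def aggregate_evidence(claim: str, evidence_sents: List[str]) -> List[float]:
    # One graph node per claim-evidence pair; the claim encoding acts as the query.
    nodes = [encode(claim, e) for e in evidence_sents]
    claim_vec = encode(claim, claim)
    pooled = attention_aggregate(nodes, claim_vec)
    return pooled  # a classifier head over this vector would predict the label

print(aggregate_evidence("Some claim.", ["evidence one", "evidence two"]))
```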
End-to-End Bias Mitigation by Modelling Biases in Corpora
We experiment on large-scale natural language inference and fact verification benchmarks, evaluating on out-of-domain datasets that are specifically designed to assess the robustness of models against known biases in the training data.
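One combination strategy explored in this line of work is a product of experts: a weak, bias-only model (for fact verification, e.g. a claim-only model) is trained alongside the main model, and the training loss is taken on their combined prediction so the main model is not rewarded for what the bias model already captures. A minimal numeric sketch of the product-of-experts combination in log space, with purely illustrative values:

```python
import math
from typing import List

def log_softmax(logits: List[float]) -> List[float]:
    m = max(logits)
    log_z = m + math.log(sum(math.exp(x - m) for x in logits))
    return [x - log_z for x in logits]

def product_of_experts(main_logits: List[float], bias_logits: List[float]) -> List[float]:
    """Combine main and bias-only predictions: log p_poe ∝ log p_main + log p_bias.
    Training on the combined distribution discourages the main model from
    relying on the bias the weak model already encodes."""
    return log_softmax([m + b for m, b in zip(log_softmax(main_logits),
                                              log_softmax(bias_logits))])

# Toy 3-way example (SUPPORTED / REFUTED / NOT ENOUGH INFO), illustrative numbers.
main_logits = [2.0, 0.5, 0.1]
bias_logits = [1.5, 0.2, 0.2]   # e.g. from a claim-only bias model
print(product_of_experts(main_logits, bias_logits))
```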
Revealing the Importance of Semantic Retrieval for Machine Reading at Scale
In this work, we give general guidelines on system design for MRS by proposing a simple yet effective pipeline system with special consideration of hierarchical semantic retrieval at both the paragraph and sentence level, and of its potential effects on the downstream task.
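The pipeline retrieves at two granularities: paragraphs first, then sentences within the surviving paragraphs, before the downstream verification or QA model sees anything. Below is a schematic sketch of such a hierarchical retrieval step with placeholder scorers; the function names and scoring are illustrative, not the paper's implementation.

```python
from typing import Dict, List

def score_paragraph(query: str, paragraph: str) -> float:
    """Placeholder paragraph-level relevance scorer."""
    return 0.5  # stub

def score_sentence(query: str, sentence: str) -> float:
    """Placeholder sentence-level relevance scorer."""
    return 0.5  # stub

def hierarchical_retrieve(query: str, corpus: Dict[str, List[str]],
                          k_para: int = 5, k_sent: int = 5) -> List[str]:
    # Stage 1: keep only the top-k paragraphs for the query.
    ranked_paras = sorted(corpus.items(),
                          key=lambda kv: score_paragraph(query, " ".join(kv[1])),
                          reverse=True)[:k_para]
    # Stage 2: rank sentences only within the surviving paragraphs.
    sentences = [s for _, sents in ranked_paras for s in sents]
    return sorted(sentences, key=lambda s: score_sentence(query, s), reverse=True)[:k_sent]

corpus = {"doc1": ["sentence a", "sentence b"], "doc2": ["sentence c"]}
print(hierarchical_retrieve("some claim", corpus))
```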