Claim Verification
44 papers with code • 1 benchmark • 2 datasets
Most implemented papers
UKP-Athene: Multi-Sentence Textual Entailment for Claim Verification
The Fact Extraction and VERification (FEVER) shared task was launched to support the development of systems able to verify claims by extracting supporting or refuting facts from raw text.
Stance Prediction and Claim Verification: An Arabic Perspective
This work explores the application of textual entailment in news claim verification and stance prediction using a new corpus in Arabic.
Multi-Hop Fact Checking of Political Claims
We: 1) construct a small annotated dataset, PolitiHop, of evidence sentences for claim verification; 2) compare it to existing multi-hop datasets; and 3) study how to transfer knowledge from more extensive in- and out-of-domain resources to PolitiHop.
A Review on Fact Extraction and Verification
We study the fact checking problem, which aims to identify the veracity of a given claim.
Hierarchical Evidence Set Modeling for Automated Fact Extraction and Verification
Automated fact extraction and verification is a challenging task that involves finding relevant evidence sentences from a reliable corpus to verify the truthfulness of a claim.
LOREN: Logic-Regularized Reasoning for Interpretable Fact Verification
The final claim verification is based on all latent variables.
Self-Supervised Claim Identification for Automated Fact Checking
We propose a novel, attention-based self-supervised approach to identify "claim-worthy" sentences in a fake news article, an important first step in automated fact-checking.
QMUL-SDS at SCIVER: Step-by-Step Binary Classification for Scientific Claim Verification
As a result, our team is the No.
A DQN-based Approach to Finding Precise Evidences for Fact Verification
Computing precise evidences, namely minimal sets of sentences that support or refute a given claim, rather than larger evidence sets is crucial in fact verification (FV), since larger sets may contain conflicting sentences, some supporting the claim and others refuting it, thereby misleading FV.
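The notion of a "precise evidence" above can be illustrated without the paper's DQN machinery. The sketch below is a simple greedy baseline, assuming a hypothetical `supports(subset)` oracle that says whether a set of sentences decides the claim; it adds high-scoring sentences until the claim is decided, then prunes sentences whose removal still leaves a supporting set, yielding a minimal set.

```python
def minimal_evidence(sentences, scores, supports):
    """Greedy sketch of precise-evidence selection (illustrative, not the
    paper's DQN): grow a supporting set by score, then prune it to be minimal.

    sentences: list of candidate evidence sentences
    scores:    per-sentence relevance scores (higher = more relevant)
    supports:  hypothetical oracle; True if the given subset decides the claim
    """
    ranked = sorted(range(len(sentences)), key=lambda i: scores[i], reverse=True)
    chosen = []
    for i in ranked:
        chosen.append(i)
        if supports([sentences[j] for j in chosen]):
            break  # smallest greedy prefix that decides the claim
    # Prune: drop any sentence whose removal still leaves a supporting set,
    # so no conflicting or redundant sentence survives.
    for i in list(chosen):
        rest = [j for j in chosen if j != i]
        if rest and supports([sentences[j] for j in rest]):
            chosen = rest
    return [sentences[j] for j in chosen]
```

With a toy oracle that supports any subset containing a key sentence, the pruning step discards higher-scored but unnecessary sentences, which is exactly the "precise rather than large" behavior the abstract argues for.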
Abstract, Rationale, Stance: A Joint Model for Scientific Claim Verification
In addition, we enhance information exchange and impose constraints among the tasks by proposing a regularization term between the sentence attention scores of abstract retrieval and the estimated outputs of rationale selection.
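The cross-task regularization term can be sketched as an auxiliary loss that pulls the abstract-retrieval attention distribution over sentences toward the rationale-selection probabilities. The mean-squared form below is an illustrative assumption, not necessarily the paper's exact formulation.

```python
def consistency_regularizer(attention_scores, rationale_probs):
    """Illustrative cross-task regularizer (MSE form is an assumption):
    penalize disagreement between the normalized sentence attention from
    abstract retrieval and the rationale-selection probabilities.

    attention_scores: raw (non-negative) attention over sentences
    rationale_probs:  per-sentence rationale-selection probabilities
    """
    total = sum(attention_scores)
    attn = [a / total for a in attention_scores]  # normalize to a distribution
    n = len(attn)
    return sum((a - p) ** 2 for a, p in zip(attn, rationale_probs)) / n
```

Added to the main verification loss, such a term couples the two subtasks: sentences the retriever attends to are encouraged to be the same sentences the rationale selector picks, and vice versa.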