Claim Verification

44 papers with code • 1 benchmark • 2 datasets

Claim verification is the task of judging whether a given claim is supported or refuted by retrieved evidence, or whether the available evidence is insufficient to decide.

Most implemented papers

UKP-Athene: Multi-Sentence Textual Entailment for Claim Verification

UKPLab/fever-2018-team-athene WS 2018

The Fact Extraction and VERification (FEVER) shared task was launched to support the development of systems able to verify claims by extracting supporting or refuting facts from raw text.
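
The entailment framing is easy to sketch: treat an evidence sentence as the premise and the claim as the hypothesis, and map an NLI model's three classes onto FEVER's SUPPORTS / REFUTES / NOT ENOUGH INFO labels. A minimal sketch, assuming the off-the-shelf roberta-large-mnli checkpoint as a stand-in for a FEVER-tuned entailment model:

```python
# A minimal sketch, assuming roberta-large-mnli as a stand-in for a
# FEVER-tuned entailment model; the FEVER label mapping is illustrative.
from transformers import AutoModelForSequenceClassification, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained("roberta-large-mnli")
model = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

def verify(evidence: str, claim: str) -> str:
    """Label a claim against one evidence sentence (premise = evidence)."""
    inputs = tokenizer(evidence, claim, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    # roberta-large-mnli class order: contradiction, neutral, entailment
    return ["REFUTES", "NOT ENOUGH INFO", "SUPPORTS"][logits.argmax(-1).item()]

print(verify("Paris is the capital of France.", "France's capital is Paris."))
```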

Stance Prediction and Claim Verification: An Arabic Perspective

latynt/ans WS 2020

This work explores the application of textual entailment in news claim verification and stance prediction using a new corpus in Arabic.

Multi-Hop Fact Checking of Political Claims

copenlu/politihop 10 Sep 2020

We: 1) construct a small annotated dataset, PolitiHop, of evidence sentences for claim verification; 2) compare it to existing multi-hop datasets; and 3) study how to transfer knowledge from more extensive in- and out-of-domain resources to PolitiHop.
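
Multi-hop selection can be illustrated with a toy greedy loop: each hop extends the query with the evidence chosen so far, so the next pick can bridge to facts that do not overlap with the claim directly. The TF-IDF scorer and the select_evidence helper below are hypothetical stand-ins for the learned retrievers studied in the paper:

```python
# A toy greedy multi-hop selector: each hop picks the sentence most similar
# to the claim concatenated with the evidence chosen so far. TF-IDF cosine
# similarity is a hypothetical stand-in for a learned scorer.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def select_evidence(claim, sentences, hops=3):
    vec = TfidfVectorizer().fit(sentences + [claim])
    chain, query = [], claim
    for _ in range(hops):
        remaining = [s for s in sentences if s not in chain]
        if not remaining:
            break
        scores = cosine_similarity(vec.transform([query]),
                                   vec.transform(remaining))[0]
        best = remaining[scores.argmax()]
        chain.append(best)
        query = query + " " + best  # extend the query so the next hop can bridge
    return chain
```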

A Review on Fact Extraction and Verification

bekou/evidence_aware_nlp4if 6 Oct 2020

We study the fact checking problem, which aims to identify the veracity of a given claim.

Hierarchical Evidence Set Modeling for Automated Fact Extraction and Verification

ShyamSubramanian/HESM EMNLP 2020

Automated fact extraction and verification is a challenging task that involves finding relevant evidence sentences from a reliable corpus to verify the truthfulness of a claim.
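
A generic sketch (not the exact HESM architecture) of the core idea: pool a set of evidence-sentence embeddings into a single set-level vector with learned attention before combining it with the claim:

```python
# A generic attention-pooling sketch: collapse a variable-sized set of
# evidence-sentence embeddings into one set-level vector.
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, sent_embs):                              # (num_sents, dim)
        weights = torch.softmax(self.score(sent_embs), dim=0)  # (num_sents, 1)
        return (weights * sent_embs).sum(dim=0)                # (dim,)

pool = AttentionPool(dim=768)
evidence_set = torch.randn(5, 768)  # e.g., five sentence-encoder outputs
set_vector = pool(evidence_set)     # one vector to combine with the claim
```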

LOREN: Logic-Regularized Reasoning for Interpretable Fact Verification

jiangjiechen/loren 25 Dec 2020

LOREN decomposes verification of a claim to the phrase level, treating the veracity of each phrase as a latent variable; the final claim-level verdict is based on all of these latent variables.
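
The aggregation idea can be shown with an illustrative rule in the spirit of LOREN's logical constraints (label names and the exact rule are simplified): the claim is supported only if every phrase is supported, and refuted if any phrase is refuted:

```python
# Illustrative aggregation of phrase-level latent veracity labels into a
# claim-level verdict; the rule and label names simplify LOREN's logic.
def aggregate(phrase_labels):
    if any(label == "REFUTES" for label in phrase_labels):
        return "REFUTES"
    if all(label == "SUPPORTS" for label in phrase_labels):
        return "SUPPORTS"
    return "NOT ENOUGH INFO"

print(aggregate(["SUPPORTS", "REFUTES", "SUPPORTS"]))  # -> REFUTES
```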

Self-Supervised Claim Identification for Automated Fact Checking

architapathak/Self-Supervised-ClaimIdentification ICON 2020

We propose a novel, attention-based self-supervised approach to identify "claim-worthy" sentences in a fake news article, an important first step in automated fact-checking.
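
At a high level, an attention layer over sentence embeddings yields a ranking from which claim-worthy candidates can be read off; the sketch below uses an untrained layer and random embeddings, and omits the paper's self-supervised training signal:

```python
# Illustration only: rank sentences by the weight a document-level attention
# layer assigns them; high-attention sentences are claim-worthy candidates.
# The layer is untrained and the embeddings random; the paper's
# self-supervised signal is what would make these weights meaningful.
import torch
import torch.nn as nn

attn = nn.Linear(768, 1)            # hypothetical document-level attention
sent_embs = torch.randn(12, 768)    # one embedding per article sentence
weights = torch.softmax(attn(sent_embs).squeeze(-1), dim=0)
claim_worthy = weights.topk(3).indices.tolist()  # top-3 candidate sentences
print(claim_worthy)
```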

A DQN-based Approach to Finding Precise Evidences for Fact Verification

sysulic/dqn-fv ACL 2021

Computing precise evidence, namely minimal sets of sentences that support or refute a given claim, rather than larger evidence sets, is crucial in fact verification (FV), since larger sets may contain conflicting pieces, some of which support the claim while others refute it, thereby misleading FV.
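
Framed as sequential decision making, a Q-network scores adding each remaining candidate sentence (or stopping) given the current claim-plus-evidence state. The sketch below is schematic: encodings are random placeholders, and the reward and training loop are omitted:

```python
# Schematic sketch: a Q-network scores (state, action) pairs, where actions
# are candidate sentences plus a STOP action; epsilon-greedy selection shown.
# Encodings are random placeholders; reward and training loop are omitted.
import random
import torch
import torch.nn as nn

class QNet(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * dim, 128), nn.ReLU(),
                                 nn.Linear(128, 1))

    def forward(self, state, action):  # both (dim,)
        return self.mlp(torch.cat([state, action]))

def pick_action(qnet, state, candidates, epsilon=0.1):
    """Epsilon-greedy over candidates plus STOP (the last index)."""
    actions = candidates + [torch.zeros_like(state)]  # zero vector = STOP
    if random.random() < epsilon:
        return random.randrange(len(actions))
    q_values = torch.stack([qnet(state, a) for a in actions])
    return int(q_values.argmax())

qnet = QNet(dim=64)
state = torch.randn(64)                    # claim + evidence-so-far encoding
candidates = [torch.randn(64) for _ in range(10)]
idx = pick_action(qnet, state, candidates)
print("STOP" if idx == len(candidates) else f"add sentence {idx}")
```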

Abstract, Rationale, Stance: A Joint Model for Scientific Claim Verification

zhiweizhang97/arsjointmodel EMNLP 2021

In addition, we enhance the information exchange and constraints among tasks by proposing a regularization term between the sentence attention scores of abstract retrieval and the estimated outputs of rationale selection.
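
One plausible form of such a cross-task regularizer (the KL shape below is illustrative; see the paper and repo for the exact term) penalizes divergence between the retrieval module's sentence attention and the rationale selector's normalized sentence probabilities:

```python
# A sketch of one plausible cross-task regularizer: penalize divergence
# between retrieval attention and rationale-selection probabilities over the
# same sentences. The KL form is illustrative, not the paper's exact term.
import torch
import torch.nn.functional as F

def regularizer(attn_scores, rationale_logits):
    """Both args: raw per-sentence scores, shape (num_sentences,)."""
    log_p = F.log_softmax(attn_scores, dim=0)   # retrieval attention (log)
    q = F.softmax(rationale_logits, dim=0)      # rationale distribution
    return F.kl_div(log_p, q, reduction="sum")  # KL(q || p)

loss_reg = regularizer(torch.randn(8), torch.randn(8))
```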