The Fact Extraction and VERification Over Unstructured and Structured information (FEVEROUS) shared task asks participating systems to determine whether human-authored claims are Supported or Refuted based on evidence retrieved from Wikipedia (or NotEnoughInfo if the claim cannot be verified).
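The three-way verdict scheme can be sketched as follows. This is a minimal illustration of the label set and a plain label-accuracy metric, not the official FEVEROUS score, which additionally requires the retrieved evidence to cover a gold evidence set; all names here are illustrative.

```python
from enum import Enum

class Verdict(Enum):
    """The three FEVEROUS verdict labels."""
    SUPPORTED = "Supported"
    REFUTED = "Refuted"
    NOT_ENOUGH_INFO = "NotEnoughInfo"

def label_accuracy(predictions, gold):
    """Fraction of claims whose predicted verdict matches the gold label.
    Illustrative only: the shared task's headline metric also checks
    that the submitted evidence covers a gold evidence set."""
    assert len(predictions) == len(gold)
    correct = sum(p == g for p, g in zip(predictions, gold))
    return correct / len(gold)

preds = [Verdict.SUPPORTED, Verdict.REFUTED, Verdict.NOT_ENOUGH_INFO]
gold = [Verdict.SUPPORTED, Verdict.SUPPORTED, Verdict.NOT_ENOUGH_INFO]
print(label_accuracy(preds, gold))  # 2 of 3 verdicts match
```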
Fact verification has attracted considerable attention in the machine learning and natural language processing communities, as it is one of the key methods for detecting misinformation.
Claim verification is the task of predicting the veracity of written statements against evidence.
This paper introduces the task of factual error correction: performing edits to a claim so that the generated rewrite is better supported by evidence.
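The input/output shape of factual error correction can be illustrated with a toy sketch. The substitution mapping here stands in for the output of an upstream model that aligns claim tokens against evidence; the function name and the hand-supplied edit are hypothetical, not the paper's method.

```python
def correct_claim(claim: str, corrections: dict) -> str:
    """Apply token-level substitutions so the rewritten claim agrees with
    the evidence. In a real system `corrections` would be proposed by a
    model comparing the claim with retrieved evidence; here it is
    supplied by hand purely to show the task's interface."""
    tokens = claim.split()
    fixed = [corrections.get(t, t) for t in tokens]
    return " ".join(fixed)

claim = "The Eiffel Tower is located in Berlin ."
# A model consulting the evidence might propose replacing "Berlin":
print(correct_claim(claim, {"Berlin": "Paris"}))
# The Eiffel Tower is located in Paris .
```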
Fabio Petroni, Aleksandra Piktus, Angela Fan, Patrick Lewis, Majid Yazdani, Nicola De Cao, James Thorne, Yacine Jernite, Vladimir Karpukhin, Jean Maillard, Vassilis Plachouras, Tim Rocktäschel, Sebastian Riedel
We test both task-specific and general baselines, evaluating downstream performance in addition to the ability of the models to provide provenance.
The biases present in training datasets have been shown to affect models for sentence pair classification tasks such as natural language inference (NLI) and fact verification.
Automated fact verification has been progressing owing to advancements in modeling and availability of large datasets.
In this paper, we show that it is possible to generate token-level explanations for NLI without the need for training data explicitly annotated for this purpose.
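One generic way to obtain token-level explanations without explanation-annotated training data is erasure-based saliency: delete each token and measure how much the model's score changes. The sketch below uses a dummy scorer and is a baseline illustration of this family of techniques, not necessarily the method proposed in the paper.

```python
def occlusion_saliency(tokens, score_fn):
    """Token-level saliency by erasure: remove each token in turn and
    record how much the model's score drops. `score_fn` stands in for
    any trained NLI model's confidence in its predicted class; no
    token-level explanation annotations are needed."""
    base = score_fn(tokens)
    return [base - score_fn(tokens[:i] + tokens[i + 1:])
            for i in range(len(tokens))]

# Dummy scorer: pretends the model's confidence hinges only on "Paris".
score = lambda toks: 0.9 if "Paris" in toks else 0.2
saliency = occlusion_saliency(["The", "tower", "is", "in", "Paris"], score)
# The last token receives the highest saliency under this toy scorer.
```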
This paper describes a baseline for the second iteration of the Fact Extraction and VERification shared task (FEVER2.0) which explores the resilience of systems through adversarial evaluation.
The recently increased focus on misinformation has stimulated research in fact checking, the task of assessing the truthfulness of a claim.
Thus we believe that FEVER is a challenging testbed that will help stimulate progress on claim verification against textual sources.
In this paper we present our automated fact checking system demonstration which we developed in order to participate in the Fast and Furious Fact Check challenge.