29 papers with code • 1 benchmark • 1 dataset
Thus we believe that FEVER is a challenging testbed that will help stimulate progress on claim verification against textual sources.
The first four tasks compose the full pipeline of claim verification in social media: Task 1 on check-worthiness estimation, Task 2 on retrieving previously fact-checked claims, Task 3 on evidence retrieval, and Task 4 on claim verification.
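The four tasks above can be chained as a pipeline, each stage feeding the next. A minimal sketch, assuming pluggable per-stage models; all class and function names here are hypothetical, not the shared task's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    # Filled in by successive pipeline stages.
    check_worthy: bool = False
    matched_fact_checks: list = field(default_factory=list)
    evidence: list = field(default_factory=list)
    verdict: str = "NOT ENOUGH INFO"

def run_pipeline(claim, check_worthiness, match_fact_checks,
                 retrieve_evidence, verify):
    """Chain Tasks 1-4; each stage is a pluggable model (hypothetical)."""
    claim.check_worthy = check_worthiness(claim.text)
    if not claim.check_worthy:
        return claim  # skip claims not worth checking (Task 1 gate)
    claim.matched_fact_checks = match_fact_checks(claim.text)  # Task 2
    claim.evidence = retrieve_evidence(claim.text)             # Task 3
    claim.verdict = verify(claim.text, claim.evidence)         # Task 4
    return claim

# Toy stage implementations just to exercise the control flow.
claim = run_pipeline(
    Claim("The Eiffel Tower is in Berlin."),
    check_worthiness=lambda t: True,
    match_fact_checks=lambda t: [],
    retrieve_evidence=lambda t: ["The Eiffel Tower is in Paris."],
    verify=lambda t, ev: "REFUTED" if ev else "NOT ENOUGH INFO",
)
```

The early return after Task 1 reflects the usual design choice: downstream stages only run on check-worthy claims.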
Motivated by the promising performance of pre-trained language models, we investigate BERT in an evidence retrieval and claim verification pipeline for the FEVER fact extraction and verification challenge.
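A BERT-based verifier typically encodes the claim and each retrieved evidence sentence as one sequence, classifies each pair, then aggregates per-pair labels into a claim verdict. A hedged sketch with a stub standing in for the fine-tuned model; the aggregation rule here is one simple choice, not the paper's:

```python
def encode_pair(claim, evidence):
    """BERT-style sentence-pair input: both texts share one sequence so
    self-attention can compare them token by token."""
    return f"[CLS] {claim} [SEP] {evidence} [SEP]"

def verdict_from_evidence(claim, evidence_sents, classify):
    """Classify each claim-evidence pair, then aggregate to a claim label
    (simple precedence rule: any SUPPORTED wins, else any REFUTED)."""
    labels = [classify(encode_pair(claim, ev)) for ev in evidence_sents]
    if "SUPPORTED" in labels:
        return "SUPPORTED"
    if "REFUTED" in labels:
        return "REFUTED"
    return "NOT ENOUGH INFO"

# Stub classifier standing in for a fine-tuned BERT checkpoint.
stub = lambda pair: "REFUTED" if "100" in pair else "NOT ENOUGH INFO"
label = verdict_from_evidence(
    "Water boils at 50 degrees Celsius at sea level.",
    ["Water boils at 100 degrees Celsius at sea level."],
    stub,
)
```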
Multi-hop reasoning (i.e., reasoning across two or more documents) is a key ingredient for NLP models that leverage large corpora to exhibit broad knowledge.
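The core mechanism of multi-hop retrieval is that terms from one retrieved document expand the query for the next hop, letting a later hop reach documents the question alone would not surface. A toy sketch with plain term-overlap scoring (real systems use learned retrievers):

```python
def multi_hop(question_terms, corpus, hops=2):
    """Iterative retrieval: each retrieved doc's terms (e.g., bridge
    entities) are folded into the query before the next hop."""
    query, chain = set(question_terms), []
    for _ in range(hops):
        remaining = [d for d in corpus if d not in chain]
        if not remaining:
            break
        # Score by term overlap with the (growing) query; toy retriever.
        doc = max(remaining,
                  key=lambda d: len(query & set(d.lower().split())))
        chain.append(doc)
        query |= set(doc.lower().split())
    return chain

corpus = [
    "alice was born in springfield",
    "springfield is the capital of illinois",
    "bob likes turtles",
]
# "springfield" from hop 1 bridges to the capital document in hop 2.
chain = multi_hop({"alice", "capital"}, corpus)
```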
Our approach outperforms two competitive baselines on three scientific claim verification datasets, with particularly strong performance in zero/few-shot domain adaptation experiments.
We hypothesize that this is because the attention in CNNs has been mainly implemented as attentive pooling (i.e., it is applied to pooling) rather than as attentive convolution (i.e., it is integrated into convolution).
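To make the distinction concrete: attentive pooling leaves the convolution untouched and only reweights token representations when collapsing the sequence into one vector. A minimal sketch, assuming dot-product scoring against a query vector (attentive convolution would instead inject attention-derived context into each convolution window before the filter is applied):

```python
import math

def softmax(scores):
    """Normalize raw scores into attention weights summing to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def attentive_pooling(token_vectors, query):
    """Attentive pooling: attention only reweights tokens while
    summarizing the sequence; the feature maps are left as-is."""
    scores = [sum(t * q for t, q in zip(tok, query)) for tok in token_vectors]
    weights = softmax(scores)
    dim = len(token_vectors[0])
    return [sum(w * tok[i] for w, tok in zip(weights, token_vectors))
            for i in range(dim)]

# Toy feature maps (e.g., CNN outputs) for three tokens, dimension 2.
tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
# Query aligned with the first dimension: tokens 0 and 2 get more weight.
pooled = attentive_pooling(tokens, query=[2.0, 0.0])
```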
We develop TwoWingOS (two-wing optimization strategy), a system that, while identifying appropriate evidence for a claim, also determines whether or not the claim is supported by the evidence.
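Joint training of the two "wings" comes down to a combined objective: one term for per-sentence evidence identification, one for the claim's entailment label. A hedged sketch of such a joint loss (a hypothetical formulation in the spirit of the two-wing idea, not the paper's exact objective):

```python
import math

def bce(prob, label):
    """Binary cross-entropy for one evidence-sentence prediction."""
    eps = 1e-9
    return -(label * math.log(prob + eps)
             + (1 - label) * math.log(1 - prob + eps))

def two_wing_loss(evidence_probs, evidence_labels,
                  claim_probs, claim_label, alpha=0.5):
    """Wing 1 scores each candidate sentence as evidence; wing 2
    classifies the claim. Both losses are optimized together, so good
    evidence selection and correct verdicts reinforce each other."""
    evidence_loss = sum(bce(p, y)
                        for p, y in zip(evidence_probs, evidence_labels))
    evidence_loss /= len(evidence_probs)
    claim_loss = -math.log(claim_probs[claim_label] + 1e-9)
    return alpha * evidence_loss + (1 - alpha) * claim_loss

loss = two_wing_loss(
    evidence_probs=[0.9, 0.2, 0.1],   # wing 1: per-sentence scores
    evidence_labels=[1, 0, 0],
    claim_probs=[0.1, 0.8, 0.1],      # wing 2: e.g., SUP / REF / NEI
    claim_label=1,
)
```

With both wings close to the gold labels, as here, the joint loss is small; confident wrong predictions in either wing drive it up.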
The Fact Extraction and VERification (FEVER) shared task was launched to support the development of systems able to verify claims by extracting supporting or refuting facts from raw text.