24 papers with code • 2 benchmarks • 7 datasets
Fact verification, also called "fact checking", is the process of verifying factual claims stated in natural-language text against a database of known facts.
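The setup above can be sketched as a minimal toy example: a claim is checked against a small store of known facts and assigned one of the FEVER-style labels. Real systems replace the exact-match lookup with learned evidence retrieval and natural language inference; all names below are illustrative.

```python
# Toy fact store: (subject, relation) -> object. A real system would
# retrieve evidence from free text rather than a hand-built dictionary.
FACT_STORE = {
    ("Paris", "capital_of"): "France",
    ("Earth", "orbits"): "Sun",
}

def verify(subject: str, relation: str, obj: str) -> str:
    """Check a structured claim against the fact store."""
    known = FACT_STORE.get((subject, relation))
    if known is None:
        # No evidence either way.
        return "NOT ENOUGH INFO"
    return "SUPPORTED" if known == obj else "REFUTED"
```

For example, `verify("Paris", "capital_of", "Spain")` returns `REFUTED` because the store contradicts the claim, while a claim about an unknown subject yields `NOT ENOUGH INFO`.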
Building on that, we construct graph attention verification networks, which are designed to fuse different sources of evidence (verbalized program execution, program structures, and the original statements and tables) to make the final verification decision.
To this end, we construct a large-scale dataset called TabFact with 16k Wikipedia tables as the evidence for 118k human-annotated natural language statements, which are labeled as either ENTAILED or REFUTED.
Ranked #2 on Table-based Fact Verification on TabFact
Fact verification requires fine-grained natural language inference capability that finds subtle clues to identify claims that are syntactically and semantically correct but not well supported.
Ranked #1 on Fact Verification on FEVER
Fact verification (FV) is a challenging task that requires retrieving relevant evidence from plain text and using that evidence to verify given claims.
Ranked #2 on Fact Verification on FEVER
In this work, we give general guidelines on system design for MRS by proposing a simple yet effective pipeline system with special consideration on hierarchical semantic retrieval at both paragraph and sentence level, and their potential effects on the downstream task.
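The hierarchical retrieval idea can be sketched in two stages: rank paragraphs first, then rank sentences within the top paragraphs. Here simple token overlap stands in for the learned semantic scorers used in the actual pipeline, and all names are illustrative.

```python
def overlap_score(query: str, text: str) -> float:
    """Crude lexical proxy for a semantic relevance scorer."""
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / (len(q) or 1)

def hierarchical_retrieve(query, paragraphs, top_p=2, top_s=3):
    # Stage 1: paragraph-level retrieval keeps only the top paragraphs.
    ranked = sorted(paragraphs, key=lambda p: overlap_score(query, p),
                    reverse=True)[:top_p]
    # Stage 2: sentence-level retrieval within the selected paragraphs.
    sents = [s.strip() for p in ranked for s in p.split(".") if s.strip()]
    return sorted(sents, key=lambda s: overlap_score(query, s),
                  reverse=True)[:top_s]
```

Scoring sentences only inside the retrieved paragraphs narrows the candidate pool before the finer-grained (and more expensive) stage, which is the point of the hierarchical design.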
Ranked #38 on Question Answering on HotpotQA
The search can directly warn fake-news posters and online users (e.g., the posters' followers) about misinformation, discourage them from spreading fake news, and scale up verified content on social media.