UofA-Truth at Factify 2022 : Transformer And Transfer Learning Based Multi-Modal Fact-Checking

28 Jan 2022  ·  Abhishek Dhankar, Osmar R. Zaïane, Francois Bolduc ·

Identifying fake news is a very difficult task, especially when considering the multiple modes of conveying information through text, image, video and/or audio. We tackled the problem of automated misinformation/disinformation detection in multi-modal news sources (including text and images) through a simple yet effective approach in the FACTIFY shared task at De-Factify@AAAI2022. Our model achieved a weighted F1 score of 74.807%, the fourth best among all submissions. In this paper we explain our approach to the shared task.
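The weighted F1 score reported above is the support-weighted average of per-class F1 scores, so larger classes contribute proportionally more. A minimal sketch of that metric in plain Python is shown below; the shared task's exact evaluation script may differ in details such as tie-breaking or label handling, so treat this as an illustration, not the official scorer.

```python
from collections import Counter

def weighted_f1(y_true, y_pred):
    """Support-weighted F1: per-class F1 averaged with weights equal to
    each class's true-label count (its support). Illustrative sketch only."""
    support = Counter(y_true)
    total = 0.0
    for c in support:
        # Per-class true positives, false positives, false negatives.
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        prec = tp / (tp + fp) if (tp + fp) else 0.0
        rec = tp / (tp + fn) if (tp + fn) else 0.0
        f1 = 2 * prec * rec / (prec + rec) if (prec + rec) else 0.0
        total += support[c] * f1
    return total / len(y_true)
```

For example, with gold labels `["support", "support", "refute"]` and predictions `["support", "refute", "refute"]`, both classes get an F1 of 2/3, giving a weighted F1 of about 0.667. This is equivalent to scikit-learn's `f1_score(..., average="weighted")`.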
