Online users today are exposed to misleading and propagandistic news articles and media posts on a daily basis.
We present a text representation approach that combines different views (representations) of the same input through data fusion and attention strategies for ranking.
We present a multi-task learning model that leverages large amounts of textual information from existing datasets to improve stance prediction.
In particular, we introduce a novel contrastive language adaptation approach applied to memory networks, which ensures accurate alignment of stances in the source and target languages, and can effectively deal with the challenge of limited labeled data in the target language.
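The core idea of contrastive alignment can be illustrated with a toy loss: representations of source- and target-language examples with the same stance label are pulled together, while mismatched pairs are pushed apart up to a margin. This is a minimal sketch of a generic contrastive loss, not the paper's actual formulation; the vectors and margin are made up.

```python
# Toy contrastive loss over source/target stance representations.
# same_label pairs are pulled together (loss = squared distance);
# mismatched pairs are pushed apart with a hinge up to `margin`.

def sqdist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def contrastive_loss(src, tgt, same_label, margin=1.0):
    d = sqdist(src, tgt)
    if same_label:
        return d                          # pull matching-stance pairs together
    return max(0.0, margin - d) ** 2      # push mismatched pairs apart

# Same stance across languages: close vectors give a small loss.
print(contrastive_loss([0.1, 0.2], [0.1, 0.3], same_label=True))   # 0.01
# Different stances: the same close vectors are now penalized.
print(contrastive_loss([0.1, 0.2], [0.1, 0.3], same_label=False))
```

In training, such a loss would be added to the supervised stance objective so that the target-language encoder inherits structure from the source language even with few target labels.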
We study the problem of automatic fact-checking, paying special attention to the impact of contextual and discourse information.
We present FAKTA, a unified framework that integrates the components of a fact-checking process: document retrieval from media sources of varying reliability, stance detection of documents with respect to given claims, evidence extraction, and linguistic analysis.
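The component sequence above can be sketched as a toy pipeline. The sketch below stands in for the real system with plain word overlap; the function names, corpus, and scoring are all illustrative and are not FAKTA's API.

```python
# Toy fact-checking pipeline mirroring the component order:
# retrieval -> stance detection -> evidence extraction.
# All scoring is plain word overlap; the real system uses trained models.

def tokenize(text):
    return text.lower().split()

def retrieve(claim, corpus, k=2):
    """Rank documents by word overlap with the claim (stand-in for IR)."""
    overlap = lambda d: len(set(tokenize(claim)) & set(tokenize(d)))
    return sorted(corpus, key=overlap, reverse=True)[:k]

def stance(claim, doc):
    """Crude stance proxy: overlap count decides related vs. unrelated."""
    n = len(set(tokenize(claim)) & set(tokenize(doc)))
    return "discuss" if n >= 2 else "unrelated"

def evidence(claim, doc):
    """Return the document sentence most similar to the claim."""
    sents = [s.strip() for s in doc.split(".") if s.strip()]
    return max(sents, key=lambda s: len(set(tokenize(claim)) & set(tokenize(s))))

corpus = [
    "The moon landing happened in 1969. Apollo 11 carried three astronauts.",
    "Stock markets fell sharply today. Analysts blame interest rates.",
]
claim = "The moon landing happened in 1969"
docs = retrieve(claim, corpus, k=1)
print(stance(claim, docs[0]))    # discuss
print(evidence(claim, docs[0]))  # The moon landing happened in 1969
```

A production system would replace each overlap heuristic with a dedicated retrieval index and trained classifiers, but the data flow between the components is the same.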
We present Vector of Locally Aggregated Embeddings (VLAE) for effective and, ultimately, lossless representation of textual content.
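The name suggests a VLAD-style aggregation applied to word embeddings. Below is a minimal sketch of that general technique, assuming centroids are given; in practice they would come from clustering (e.g., k-means) over a large embedding collection, and the toy vectors here are invented. This is not the paper's exact algorithm.

```python
# VLAD-style aggregation: assign each word vector to its nearest centroid,
# accumulate residuals per centroid, concatenate, and L2-normalize.

def nearest(v, centroids):
    dists = [sum((a - b) ** 2 for a, b in zip(v, c)) for c in centroids]
    return dists.index(min(dists))

def vlad(vectors, centroids):
    dim = len(centroids[0])
    agg = [[0.0] * dim for _ in centroids]
    for v in vectors:
        k = nearest(v, centroids)
        for i in range(dim):
            agg[k][i] += v[i] - centroids[k][i]   # residual to assigned centroid
    flat = [x for row in agg for x in row]        # concatenate per-centroid sums
    norm = sum(x * x for x in flat) ** 0.5 or 1.0
    return [x / norm for x in flat]               # L2-normalize

centroids = [[0.0, 0.0], [1.0, 1.0]]
words = [[0.1, 0.0], [0.9, 1.1], [1.0, 0.9]]
rep = vlad(words, centroids)
print(len(rep))  # 4 = n_centroids * dim
```

Unlike averaging, the aggregated vector preserves how the words distribute around each centroid, so less information about the input is discarded.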
For subtask A, all systems improved over the majority-class baseline.
In this paper, we describe our submission to SemEval-2019 Task 4 on Hyperpartisan News Detection.
This paper studies the problem of stance detection which aims to predict the perspective (or stance) of a given document with respect to a given claim.
A reasonable approach for fact-checking a claim involves retrieving potentially relevant documents from different sources (e.g., news websites, social media, etc.).
We present a novel end-to-end memory network for stance detection, which jointly (i) predicts whether a document agrees with, disagrees with, discusses, or is unrelated to a given target claim, and (ii) extracts snippets of evidence for that prediction.
Ranked #5 on the FNC-1 fake news detection benchmark.
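The joint prediction can be illustrated with a memory-network-flavored toy: each document snippet is a memory slot, attention weights come from word overlap with the claim, and the most-attended slot is returned as evidence alongside the stance label. The cue-word lists and examples below are invented; the actual model learns all of this from data.

```python
import math

# Toy joint stance prediction + evidence extraction over memory slots.
# Labels match the four FNC-1 classes; scoring is hand-written, not learned.

CUES = {"agree": {"confirms", "true"}, "disagree": {"false", "denies"}}

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [x / s for x in e]

def predict(claim, snippets):
    cw = set(claim.lower().split())
    scores = [len(cw & set(s.lower().split())) for s in snippets]
    attn = softmax([float(s) for s in scores])
    evidence = snippets[attn.index(max(attn))]   # most-attended memory slot
    ev_words = set(evidence.lower().split())
    if max(scores) == 0:
        label = "unrelated"
    elif ev_words & CUES["disagree"]:
        label = "disagree"
    elif ev_words & CUES["agree"]:
        label = "agree"
    else:
        label = "discuss"
    return label, evidence

label, ev = predict(
    "vaccines cause autism",
    ["the study was retracted", "vaccines autism link is false denies expert"],
)
print(label)  # disagree
```

Returning the attended snippet gives the model a built-in explanation for its stance decision, which is the point of coupling the two tasks.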
Community Question Answering (cQA) forums are very popular nowadays, as they are an effective means for communities formed around particular topics to share information.
In real-world data, e.g., from Web forums, text is often contaminated with redundant or irrelevant content, which introduces noise into machine learning algorithms.