Search Results for author: Francisco Rangel

Found 10 papers, 3 papers with code

FakeFlow: Fake News Detection by Modeling the Flow of Affective Information

1 code implementation EACL 2021 Bilal Ghanem, Simone Paolo Ponzetto, Paolo Rosso, Francisco Rangel

We propose to model the flow of affective information in fake news articles using a neural architecture.

Fake News Detection
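The "flow of affective information" idea can be sketched as tracking emotion-lexicon scores across the segments of an article. The tiny lexicons and the uniform segmentation below are illustrative placeholders, not the paper's actual features or neural architecture:

```python
# Hedged sketch: segment an article and compute a coarse "affect flow",
# i.e. a per-segment sequence of emotion-lexicon proportions. The toy
# lexicons here are assumptions for illustration only.
FEAR_WORDS = {"danger", "threat", "panic", "crisis"}
JOY_WORDS = {"hope", "win", "celebrate", "relief"}

def affect_flow(text, n_segments=4):
    """Split `text` into roughly n_segments chunks and score each chunk
    against two toy emotion lexicons, yielding a (fear, joy) sequence
    across the article."""
    words = text.lower().split()
    size = max(1, len(words) // n_segments)
    flow = []
    for i in range(0, len(words), size):
        seg = words[i:i + size]
        fear = sum(w.strip(".,!?") in FEAR_WORDS for w in seg) / len(seg)
        joy = sum(w.strip(".,!?") in JOY_WORDS for w in seg) / len(seg)
        flow.append((round(fear, 2), round(joy, 2)))
    return flow
```

A sequence model over such per-segment features would then learn how affect evolves through real versus fabricated articles.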

A Low Dimensionality Representation for Language Variety Identification

1 code implementation 30 May 2017 Francisco Rangel, Marc Franco-Salvador, Paolo Rosso

We compare our LDR method with common state-of-the-art representations and show an increase in accuracy of ~35%.
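A low-dimensionality representation in this spirit can be sketched as follows: weight each vocabulary term by its relative frequency in each language variety's training texts, then summarize a document by a few statistics of its terms' weights per variety. The specific weights and statistics below are simplified assumptions, not the paper's exact LDR formulation:

```python
# Hedged sketch of an LDR-style representation: dimensionality is
# 2 * |varieties| (mean and population stdev per variety), regardless
# of vocabulary size.
from collections import Counter
from statistics import mean, pstdev

def term_weights(corpus_by_variety):
    """corpus_by_variety: {variety: [token, ...]}.
    Returns {term: {variety: relative frequency of term in variety}}."""
    counts = {v: Counter(toks) for v, toks in corpus_by_variety.items()}
    vocab = set().union(*counts.values())
    weights = {}
    for t in vocab:
        total = sum(c[t] for c in counts.values())
        weights[t] = {v: counts[v][t] / total for v in counts}
    return weights

def ldr_features(doc_tokens, weights, varieties):
    """Represent a document by (mean, stdev) of its known terms'
    weights for each variety."""
    feats = []
    for v in varieties:
        ws = [weights[t][v] for t in doc_tokens if t in weights]
        feats += [mean(ws), pstdev(ws)] if ws else [0.0, 0.0]
    return feats
```

The resulting fixed-size vector can be fed to any standard classifier, which is what makes the representation cheap compared with full bag-of-words features.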

Overview of AuTexTification at IberLEF 2023: Detection and Attribution of Machine-Generated Text in Multiple Domains

1 code implementation 20 Sep 2023 Areg Mikael Sarvazyan, José Ángel González, Marc Franco-Salvador, Francisco Rangel, Berta Chulvi, Paolo Rosso

This paper presents an overview of the AuTexTification shared task, held as part of IberLEF 2023, the Iberian Languages Evaluation Forum, within the framework of the SEPLN 2023 conference.

Attribute, Language Modelling +2

Cross-corpus Native Language Identification via Statistical Embedding

no code implementations WS 2018 Francisco Rangel, Paolo Rosso, Julian Brooke, Alexandra Uitdenbogerd

In this paper, we approach the task of native language identification in a realistic cross-corpus scenario where a model is trained with available data and has to predict the native language from data of a different corpus.

Cross-corpus Native Language Identification

An Emotional Analysis of False Information in Social Media and News Articles

no code implementations26 Aug 2019 Bilal Ghanem, Paolo Rosso, Francisco Rangel

Fake news is harmful because it is created to manipulate readers' opinions and beliefs.

Zero and Few-shot Learning for Author Profiling

no code implementations22 Apr 2022 Mara Chinea-Rios, Thomas Müller, Gretel Liz De la Peña Sarracén, Francisco Rangel, Marc Franco-Salvador

We find that entailment-based models outperform supervised text classifiers based on XLM-RoBERTa, and that we can reach 80% of the accuracy of previous approaches using less than 50% of the training data on average.

Few-Shot Learning
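Entailment-based zero-shot classification can be sketched as casting each profile label as a hypothesis and picking the label whose hypothesis a natural language inference (NLI) model finds most entailed by the text. The `nli_score` callable below is a stand-in for a real NLI model (e.g. one fine-tuned from XLM-RoBERTa); the hypothesis template is an illustrative assumption, not the paper's exact setup:

```python
# Hedged sketch of entailment-based zero-shot author profiling.
# `nli_score(premise, hypothesis)` is assumed to return an
# entailment score in [0, 1]; a toy scorer stands in for a real
# NLI model here.
def zero_shot_profile(text, labels, nli_score):
    """Return the label whose hypothesis gets the highest
    entailment score for the given text."""
    template = "This text was written by a {} author."
    scored = {lab: nli_score(text, template.format(lab)) for lab in labels}
    return max(scored, key=scored.get)
```

Because the label set lives entirely in the hypotheses, new profile dimensions can be added without retraining, which is what enables the zero- and few-shot setting.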
