Search Results for author: Jakub Simko

Found 11 papers, 7 papers with code

Effects of diversity incentives on sample diversity and downstream model performance in LLM-based text augmentation

1 code implementation • 12 Jan 2024 • Jan Cegin, Branislav Pecher, Jakub Simko, Ivan Srba, Maria Bielikova, Peter Brusilovsky

The latest generative large language models (LLMs) have found their application in data augmentation tasks, where small numbers of text samples are LLM-paraphrased and then used to fine-tune downstream models.

Text Augmentation
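
To make the setup concrete, the augmentation loop described in the abstract above can be sketched as follows. This is a minimal illustration, not the paper's released implementation: it assumes the OpenAI Python SDK (openai>=1.0), an illustrative model name, and a generic paraphrasing prompt, whereas the paper studies how different diversity incentives in such prompts affect sample diversity and downstream model performance.

```python
# Minimal sketch of LLM-based paraphrase augmentation (illustrative only).
# Assumes the OpenAI Python SDK (openai>=1.0) and OPENAI_API_KEY in the
# environment; the model name and prompt wording are hypothetical choices,
# not the paper's exact setup.
from openai import OpenAI

client = OpenAI()

def paraphrase(sample: str, n_variants: int = 3) -> list[str]:
    """Ask the LLM for n paraphrases of a single seed sample."""
    variants = []
    for _ in range(n_variants):
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model choice
            messages=[
                {"role": "system",
                 "content": "Paraphrase the user's sentence, keeping its meaning and label."},
                {"role": "user", "content": sample},
            ],
            temperature=1.0,  # higher temperature encourages lexically diverse outputs
        )
        variants.append(response.choices[0].message.content.strip())
    return variants

# Augment a small labelled seed set; each paraphrase inherits the seed label,
# and the combined set is then used to fine-tune a downstream classifier.
seed_set = [("The vaccine alters your DNA.", "false_claim")]
augmented = [(p, label) for text, label in seed_set for p in paraphrase(text)]
```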

Is it indeed bigger better? The comprehensive study of claim detection LMs applied for disinformation tackling

no code implementations • 10 Nov 2023 • Martin Hyben, Sebastian Kula, Ivan Srba, Robert Moro, Jakub Simko

This study compares the performance of (1) fine-tuned models and (2) extremely large language models on the task of check-worthy claim detection.

MULTITuDE: Large-Scale Multilingual Machine-Generated Text Detection Benchmark

1 code implementation • 20 Oct 2023 • Dominik Macko, Robert Moro, Adaku Uchendu, Jason Samuel Lucas, Michiharu Yamashita, Matúš Pikuliak, Ivan Srba, Thai Le, Dongwon Lee, Jakub Simko, Maria Bielikova

There is a lack of research into the capabilities of recent LLMs to generate convincing text in languages other than English and into the performance of detectors of machine-generated text in multilingual settings.

Benchmarking, Text Detection
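
For context on the detection task the benchmark targets, a common (and notably weak) baseline scores a text by its perplexity under a reference language model. The sketch below is a generic illustration only: it assumes Hugging Face transformers with English-only GPT-2 and an arbitrary threshold, and it is not one of the detectors MULTITuDE evaluates; the multilingual setting the benchmark focuses on would at minimum require multilingual models and per-language calibration.

```python
# Generic perplexity-based baseline for machine-generated text detection.
# Illustrative only: GPT-2 is English-only and the threshold is arbitrary.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of the text under GPT-2; lower values are (weak) evidence of machine generation."""
    enc = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        out = model(**enc, labels=enc["input_ids"])
    return torch.exp(out.loss).item()

def looks_machine_generated(text: str, threshold: float = 30.0) -> bool:
    # Hypothetical cut-off; in practice it must be tuned per language and domain.
    return perplexity(text) < threshold
```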

Automated, not Automatic: Needs and Practices in European Fact-checking Organizations as a basis for Designing Human-centered AI Systems

no code implementations • 22 Nov 2022 • Andrea Hrckova, Robert Moro, Ivan Srba, Jakub Simko, Maria Bielikova

Second, we have identified fact-checkers' needs and pains, focusing on so far unexplored dimensions and emphasizing the needs of fact-checkers from Central and Eastern Europe as well as from low-resource language groups; this has implications for the development of new resources (datasets) and for the focus of AI research in this domain.

Fact Checking

Auditing YouTube's Recommendation Algorithm for Misinformation Filter Bubbles

1 code implementation • 18 Oct 2022 • Ivan Srba, Robert Moro, Matus Tomlein, Branislav Pecher, Jakub Simko, Elena Stefancova, Michal Kompan, Andrea Hrckova, Juraj Podrouzek, Adrian Gavornik, Maria Bielikova

We also observe a sudden decrease of the misinformation filter bubble effect when misinformation-debunking videos are watched after misinformation-promoting videos, suggesting a strong contextuality of recommendations.

Misinformation

An Audit of Misinformation Filter Bubbles on YouTube: Bubble Bursting and Recent Behavior Changes

1 code implementation • 25 Mar 2022 • Matus Tomlein, Branislav Pecher, Jakub Simko, Ivan Srba, Robert Moro, Elena Stefancova, Michal Kompan, Andrea Hrckova, Juraj Podrouzek, Maria Bielikova

We present a study in which pre-programmed agents (acting as YouTube users) delve into misinformation filter bubbles by watching misinformation-promoting content (for various topics).

Misinformation
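
The agent-based audit methodology described above (scripted accounts that watch a predefined sequence of videos and record what gets recommended) can be skeletonized roughly as below. This is a hypothetical sketch, not the authors' released code (which is linked from the entry above): the choice of Selenium, the CSS selector, and the fixed sleep are assumptions, and YouTube's markup changes frequently.

```python
# Rough skeleton of a sock-puppet style audit agent: watch a scripted sequence
# of videos and record the recommendations shown alongside each one.
# Selector strings, sleep times, and video IDs are placeholder assumptions.
import time
from selenium import webdriver
from selenium.webdriver.common.by import By

WATCH_SEQUENCE = [
    "https://www.youtube.com/watch?v=VIDEO_ID_1",  # placeholder video IDs
    "https://www.youtube.com/watch?v=VIDEO_ID_2",
]

driver = webdriver.Chrome()
log = []

for url in WATCH_SEQUENCE:
    driver.get(url)
    time.sleep(60)  # crude stand-in for actually "watching" part of the video
    # Hypothetical selector for side-bar recommendations; adjust to current markup.
    recs = driver.find_elements(By.CSS_SELECTOR, "ytd-compact-video-renderer a#video-title")
    log.append({
        "watched": url,
        "recommended": [r.get_attribute("title") for r in recs],
    })

driver.quit()
# The logged recommendations would then be annotated (e.g. promoting / debunking /
# neutral) to quantify filter-bubble effects, as in the audits above.
```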

A Study of Fake News Reading and Annotating in Social Media Context

no code implementations • 26 Sep 2021 • Jakub Simko, Patrik Racsko, Matus Tomlein, Martin Hanakova, Robert Moro, Maria Bielikova

In this paper, we present an eye-tracking study in which we let 44 lay participants casually read through a social media feed containing posts with news articles, some of which were fake.

Misinformation
