Search Results for author: Maria Bielikova

Found 27 papers, 15 papers with code

Task Prompt Vectors: Effective Initialization through Multi-Task Soft-Prompt Transfer

no code implementations · 2 Aug 2024 · Robert Belanec, Simon Ostermann, Ivan Srba, Maria Bielikova

In this way, we provide a competitive alternative to state-of-the-art baselines by arithmetic addition of task prompt vectors from multiple tasks.
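The prompt-vector arithmetic referred to above can be sketched as a toy example. The function names, prompt shapes, and `scale` parameter below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def task_prompt_vector(tuned_prompt, init_prompt):
    # A task prompt vector: element-wise difference between the soft
    # prompt after fine-tuning on a task and the shared initialization.
    return tuned_prompt - init_prompt

def combined_init(init_prompt, vectors, scale=1.0):
    # Initialize a new soft prompt by adding task prompt vectors from
    # several source tasks back onto the shared initialization.
    return init_prompt + scale * np.sum(vectors, axis=0)

# Toy soft prompts: 10 virtual tokens x 16 embedding dims
rng = np.random.default_rng(0)
init = rng.normal(size=(10, 16))
tuned_a = init + rng.normal(scale=0.1, size=(10, 16))  # "tuned" on task A
tuned_b = init + rng.normal(scale=0.1, size=(10, 16))  # "tuned" on task B

v_a = task_prompt_vector(tuned_a, init)
v_b = task_prompt_vector(tuned_b, init)
multi_task_init = combined_init(init, [v_a, v_b])
```

The new soft prompt then serves as the starting point for fine-tuning on a target task.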

Language Modeling · Language Modelling

Fighting Randomness with Randomness: Mitigating Optimisation Instability of Fine-Tuning using Delayed Ensemble and Noisy Interpolation

1 code implementation · 18 Jun 2024 · Branislav Pecher, Jan Cegin, Robert Belanec, Jakub Simko, Ivan Srba, Maria Bielikova

We show that: 1) DENI outperforms the best performing mitigation strategy (Ensemble), while using only a fraction of its cost; 2) the mitigation strategies are beneficial for parameter-efficient fine-tuning (PEFT) methods, outperforming full fine-tuning in specific cases; and 3) combining DENI with data augmentation often leads to even more effective instability mitigation.
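A rough sketch of the idea behind a delayed ensemble with noisy interpolation: late in fine-tuning, cheap ensemble members are created by perturbing a single model's weights with noise, and the members are then interpolated back into one set of weights. The details below (noise scale, member count, plain averaging) are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def noisy_ensemble(weights, n_members=5, noise_scale=0.01, seed=0):
    # Create ensemble members by perturbing one model's weights with
    # Gaussian noise -- far cheaper than training n_members models.
    rng = np.random.default_rng(seed)
    return [weights + rng.normal(scale=noise_scale, size=weights.shape)
            for _ in range(n_members)]

def interpolate(members):
    # Noisy interpolation: average the perturbed members back into a
    # single set of weights, smoothing out run-specific instability.
    return np.mean(members, axis=0)

w = np.ones(4)                 # stand-in for fine-tuned parameters
members = noisy_ensemble(w)
w_final = interpolate(members)
```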

Computational Efficiency · Data Augmentation · +3

On Sensitivity of Learning with Limited Labelled Data to the Effects of Randomness: Impact of Interactions and Systematic Choices

1 code implementation · 20 Feb 2024 · Branislav Pecher, Ivan Srba, Maria Bielikova

To measure the true effects of an individual randomness factor, our method mitigates the effects of other factors and observes how the performance varies across multiple runs.
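The measurement scheme described above can be sketched as follows: fix the other randomness factors, vary only the investigated factor's seed, and average the resulting score spreads. The toy objective and seed handling are hypothetical stand-ins for real training runs:

```python
import random
import statistics

def factor_effect(train_eval, factor_seeds, other_seeds):
    # For each fixed configuration of the other randomness factors,
    # vary only the investigated factor's seed and record the spread
    # of scores; average the spreads to estimate the factor's effect.
    spreads = []
    for other in other_seeds:
        scores = [train_eval(factor_seed=f, other_seed=other)
                  for f in factor_seeds]
        spreads.append(statistics.stdev(scores))
    return statistics.mean(spreads)

# Toy stand-in for a training run: the other factor shifts the mean,
# the investigated factor adds noise around it.
def toy_train_eval(factor_seed, other_seed):
    rng = random.Random(factor_seed * 1000 + other_seed)
    return 0.80 + 0.01 * other_seed + rng.gauss(0, 0.02)

effect = factor_effect(toy_train_eval, factor_seeds=range(10),
                       other_seeds=range(5))
```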

In-Context Learning · Meta-Learning · +2

Comparing Specialised Small and General Large Language Models on Text Classification: 100 Labelled Samples to Achieve Break-Even Performance

1 code implementation · 20 Feb 2024 · Branislav Pecher, Ivan Srba, Maria Bielikova

When performance variance is taken into consideration, the number of required labels increases on average by 100-200% and even up to 1500% in specific cases.
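The break-even comparison can be illustrated with a small worked example. The accuracy numbers below are made up; only the shape of the comparison (mean vs. variance-pessimistic estimate against an LLM baseline) follows the abstract:

```python
def break_even(label_counts, mean_acc, std_acc, llm_acc, use_variance=False):
    # Smallest number of labels at which the specialised model matches
    # the LLM baseline; with use_variance=True, the pessimistic estimate
    # (mean - std) must match, which pushes the break-even point up.
    for n, mu, sd in zip(label_counts, mean_acc, std_acc):
        score = mu - sd if use_variance else mu
        if score >= llm_acc:
            return n
    return None

labels = [10, 50, 100, 200, 500]
mean   = [0.70, 0.78, 0.83, 0.86, 0.88]
std    = [0.06, 0.05, 0.04, 0.02, 0.01]

n_mean = break_even(labels, mean, std, llm_acc=0.82)                      # 100
n_var  = break_even(labels, mean, std, llm_acc=0.82, use_variance=True)   # 200
```

In this fabricated example, accounting for variance doubles the required labels (a 100% increase), consistent with the range the abstract reports.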

In-Context Learning · Language Modeling · +4

Automatic Combination of Sample Selection Strategies for Few-Shot Learning

no code implementations · 5 Feb 2024 · Branislav Pecher, Ivan Srba, Maria Bielikova, Joaquin Vanschoren

In few-shot learning, such as meta-learning, few-shot fine-tuning, or in-context learning, the limited number of samples used to train a model has a significant impact on the overall success.

Few-Shot Learning · In-Context Learning

Authorship Obfuscation in Multilingual Machine-Generated Text Detection

2 code implementations · 15 Jan 2024 · Dominik Macko, Robert Moro, Adaku Uchendu, Ivan Srba, Jason Samuel Lucas, Michiharu Yamashita, Nafis Irtiza Tripto, Dongwon Lee, Jakub Simko, Maria Bielikova

The high-quality text generation capability of recent Large Language Models (LLMs) causes concerns about their misuse (e.g., in the massive generation/spread of disinformation).

Adversarial Robustness · Benchmarking · +3

Effects of diversity incentives on sample diversity and downstream model performance in LLM-based text augmentation

1 code implementation · 12 Jan 2024 · Jan Cegin, Branislav Pecher, Jakub Simko, Ivan Srba, Maria Bielikova, Peter Brusilovsky

The latest generative large language models (LLMs) have found their application in data augmentation tasks, where small numbers of text samples are LLM-paraphrased and then used to fine-tune downstream models.

Diversity · Text Augmentation

A Survey on Stability of Learning with Limited Labelled Data and its Sensitivity to the Effects of Randomness

no code implementations · 2 Dec 2023 · Branislav Pecher, Ivan Srba, Maria Bielikova

In this survey, we provide a comprehensive overview of 415 papers addressing the effects of randomness on the stability of learning with limited labelled data.

Few-Shot Learning · In-Context Learning · +2

Disinformation Capabilities of Large Language Models

1 code implementation · 15 Nov 2023 · Ivan Vykopal, Matúš Pikuliak, Ivan Srba, Robert Moro, Dominik Macko, Maria Bielikova

Automated disinformation generation is often listed as an important risk associated with large language models (LLMs).

MULTITuDE: Large-Scale Multilingual Machine-Generated Text Detection Benchmark

1 code implementation · 20 Oct 2023 · Dominik Macko, Robert Moro, Adaku Uchendu, Jason Samuel Lucas, Michiharu Yamashita, Matúš Pikuliak, Ivan Srba, Thai Le, Dongwon Lee, Jakub Simko, Maria Bielikova

There is a lack of research into the capabilities of recent LLMs to generate convincing text in languages other than English, and into the performance of detectors of machine-generated text in multilingual settings.

Benchmarking · de-en · +1

FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare

no code implementations · 11 Aug 2023 · Karim Lekadir, Aasa Feragen, Abdul Joseph Fofanah, Alejandro F Frangi, Alena Buyx, Anais Emelie, Andrea Lara, Antonio R Porras, An-Wen Chan, Arcadi Navarro, Ben Glocker, Benard O Botwe, Bishesh Khanal, Brigit Beger, Carol C Wu, Celia Cintas, Curtis P Langlotz, Daniel Rueckert, Deogratias Mzurikwao, Dimitrios I Fotiadis, Doszhan Zhussupov, Enzo Ferrante, Erik Meijering, Eva Weicken, Fabio A González, Folkert W Asselbergs, Fred Prior, Gabriel P Krestin, Gary Collins, Geletaw S Tegenaw, Georgios Kaissis, Gianluca Misuraca, Gianna Tsakou, Girish Dwivedi, Haridimos Kondylakis, Harsha Jayakody, Henry C Woodruf, Horst Joachim Mayer, Hugo JWL Aerts, Ian Walsh, Ioanna Chouvarda, Irène Buvat, Isabell Tributsch, Islem Rekik, James Duncan, Jayashree Kalpathy-Cramer, Jihad Zahir, Jinah Park, John Mongan, Judy W Gichoya, Julia A Schnabel, Kaisar Kushibar, Katrine Riklund, Kensaku MORI, Kostas Marias, Lameck M Amugongo, Lauren A Fromont, Lena Maier-Hein, Leonor Cerdá Alberich, Leticia Rittner, Lighton Phiri, Linda Marrakchi-Kacem, Lluís Donoso-Bach, Luis Martí-Bonmatí, M Jorge Cardoso, Maciej Bobowicz, Mahsa Shabani, Manolis Tsiknakis, Maria A Zuluaga, Maria Bielikova, Marie-Christine Fritzsche, Marina Camacho, Marius George Linguraru, Markus Wenzel, Marleen de Bruijne, Martin G Tolsgaard, Marzyeh Ghassemi, Md Ashrafuzzaman, Melanie Goisauf, Mohammad Yaqub, Mónica Cano Abadía, Mukhtar M E Mahmoud, Mustafa Elattar, Nicola Rieke, Nikolaos Papanikolaou, Noussair Lazrak, Oliver Díaz, Olivier Salvado, Oriol Pujol, Ousmane Sall, Pamela Guevara, Peter Gordebeke, Philippe Lambin, Pieta Brown, Purang Abolmaesumi, Qi Dou, Qinghua Lu, Richard Osuala, Rose Nakasi, S Kevin Zhou, Sandy Napel, Sara Colantonio, Shadi Albarqouni, Smriti Joshi, Stacy Carter, Stefan Klein, Steffen E Petersen, Susanna Aussó, Suyash Awate, Tammy Riklin Raviv, Tessa Cook, Tinashe E M Mutsvangwa, Wendy A Rogers, Wiro J Niessen, Xènia Puig-Bosch, Yi Zeng, Yunusa G Mohammed, Yves Saint James Aquino, Zohaib Salahuddin, Martijn P A Starmans

This work describes the FUTURE-AI guideline as the first international consensus framework for guiding the development and deployment of trustworthy AI tools in healthcare.

Fairness

Eye Tracking as a Source of Implicit Feedback in Recommender Systems: A Preliminary Analysis

no code implementations · 12 May 2023 · Santiago de Leon-Martinez, Robert Moro, Maria Bielikova

Eye tracking in recommender systems can provide an additional source of implicit feedback, while helping to evaluate other sources of feedback.
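As a minimal illustration of turning gaze data into an implicit feedback signal; the mapping, cap value, and function name are hypothetical assumptions, not taken from the paper:

```python
def implicit_rating(fixations_ms, cap_ms=2000.0):
    # Map the total fixation time a user spent on an item to a
    # [0, 1] implicit feedback signal, capped at cap_ms.
    total = sum(fixations_ms)
    return min(total / cap_ms, 1.0)

# Two fixations totalling 800 ms -> weak positive signal
signal = implicit_rating([300, 500])  # 0.4
```

Such a signal could then feed a collaborative filtering model alongside clicks and ratings.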

Collaborative Filtering · Movie Recommendation · +1

Searching for Discriminative Words in Multidimensional Continuous Feature Space

no code implementations · 26 Nov 2022 · Marius Sajgalik, Michal Barla, Maria Bielikova

We demonstrate the effectiveness of our approach by achieving state-of-the-art results on the text categorisation task using just a small number of extracted keywords.

Part-Of-Speech Tagging

Auditing YouTube's Recommendation Algorithm for Misinformation Filter Bubbles

1 code implementation · 18 Oct 2022 · Ivan Srba, Robert Moro, Matus Tomlein, Branislav Pecher, Jakub Simko, Elena Stefancova, Michal Kompan, Andrea Hrckova, Juraj Podrouzek, Adrian Gavornik, Maria Bielikova

We also observe a sudden decrease of misinformation filter bubble effect when misinformation debunking videos are watched after misinformation promoting videos, suggesting a strong contextuality of recommendations.

Misinformation

An Audit of Misinformation Filter Bubbles on YouTube: Bubble Bursting and Recent Behavior Changes

1 code implementation · 25 Mar 2022 · Matus Tomlein, Branislav Pecher, Jakub Simko, Ivan Srba, Robert Moro, Elena Stefancova, Michal Kompan, Andrea Hrckova, Juraj Podrouzek, Maria Bielikova

We present a study in which pre-programmed agents (acting as YouTube users) delve into misinformation filter bubbles by watching misinformation promoting content (for various topics).

Misinformation

Exploring Customer Price Preference and Product Profit Role in Recommender Systems

no code implementations · 13 Mar 2022 · Michal Kompan, Peter Gaspar, Jakub Macina, Matus Cimerman, Maria Bielikova

We propose an adjustment of a predicted ranking for score-based recommender systems and explore the effect of the profit and customers' price preferences on two industry datasets from the fashion domain.
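One simple form such a ranking adjustment could take is a linear blend of predicted relevance and normalised item profit. This is a sketch; the `alpha` weight and min-max profit normalisation are illustrative assumptions, not the paper's formula:

```python
def adjust_scores(relevance, profit, alpha=0.3):
    # Blend predicted relevance with min-max-normalised item profit;
    # alpha controls how much profit influences the final ranking.
    lo, hi = min(profit), max(profit)
    norm = [(p - lo) / (hi - lo) if hi > lo else 0.0 for p in profit]
    return [(1 - alpha) * r + alpha * p for r, p in zip(relevance, norm)]

items  = ["a", "b", "c"]
rel    = [0.9, 0.8, 0.7]
profit = [1.0, 5.0, 10.0]

scores = adjust_scores(rel, profit)
ranking = [i for _, i in sorted(zip(scores, items), reverse=True)]
```

With these toy numbers the high-profit item "c" overtakes the more relevant but low-profit "a", showing how the trade-off plays out.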

Recommendation Systems

A Study of Fake News Reading and Annotating in Social Media Context

no code implementations · 26 Sep 2021 · Jakub Simko, Patrik Racsko, Matus Tomlein, Martin Hanakova, Robert Moro, Maria Bielikova

In this paper, we present an eye-tracking study in which we let 44 lay participants casually read through a social media feed containing posts with news articles, some of which were fake.

Misinformation

The Cold-start Problem: Minimal Users' Activity Estimation

no code implementations · 31 May 2021 · Juraj Visnovsky, Ondrej Kassak, Michal Kompan, Maria Bielikova

The cold-start problem, which arises upon a new user's arrival, is one of the fundamental problems in today's recommender approaches.

Clustering
