no code implementations • 28 Oct 2024 • Ivan Srba, Olesya Razuvayevskaya, João A. Leite, Robert Moro, Ipek Baris Schlicht, Sara Tonelli, Francisco Moreno García, Santiago Barrio Lottmann, Denis Teyssou, Valentin Porcellini, Carolina Scarton, Kalina Bontcheva, Maria Bielikova
In the current era of social media and generative AI, the ability to automatically assess the credibility of online social media content is of tremendous importance.
1 code implementation • 14 Oct 2024 • Jan Cegin, Branislav Pecher, Jakub Simko, Ivan Srba, Maria Bielikova, Peter Brusilovsky
We evaluate this in terms of in-distribution and out-of-distribution classifier performance.
no code implementations • 2 Aug 2024 • Robert Belanec, Simon Ostermann, Ivan Srba, Maria Bielikova
In this way, we provide a competitive alternative to state-of-the-art baselines by arithmetic addition of task prompt vectors from multiple tasks.
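The arithmetic described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes a "task prompt vector" is the difference between a soft prompt tuned on a task and a shared initialization, with all names and dimensions chosen for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a shared soft-prompt initialization and two prompts
# tuned on different tasks (here simulated with small random offsets).
prompt_init = rng.normal(size=(10, 768))                              # shared init
prompt_task_a = prompt_init + rng.normal(scale=0.1, size=(10, 768))   # tuned on task A
prompt_task_b = prompt_init + rng.normal(scale=0.1, size=(10, 768))   # tuned on task B

# A task prompt vector is the delta between the tuned prompt and the init.
vec_a = prompt_task_a - prompt_init
vec_b = prompt_task_b - prompt_init

# Arithmetic addition of task prompt vectors yields a multi-task prompt.
prompt_multi = prompt_init + vec_a + vec_b
```

Because the vectors are plain tensors, combining tasks reduces to element-wise addition on top of the shared initialization.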
1 code implementation • 18 Jun 2024 • Branislav Pecher, Jan Cegin, Robert Belanec, Jakub Simko, Ivan Srba, Maria Bielikova
We show that: 1) DENI outperforms the best performing mitigation strategy (Ensemble), while using only a fraction of its cost; 2) the mitigation strategies are beneficial for parameter-efficient fine-tuning (PEFT) methods, outperforming full fine-tuning in specific cases; and 3) combining DENI with data augmentation often leads to even more effective instability mitigation.
1 code implementation • 20 Feb 2024 • Branislav Pecher, Ivan Srba, Maria Bielikova
To measure the true effects of an individual randomness factor, our method mitigates the effects of other factors and observes how the performance varies across multiple runs.
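The isolation idea above can be sketched in a few lines. This is a hedged illustration with a mock training function, not the paper's method: to measure one randomness factor (say, data ordering), the other factors (say, model initialization) are held fixed while the investigated factor varies across runs.

```python
import random
import statistics

def train_and_evaluate(data_order_seed: int, init_seed: int) -> float:
    """Stand-in for a real training run; returns a mock accuracy that
    depends on both randomness factors (hypothetical, for illustration)."""
    rng = random.Random(data_order_seed * 1000 + init_seed)
    return 0.8 + rng.uniform(-0.05, 0.05)

def isolate_factor(factor_seeds, fixed_init_seed=0):
    # Fix the other randomness factors (initialization) and vary only the
    # investigated one (data ordering), then observe how performance varies.
    scores = [train_and_evaluate(s, fixed_init_seed) for s in factor_seeds]
    return statistics.mean(scores), statistics.stdev(scores)

mean_acc, std_acc = isolate_factor(range(10))
```

The standard deviation across such runs then attributes the observed variance to the single varied factor rather than to their interaction.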
1 code implementation • 20 Feb 2024 • Branislav Pecher, Ivan Srba, Maria Bielikova
When performance variance is taken into consideration, the number of required labels increases on average by $100 - 200\%$ and even up to $1500\%$ in specific cases.
no code implementations • 5 Feb 2024 • Branislav Pecher, Ivan Srba, Maria Bielikova, Joaquin Vanschoren
In few-shot learning, such as meta-learning, few-shot fine-tuning, or in-context learning, the limited number of samples used to train a model has a significant impact on the overall success.
2 code implementations • 15 Jan 2024 • Dominik Macko, Robert Moro, Adaku Uchendu, Ivan Srba, Jason Samuel Lucas, Michiharu Yamashita, Nafis Irtiza Tripto, Dongwon Lee, Jakub Simko, Maria Bielikova
The high-quality text generation capability of recent Large Language Models (LLMs) causes concerns about their misuse (e.g., in the massive generation and spread of disinformation).
1 code implementation • 12 Jan 2024 • Jan Cegin, Branislav Pecher, Jakub Simko, Ivan Srba, Maria Bielikova, Peter Brusilovsky
The latest generative large language models (LLMs) have found their application in data augmentation tasks, where small numbers of text samples are LLM-paraphrased and then used to fine-tune downstream models.
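The augmentation loop described above can be sketched as follows. This is a minimal, hypothetical illustration: `toy_paraphrase` is a deterministic stand-in for the LLM call, and all names are chosen for the example.

```python
def augment_with_paraphrases(samples, paraphrase_fn, n_per_sample=2):
    """Extend a small labelled set with label-preserving paraphrases."""
    augmented = list(samples)
    for text, label in samples:
        for variant in paraphrase_fn(text, n_per_sample):
            if variant != text:  # drop verbatim copies of the original
                augmented.append((variant, label))
    return augmented

# Stand-in paraphraser for illustration; a real pipeline would prompt an LLM.
def toy_paraphrase(text, n):
    return [f"In other words: {text}"][:n]

data = [("the battery lasts long", "positive")]
result = augment_with_paraphrases(data, toy_paraphrase)
```

The augmented set (originals plus paraphrases) is then used to fine-tune the downstream classifier.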
no code implementations • 2 Dec 2023 • Branislav Pecher, Ivan Srba, Maria Bielikova
In this survey, we provide a comprehensive overview of 415 papers addressing the effects of randomness on the stability of learning with limited labelled data.
1 code implementation • 15 Nov 2023 • Ivan Vykopal, Matúš Pikuliak, Ivan Srba, Robert Moro, Dominik Macko, Maria Bielikova
Automated disinformation generation is often listed as an important risk associated with large language models (LLMs).
1 code implementation • 20 Oct 2023 • Dominik Macko, Robert Moro, Adaku Uchendu, Jason Samuel Lucas, Michiharu Yamashita, Matúš Pikuliak, Ivan Srba, Thai Le, Dongwon Lee, Jakub Simko, Maria Bielikova
There is a lack of research into the capabilities of recent LLMs to generate convincing text in languages other than English and into the performance of detectors of machine-generated text in multilingual settings.
no code implementations • 11 Aug 2023 • Karim Lekadir, Aasa Feragen, Abdul Joseph Fofanah, Alejandro F Frangi, Alena Buyx, Anais Emelie, Andrea Lara, Antonio R Porras, An-Wen Chan, Arcadi Navarro, Ben Glocker, Benard O Botwe, Bishesh Khanal, Brigit Beger, Carol C Wu, Celia Cintas, Curtis P Langlotz, Daniel Rueckert, Deogratias Mzurikwao, Dimitrios I Fotiadis, Doszhan Zhussupov, Enzo Ferrante, Erik Meijering, Eva Weicken, Fabio A González, Folkert W Asselbergs, Fred Prior, Gabriel P Krestin, Gary Collins, Geletaw S Tegenaw, Georgios Kaissis, Gianluca Misuraca, Gianna Tsakou, Girish Dwivedi, Haridimos Kondylakis, Harsha Jayakody, Henry C Woodruf, Horst Joachim Mayer, Hugo JWL Aerts, Ian Walsh, Ioanna Chouvarda, Irène Buvat, Isabell Tributsch, Islem Rekik, James Duncan, Jayashree Kalpathy-Cramer, Jihad Zahir, Jinah Park, John Mongan, Judy W Gichoya, Julia A Schnabel, Kaisar Kushibar, Katrine Riklund, Kensaku MORI, Kostas Marias, Lameck M Amugongo, Lauren A Fromont, Lena Maier-Hein, Leonor Cerdá Alberich, Leticia Rittner, Lighton Phiri, Linda Marrakchi-Kacem, Lluís Donoso-Bach, Luis Martí-Bonmatí, M Jorge Cardoso, Maciej Bobowicz, Mahsa Shabani, Manolis Tsiknakis, Maria A Zuluaga, Maria Bielikova, Marie-Christine Fritzsche, Marina Camacho, Marius George Linguraru, Markus Wenzel, Marleen de Bruijne, Martin G Tolsgaard, Marzyeh Ghassemi, Md Ashrafuzzaman, Melanie Goisauf, Mohammad Yaqub, Mónica Cano Abadía, Mukhtar M E Mahmoud, Mustafa Elattar, Nicola Rieke, Nikolaos Papanikolaou, Noussair Lazrak, Oliver Díaz, Olivier Salvado, Oriol Pujol, Ousmane Sall, Pamela Guevara, Peter Gordebeke, Philippe Lambin, Pieta Brown, Purang Abolmaesumi, Qi Dou, Qinghua Lu, Richard Osuala, Rose Nakasi, S Kevin Zhou, Sandy Napel, Sara Colantonio, Shadi Albarqouni, Smriti Joshi, Stacy Carter, Stefan Klein, Steffen E Petersen, Susanna Aussó, Suyash Awate, Tammy Riklin Raviv, Tessa Cook, Tinashe E M Mutsvangwa, Wendy A Rogers, Wiro J Niessen, Xènia Puig-Bosch, Yi Zeng, Yunusa G Mohammed, Yves Saint James Aquino, Zohaib Salahuddin, Martijn P A Starmans
This work describes the FUTURE-AI guideline as the first international consensus framework for guiding the development and deployment of trustworthy AI tools in healthcare.
1 code implementation • 13 May 2023 • Matúš Pikuliak, Ivan Srba, Robert Moro, Timo Hromadka, Timotej Smolen, Martin Melisek, Ivan Vykopal, Jakub Simko, Juraj Podrouzek, Maria Bielikova
Fact-checkers are often hampered by the sheer amount of online content that needs to be fact-checked.
no code implementations • 12 May 2023 • Santiago de Leon-Martinez, Robert Moro, Maria Bielikova
Eye tracking in recommender systems can provide an additional source of implicit feedback, while helping to evaluate other sources of feedback.
no code implementations • 26 Nov 2022 • Marius Sajgalik, Michal Barla, Maria Bielikova
We demonstrate the effectiveness of our approach by achieving state-of-the-art results on the text categorisation task using just a small number of extracted keywords.
no code implementations • 22 Nov 2022 • Andrea Hrckova, Robert Moro, Ivan Srba, Jakub Simko, Maria Bielikova
In addition, we mapped our findings on the fact-checkers' activities and needs to the relevant tasks for AI research.
1 code implementation • 18 Oct 2022 • Ivan Srba, Robert Moro, Matus Tomlein, Branislav Pecher, Jakub Simko, Elena Stefancova, Michal Kompan, Andrea Hrckova, Juraj Podrouzek, Adrian Gavornik, Maria Bielikova
We also observe a sudden decrease of the misinformation filter bubble effect when misinformation-debunking videos are watched after misinformation-promoting videos, suggesting a strong contextuality of recommendations.
1 code implementation • 26 Apr 2022 • Ivan Srba, Branislav Pecher, Matus Tomlein, Robert Moro, Elena Stefancova, Jakub Simko, Maria Bielikova
It also contains 573 manually and more than 51k automatically labelled mappings between claims and articles.
1 code implementation • 25 Mar 2022 • Matus Tomlein, Branislav Pecher, Jakub Simko, Ivan Srba, Robert Moro, Elena Stefancova, Michal Kompan, Andrea Hrckova, Juraj Podrouzek, Maria Bielikova
We present a study in which pre-programmed agents (acting as YouTube users) delve into misinformation filter bubbles by watching misinformation-promoting content (for various topics).
no code implementations • 13 Mar 2022 • Michal Kompan, Peter Gaspar, Jakub Macina, Matus Cimerman, Maria Bielikova
We propose an adjustment of a predicted ranking for score-based recommender systems and explore the effect of the profit and customers' price preferences on two industry datasets from the fashion domain.
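A ranking adjustment of this kind can be sketched as a simple re-scoring step. This is a hedged illustration, not the proposed method: the linear blend of relevance score and profit margin, the `alpha` trade-off parameter, and the item fields are all assumptions made for the example.

```python
def profit_aware_rerank(items, alpha=0.7):
    """Re-rank score-based recommendations by blending the recommender's
    relevance score with the item's profit margin (hypothetical sketch);
    alpha trades recommendation accuracy against profit."""
    return sorted(
        items,
        key=lambda it: alpha * it["score"] + (1 - alpha) * it["margin"],
        reverse=True,
    )

# Toy fashion-domain items: the coat is less relevant but far more profitable.
items = [
    {"id": "shirt", "score": 0.9, "margin": 0.1},
    {"id": "coat", "score": 0.7, "margin": 0.8},
]
reranked = profit_aware_rerank(items)
```

With `alpha=1.0` the adjustment degenerates to the original relevance ranking, so the parameter directly controls how far the profit objective is allowed to perturb the predicted ranking.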
no code implementations • 26 Sep 2021 • Jakub Simko, Patrik Racsko, Matus Tomlein, Martin Hanakova, Robert Moro, Maria Bielikova
In this paper, we present an eye-tracking study in which we let 44 lay participants casually read through a social media feed containing posts with news articles, some of which were fake.
no code implementations • 31 May 2021 • Juraj Visnovsky, Ondrej Kassak, Michal Kompan, Maria Bielikova
The cold-start problem, which arises upon new users' arrival, is one of the fundamental problems in today's recommender approaches.
no code implementations • 16 Dec 2020 • Miroslav Rac, Michal Kompan, Maria Bielikova
One of the most critical problems in the e-commerce domain is the information overload problem.
1 code implementation • WS 2019 • Samuel Pecar, Marian Simko, Maria Bielikova
We performed experiments utilizing different model ensembling methods.
1 code implementation • SEMEVAL 2019 • Samuel Pecar, Marian Simko, Maria Bielikova
We participated in both subtasks, covering domain-specific as well as cross-domain suggestion mining.
1 code implementation • WS 2018 • Samuel Pecar, Michal Farkas, Marian Simko, Peter Lacko, Maria Bielikova
In some cases, we proposed to remove some parts of the text, as they do not affect the emotion of the original sentence.