no code implementations • COLING (WNUT) 2022 • Ajay Hemanth Sampath Kumar, Aminath Shausan, Gianluca Demartini, Afshin Rahimi
Monitoring vaccine behaviour through social media can guide health policy.
no code implementations • 27 May 2025 • Tianwa Chen, Barbara Weber, Graeme Shanks, Gianluca Demartini, Marta Indulska, Shazia Sadiq
By studying expert process workers engaged in tasks that involve the integrated modeling of business processes and rules, we provide insights that pave the way for a better understanding of sensemaking practices and for improved approaches to integrating business processes and business rules.
no code implementations • 22 Apr 2025 • Elyas Meguellati, Assaad Zeghina, Shazia Sadiq, Gianluca Demartini
Recent advances in large language models (LLMs) have demonstrated strong performance on simple text classification tasks, frequently under zero-shot settings.
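As a toy illustration of the zero-shot setting referred to here (not the models or tasks used in the paper), a classifier can be built from an off-the-shelf NLI model via the Hugging Face `transformers` zero-shot pipeline:

```python
# Illustrative sketch only: zero-shot text classification with the
# `transformers` pipeline; the paper's models and label sets may differ.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

result = classifier(
    "The new policy will lower taxes for small businesses.",
    candidate_labels=["economy", "health", "education"],
)
print(result["labels"][0], result["scores"][0])  # top label and its score
```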
no code implementations • 18 Mar 2025 • Elyas Meguellati, Stefano Civelli, Pietro Bernardelle, Shazia Sadiq, Gianluca Demartini
In political advertising, persuasion is a pivotal element of propaganda, profoundly influencing public opinion and electoral outcomes.
no code implementations • 7 Mar 2025 • Linh Le, Guido Zuccon, Gianluca Demartini, Genghong Zhao, Xia Zhang
Previous work on clinical relation extraction from free-text sentences leveraged information about semantic types from clinical knowledge bases as part of entity representations.
no code implementations • 24 Feb 2025 • Elyas Meguellati, Nardiena Pratama, Shazia Sadiq, Gianluca Demartini
High-quality textual training data is essential for the success of multimodal data processing tasks, yet outputs from image captioning models like BLIP and GIT often contain errors and anomalies that are difficult to rectify using rule-based methods.
1 code implementation • 3 Feb 2025 • Gaole He, Gianluca Demartini, Ujwal Gadiraju
Our findings demonstrate that LLM agents can be a double-edged sword -- (1) they can work well when a high-quality plan and the necessary user involvement in execution are available, and (2) users can easily misplace their trust in LLM agents when their plans merely seem plausible.
no code implementations • 1 Feb 2025 • Stefano Civelli, Pietro Bernardelle, Gianluca Demartini
While pretraining language models with politically diverse content has been shown to improve downstream task fairness, such approaches require significant computational resources often inaccessible to many researchers and organizations.
no code implementations • 22 Jan 2025 • Yahya Yunus, Tianwa Chen, Gianluca Demartini
Wikipedia editors can use WGD to locate areas of Wikipedia where marginalized genders are under-represented and focus their efforts on producing content that covers those genders, thereby improving gender equality on Wikipedia.
no code implementations • 21 Dec 2024 • Leon Fröhling, Pietro Bernardelle, Gianluca Demartini
As increasingly capable large language models (LLMs) emerge, researchers have begun exploring their potential for subjective tasks.
no code implementations • 19 Dec 2024 • Pietro Bernardelle, Leon Fröhling, Stefano Civelli, Riccardo Lunardi, Kevin Roitero, Gianluca Demartini
The analysis of political biases in large language models (LLMs) has primarily examined these systems as single entities with fixed viewpoints.
no code implementations • 28 Nov 2024 • Nardiena A. Pratama, Shaoyang Fan, Gianluca Demartini
Human-annotated content is often used to train machine learning (ML) models.
no code implementations • 25 Oct 2024 • Farid Ariai, Gianluca Demartini
Natural Language Processing (NLP) is revolutionising the way legal professionals and laypersons operate in the legal field.
no code implementations • 22 Oct 2024 • Pietro Bernardelle, Gianluca Demartini
Aligning the output of Large Language Models (LLMs) with human preferences (e.g., by means of reinforcement learning from human feedback, or RLHF) is essential for ensuring their effectiveness in real-world scenarios.
1 code implementation • 15 Oct 2024 • Leon Fröhling, Gianluca Demartini, Dennis Assenmacher
We present a novel approach for enhancing diversity and control in data annotation tasks by personalizing large language models (LLMs).
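A minimal sketch of the general idea of persona-conditioned annotation (the paper's actual persona construction and prompting setup may differ; `query_llm` is a hypothetical helper standing in for any chat-completion API):

```python
# Hedged sketch: condition annotation prompts on different personas to
# elicit more diverse labels. Personas and task here are illustrative.
personas = [
    "a retired teacher from rural Italy",
    "a software engineer in her twenties from Berlin",
    "a political science professor from Brazil",
]

def build_prompt(persona: str, text: str) -> str:
    return (
        f"You are {persona}. "
        f"Label the following post as HATEFUL or NOT_HATEFUL.\n\nPost: {text}"
    )

text = "Example social media post to annotate."
prompts = [build_prompt(p, text) for p in personas]
# annotations = [query_llm(p) for p in prompts]  # hypothetical call: one label per persona
```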
1 code implementation • 27 Sep 2024 • Hongliang Ni, Lei Han, Tong Chen, Shazia Sadiq, Gianluca Demartini
While model fairness improvement has been explored previously, existing methods invariably rely on adjusting explicit sensitive attribute values to improve model fairness in downstream tasks.
1 code implementation • 28 May 2024 • Tong Chen, Danny Wang, Xurong Liang, Marten Risius, Gianluca Demartini, Hongzhi Yin
To counter the side effects of the proliferation of social media platforms, hate speech detection (HSD) plays a vital role in halting the dissemination of toxic online posts at an early stage.
no code implementations • 17 Jan 2024 • Hrishikesh Patel, Tianwa Chen, Ivano Bongiovanni, Gianluca Demartini
Gender imbalance in Wikipedia content is a known challenge which the editor community is actively addressing.
no code implementations • 2 Jan 2024 • Catherine Sai, Shazia Sadiq, Lei Han, Gianluca Demartini, Stefanie Rinderle-Ma
Organizations face the challenge of ensuring compliance with an increasing number of requirements from various regulatory documents.
no code implementations • 15 May 2023 • Gianluca Demartini, Kevin Roitero, Stefano Mizzaro
Due to the widespread use of data-powered systems in our everyday lives, concepts like bias and fairness have gained significant attention among researchers and practitioners, in both industry and academia.
no code implementations • 2 May 2023 • Aki Barry, Lei Han, Gianluca Demartini
By adding noise to the original datasets, we can explore the relationship between the quality of the training data and the fairness of the output of the models trained on that data.
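A minimal, self-contained sketch of this kind of experiment, on synthetic data with a hypothetical sensitive attribute (not the paper's datasets or metrics): flip training labels at increasing rates and observe how a simple fairness measure responds.

```python
# Hedged sketch: inject label noise and track demographic parity difference.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
group = rng.integers(0, 2, n)                 # hypothetical sensitive attribute
X = np.column_stack([rng.normal(size=n), group])
y = (X[:, 0] + 0.5 * group > 0).astype(int)   # synthetic labels

for noise_rate in [0.0, 0.1, 0.2, 0.3]:
    y_noisy = y.copy()
    flip = rng.random(n) < noise_rate          # flip labels at the given rate
    y_noisy[flip] = 1 - y_noisy[flip]

    pred = LogisticRegression().fit(X, y_noisy).predict(X)
    # demographic parity difference: gap in positive prediction rates
    dpd = abs(pred[group == 0].mean() - pred[group == 1].mean())
    print(f"noise={noise_rate:.1f}  demographic parity diff={dpd:.3f}")
```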
1 code implementation • 13 Apr 2023 • Guglielmo Faggioli, Laura Dietz, Charles Clarke, Gianluca Demartini, Matthias Hagen, Claudia Hauff, Noriko Kando, Evangelos Kanoulas, Martin Potthast, Benno Stein, Henning Wachsmuth
When asked, large language models (LLMs) like ChatGPT claim that they can assist with relevance judgments, but it is not clear whether automated judgments can reliably be used in evaluations of retrieval systems.
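For illustration, an LLM-based relevance judgment setup of the kind the paper examines might prompt a model as follows; `query_llm` is a hypothetical stand-in for any chat-completion API, and this exact prompt is not from the paper.

```python
# Hedged sketch: asking an LLM for a graded relevance judgment.
def relevance_prompt(query: str, document: str) -> str:
    return (
        "Judge the relevance of the document to the query on a scale of "
        "0 (not relevant) to 3 (highly relevant). Answer with the number only.\n"
        f"Query: {query}\nDocument: {document}"
    )

prompt = relevance_prompt("effects of caffeine on sleep",
                          "A study of how evening coffee intake delays sleep onset.")
# grade = int(query_llm(prompt))  # hypothetical call; validate the model's output
```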
1 code implementation • 19 Aug 2022 • Mohammed Saeed, Nicolas Traub, Maelle Nicolas, Gianluca Demartini, Paolo Papotti
Fact-checking is one of the most effective ways of fighting online misinformation.
no code implementations • 26 Oct 2021 • Gianluca Demartini, Kevin Roitero, Stefano Mizzaro
Due to the widespread use of data-powered systems in our everyday lives, the notions of bias and fairness have gained significant attention among researchers and practitioners, in both industry and academia.
1 code implementation • 3 Aug 2021 • Michael Soprano, Kevin Roitero, David La Barbera, Davide Ceolin, Damiano Spina, Stefano Mizzaro, Gianluca Demartini
We deploy a set of quality control mechanisms, including a custom search engine that crowd workers use to find web pages supporting their truthfulness assessments, to ensure that the thousands of assessments collected on 180 publicly available fact-checked statements across two datasets are of adequate quality.
1 code implementation • 25 Jul 2021 • Kevin Roitero, Michael Soprano, Beatrice Portelli, Massimiliano De Luise, Damiano Spina, Vincenzo Della Mea, Giuseppe Serra, Stefano Mizzaro, Gianluca Demartini
Our results show that workers are able to detect and objectively categorize online (mis)information related to COVID-19; that both crowdsourced and expert judgments can be transformed and aggregated to improve quality; and that worker background and other signals (e.g., source of information, behavior) impact the quality of the data.
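As a hedged sketch of one simple way crowd judgments can be aggregated (the paper's actual transformation and aggregation pipeline may differ), consider median aggregation over a hypothetical 6-point truthfulness scale:

```python
# Hedged sketch: collapse multiple crowd judgments per statement.
# Scores are on a hypothetical 6-point scale (0 = false, 5 = true).
import numpy as np

# judgments[statement_id] -> list of worker scores
judgments = {
    "stmt-1": [4, 5, 3, 4, 5],
    "stmt-2": [0, 1, 0, 2, 1],
}

for stmt, scores in judgments.items():
    agg = np.median(scores)    # median is robust to outlier workers
    binary = agg >= 3          # optional transformation to a true/false label
    print(f"{stmt}: median={agg:.1f}, binarized={'true' if binary else 'false'}")
```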
1 code implementation • 13 Aug 2020 • Kevin Roitero, Michael Soprano, Beatrice Portelli, Damiano Spina, Vincenzo Della Mea, Giuseppe Serra, Stefano Mizzaro, Gianluca Demartini
Misinformation is an ever-increasing problem that is difficult for the research community to solve and that has a negative impact on society at large.
no code implementations • 18 Jun 2020 • Edgar Meij, Tara Safavi, Chenyan Xiong, Gianluca Demartini, Miriam Redi, Fatma Özcan
The KG-BIAS 2020 workshop covers biases and how they surface in knowledge graphs (KGs), biases in the source data used to create KGs, and methods for measuring or remediating bias in KGs, as well as other biases, such as which languages are represented in automatically constructed KGs and how personal KGs might incur inherent biases.
1 code implementation • 14 May 2020 • Kevin Roitero, Michael Soprano, Shaoyang Fan, Damiano Spina, Stefano Mizzaro, Gianluca Demartini
Truthfulness judgments are a fundamental step in the process of fighting misinformation, as they are crucial to train and evaluate classifiers that automatically distinguish true and false statements.
no code implementations • 26 Oct 2017 • Alessandro Checco, Gianluca Demartini, Alexander Loeser, Ines Arous, Mourad Khayati, Matthias Dantone, Richard Koopmanschap, Svetlin Stalinov, Martin Kersten, Ying Zhang
A core business in the fashion industry is the understanding and prediction of customer needs and trends.