Search Results for author: Gianluca Demartini

Found 30 papers, 10 papers with code

How Do Experts Make Sense of Integrated Process Models?

no code implementations · 27 May 2025 · Tianwa Chen, Barbara Weber, Graeme Shanks, Gianluca Demartini, Marta Indulska, Shazia Sadiq

By studying expert process workers engaged in tasks based on the integrated modeling of business processes and rules, we provide insights that pave the way for a better understanding of sensemaking practices and for the improved development of business process and business rule integration approaches.

LLM-based Semantic Augmentation for Harmful Content Detection

no code implementations · 22 Apr 2025 · Elyas Meguellati, Assaad Zeghina, Shazia Sadiq, Gianluca Demartini

Recent advances in large language models (LLMs) have demonstrated strong performance on simple text classification tasks, frequently under zero-shot settings.

Hateful Meme Classification Propaganda detection +2

Spotting Persuasion: A Low-cost Model for Persuasion Detection in Political Ads on Social Media

no code implementations · 18 Mar 2025 · Elyas Meguellati, Stefano Civelli, Pietro Bernardelle, Shazia Sadiq, Gianluca Demartini

In the realm of political advertising, persuasion operates as a pivotal element within the broader framework of propaganda, exerting profound influences on public opinion and electoral outcomes.

Text Detection

Leveraging Semantic Type Dependencies for Clinical Named Entity Recognition

no code implementations · 7 Mar 2025 · Linh Le, Guido Zuccon, Gianluca Demartini, Genghong Zhao, Xia Zhang

Previous work on clinical relation extraction from free-text sentences leveraged information about semantic types from clinical knowledge bases as a part of entity representations.

Clinical Knowledge named-entity-recognition +5

Are Large Language Models Good Data Preprocessors?

no code implementations · 24 Feb 2025 · Elyas Meguellati, Nardiena Pratama, Shazia Sadiq, Gianluca Demartini

High-quality textual training data is essential for the success of multimodal data processing tasks, yet outputs from image captioning models like BLIP and GIT often contain errors and anomalies that are difficult to rectify using rule-based methods.

Image Captioning

Plan-Then-Execute: An Empirical Study of User Trust and Team Performance When Using LLM Agents As A Daily Assistant

1 code implementation · 3 Feb 2025 · Gaole He, Gianluca Demartini, Ujwal Gadiraju

Our findings demonstrate that LLM agents can be a double-edged sword: (1) they can work well when a high-quality plan and the necessary user involvement in execution are available, and (2) users can easily misplace trust in LLM agents whose plans merely seem plausible.

Sequential Decision Making

The Impact of Persona-based Political Perspectives on Hateful Content Detection

no code implementations · 1 Feb 2025 · Stefano Civelli, Pietro Bernardelle, Gianluca Demartini

While pretraining language models with politically diverse content has been shown to improve downstream task fairness, such approaches require significant computational resources often inaccessible to many researchers and organizations.

Diversity Fairness +1

Exploring Wikipedia Gender Diversity Over Time: The Wikipedia Gender Dashboard (WGD)

no code implementations · 22 Jan 2025 · Yahya Yunus, Tianwa Chen, Gianluca Demartini

Wikipedia editors can use the WGD to locate areas of Wikipedia where marginalized genders are underrepresented and focus their efforts on producing content that improves coverage of those genders, working towards better gender equality in Wikipedia.

Articles Diversity

Mapping and Influencing the Political Ideology of Large Language Models using Synthetic Personas

no code implementations · 19 Dec 2024 · Pietro Bernardelle, Leon Fröhling, Stefano Civelli, Riccardo Lunardi, Kevin Roitero, Gianluca Demartini

The analysis of political biases in large language models (LLMs) has primarily examined these systems as single entities with fixed viewpoints.

Natural Language Processing for the Legal Domain: A Survey of Tasks, Datasets, Models, and Challenges

no code implementations · 25 Oct 2024 · Farid Ariai, Gianluca Demartini

Natural Language Processing (NLP) is revolutionising the way legal professionals and laypersons operate in the legal field.

Argument Mining Document Summarization +5

Optimizing LLMs with Direct Preferences: A Data Efficiency Perspective

no code implementations · 22 Oct 2024 · Pietro Bernardelle, Gianluca Demartini

Aligning the output of Large Language Models (LLMs) with human preferences (e.g., by means of reinforcement learning from human feedback, or RLHF) is essential for ensuring their effectiveness in real-world scenarios.

Question Answering
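The paper above studies direct preference optimization (DPO) from a data-efficiency perspective. The standard DPO loss for a single preference pair can be sketched as follows; the input log-probabilities and the β value here are illustrative assumptions, not the paper's experimental setup:

```python
import math

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    """DPO loss for one (chosen, rejected) preference pair.

    Each argument is the summed log-probability of the chosen or
    rejected response under the trained policy or the frozen
    reference model.
    """
    chosen_reward = beta * (policy_chosen_logp - ref_chosen_logp)
    rejected_reward = beta * (policy_rejected_logp - ref_rejected_logp)
    margin = chosen_reward - rejected_reward
    # -log(sigmoid(margin)): shrinks as the policy prefers the chosen response
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

When both responses are scored identically, the loss equals log 2; it decreases as the policy assigns relatively higher probability to the chosen response than the reference model does.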

Personas with Attitudes: Controlling LLMs for Diverse Data Annotation

1 code implementation · 15 Oct 2024 · Leon Fröhling, Gianluca Demartini, Dennis Assenmacher

We present a novel approach for enhancing diversity and control in data annotation tasks by personalizing large language models (LLMs).

Diversity
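As an illustration of persona-conditioned annotation, the sketch below builds one annotation prompt per persona for the same item. The persona descriptions, label set, and template are invented for illustration and are not the paper's actual personas or prompts:

```python
from string import Template

# Hypothetical persona descriptions; the paper draws on much richer
# persona collections to diversify LLM annotations.
PERSONAS = [
    "a 65-year-old retired teacher from a small rural town",
    "a 24-year-old software engineer active on social media",
]

PROMPT = Template(
    "You are $persona.\n"
    "Label the following post as OFFENSIVE or NOT_OFFENSIVE.\n"
    "Post: $text\nLabel:"
)

def persona_prompts(text: str) -> list[str]:
    """Return one annotation prompt per persona for the same item."""
    return [PROMPT.substitute(persona=p, text=text) for p in PERSONAS]
```

Sending each prompt to the same LLM yields a distribution of persona-conditioned labels per item rather than a single annotation.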

Fairness without Sensitive Attributes via Knowledge Sharing

1 code implementation · 27 Sep 2024 · Hongliang Ni, Lei Han, Tong Chen, Shazia Sadiq, Gianluca Demartini

While model fairness improvement has been explored previously, existing methods invariably rely on adjusting explicit sensitive attribute values in order to improve model fairness in downstream tasks.

Attribute Fairness

Hate Speech Detection with Generalizable Target-aware Fairness

1 code implementation · 28 May 2024 · Tong Chen, Danny Wang, Xurong Liang, Marten Risius, Gianluca Demartini, Hongzhi Yin

To counter the side effects of the proliferation of social media platforms, hate speech detection (HSD) plays a vital role in halting the dissemination of toxic online posts at an early stage.

Fairness Hate Speech Detection

Estimating Gender Completeness in Wikipedia

no code implementations · 17 Jan 2024 · Hrishikesh Patel, Tianwa Chen, Ivano Bongiovanni, Gianluca Demartini

Gender imbalance in Wikipedia content is a known challenge which the editor community is actively addressing.

Attribute

Data Bias Management

no code implementations · 15 May 2023 · Gianluca Demartini, Kevin Roitero, Stefano Mizzaro

Due to the widespread use of data-powered systems in our everyday lives, concepts like bias and fairness gained significant attention among researchers and practitioners, in both industry and academia.

Fairness Management

On the Impact of Data Quality on Image Classification Fairness

no code implementations · 2 May 2023 · Aki Barry, Lei Han, Gianluca Demartini

By adding noise to the original datasets, we can explore the relationship between the quality of the training data and the fairness of the output of the models trained on that data.

Decision Making Fairness +2
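A minimal sketch of the kind of controlled label-noise injection described above, which lets one degrade training-data quality by a known amount. The uniform flip model and function name are assumptions for illustration, not the paper's exact noise protocol:

```python
import random

def add_label_noise(labels, flip_prob, num_classes, seed=0):
    """Return a copy of `labels` where each label is replaced, with
    probability `flip_prob`, by a different class drawn uniformly.

    The seed makes the corruption reproducible, so models trained on
    datasets with increasing `flip_prob` can be compared fairly.
    """
    rng = random.Random(seed)
    noisy = []
    for y in labels:
        if rng.random() < flip_prob:
            noisy.append(rng.choice([c for c in range(num_classes) if c != y]))
        else:
            noisy.append(y)
    return noisy
```

Training the same model on the clean labels and on several noisy versions then isolates the effect of data quality on fairness metrics.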

Perspectives on Large Language Models for Relevance Judgment

1 code implementation · 13 Apr 2023 · Guglielmo Faggioli, Laura Dietz, Charles Clarke, Gianluca Demartini, Matthias Hagen, Claudia Hauff, Noriko Kando, Evangelos Kanoulas, Martin Potthast, Benno Stein, Henning Wachsmuth

When asked, large language models (LLMs) like ChatGPT claim that they can assist with relevance judgments, but it is not clear whether automated judgments can reliably be used in evaluations of retrieval systems.

Retrieval
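To make the idea concrete, the sketch below shows how a relevance-judgment prompt for an LLM might be constructed and its reply parsed into a graded label. The template, the 0-3 scale, and the parsing rule are hypothetical examples, not the prompts studied in the paper:

```python
# Hypothetical prompt template for eliciting a graded relevance judgment.
TEMPLATE = (
    "Given a query and a document, judge the document's relevance "
    "on a 0-3 scale (0 = not relevant, 3 = perfectly relevant).\n"
    "Query: {query}\nDocument: {doc}\nRelevance (0-3):"
)

def build_judgment_prompt(query: str, doc: str) -> str:
    """Fill the template for one query-document pair."""
    return TEMPLATE.format(query=query, doc=doc)

def parse_judgment(raw: str) -> int:
    """Extract the first digit 0-3 from the model's reply; -1 if none."""
    for ch in raw:
        if ch in "0123":
            return int(ch)
    return -1
```

In practice such parsed labels would be compared against human relevance judgments before being trusted for system evaluation, which is exactly the question the paper raises.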

Managing Bias in Human-Annotated Data: Moving Beyond Bias Removal

no code implementations · 26 Oct 2021 · Gianluca Demartini, Kevin Roitero, Stefano Mizzaro

Due to the widespread use of data-powered systems in our everyday lives, the notions of bias and fairness gained significant attention among researchers and practitioners, in both industry and academia.

Fairness Management

The Many Dimensions of Truthfulness: Crowdsourcing Misinformation Assessments on a Multidimensional Scale

1 code implementation · 3 Aug 2021 · Michael Soprano, Kevin Roitero, David La Barbera, Davide Ceolin, Damiano Spina, Stefano Mizzaro, Gianluca Demartini

We deploy a set of quality control mechanisms, including a custom search engine that crowd workers use to find web pages supporting their truthfulness assessments, to ensure that the thousands of assessments collected on 180 publicly available fact-checked statements, distributed over two datasets, are of adequate quality.

Informativeness Misinformation

Can the Crowd Judge Truthfulness? A Longitudinal Study on Recent Misinformation about COVID-19

1 code implementation · 25 Jul 2021 · Kevin Roitero, Michael Soprano, Beatrice Portelli, Massimiliano De Luise, Damiano Spina, Vincenzo Della Mea, Giuseppe Serra, Stefano Mizzaro, Gianluca Demartini

Our results show that: workers are able to detect and objectively categorize online (mis)information related to COVID-19; both crowdsourced and expert judgments can be transformed and aggregated to improve quality; and worker background and other signals (e.g., source of information, behavior) impact the quality of the data.

Misinformation
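The snippet above mentions transforming and aggregating crowd judgments to improve quality. A minimal sketch of one common aggregation step, a per-statement median over worker scores, is shown below; the choice of the median (robust to individual outlier assessments) is illustrative and not necessarily the paper's aggregation function:

```python
from statistics import median

def aggregate_judgments(judgments):
    """Aggregate truthfulness scores from multiple crowd workers.

    `judgments` maps a statement id to the list of scores its workers
    assigned; the median is robust to a few outlier assessments.
    """
    return {sid: median(scores) for sid, scores in judgments.items()}
```

The aggregated score per statement can then be compared against the expert fact-checker label to measure crowd-expert agreement.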

The COVID-19 Infodemic: Can the Crowd Judge Recent Misinformation Objectively?

1 code implementation · 13 Aug 2020 · Kevin Roitero, Michael Soprano, Beatrice Portelli, Damiano Spina, Vincenzo Della Mea, Giuseppe Serra, Stefano Mizzaro, Gianluca Demartini

Misinformation is an ever-increasing problem that is difficult for the research community to solve and that has a negative impact on society at large.

Misinformation

Proceedings of the KG-BIAS Workshop 2020 at AKBC 2020

no code implementations · 18 Jun 2020 · Edgar Meij, Tara Safavi, Chenyan Xiong, Gianluca Demartini, Miriam Redi, Fatma Özcan

The KG-BIAS 2020 workshop covers biases and how they surface in knowledge graphs (KGs), biases in the source data used to create KGs, and methods for measuring or remediating bias in KGs, as well as other biases, such as which languages are represented in automatically constructed KGs and how personal KGs might incur inherent biases.

Knowledge Graphs

Can The Crowd Identify Misinformation Objectively? The Effects of Judgment Scale and Assessor's Background

1 code implementation · 14 May 2020 · Kevin Roitero, Michael Soprano, Shaoyang Fan, Damiano Spina, Stefano Mizzaro, Gianluca Demartini

Truthfulness judgments are a fundamental step in the process of fighting misinformation, as they are crucial to train and evaluate classifiers that automatically distinguish true and false statements.

Misinformation
