Search Results for author: Gianluca Demartini

Found 14 papers, 5 papers with code

Estimating Gender Completeness in Wikipedia

no code implementations17 Jan 2024 Hrishikesh Patel, Tianwa Chen, Ivano Bongiovanni, Gianluca Demartini

Gender imbalance in Wikipedia content is a known challenge that the editor community is actively addressing.

Attribute

Data Bias Management

no code implementations15 May 2023 Gianluca Demartini, Kevin Roitero, Stefano Mizzaro

Due to the widespread use of data-powered systems in our everyday lives, concepts like bias and fairness have gained significant attention among researchers and practitioners, in both industry and academia.

Fairness Management

On the Impact of Data Quality on Image Classification Fairness

no code implementations2 May 2023 Aki Barry, Lei Han, Gianluca Demartini

By adding noise to the original datasets, we can explore the relationship between the quality of the training data and the fairness of the output of the models trained on that data.

Decision Making Fairness +1
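
The entry above describes adding noise to training data and measuring how the fairness of the resulting models changes. As a rough illustration of that general idea only (not the paper's pipeline), the sketch below injects label noise into a synthetic tabular dataset, trains a logistic-regression classifier, and reports a demographic-parity gap; the dataset, model, metric, and noise levels are all illustrative assumptions rather than the paper's actual image-classification setup.

    # Minimal sketch, not the paper's code: inject label noise into training data
    # and observe how a simple group-fairness metric changes.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    def flip_labels(y, noise_rate, rng):
        """Randomly flip a fraction `noise_rate` of binary labels."""
        y_noisy = y.copy()
        flip = rng.random(len(y)) < noise_rate
        y_noisy[flip] = 1 - y_noisy[flip]
        return y_noisy

    def demographic_parity_gap(y_pred, group):
        """Absolute difference in positive-prediction rates between two groups."""
        return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

    # Synthetic stand-in for a real (image) dataset: features, labels, group attribute.
    X = rng.normal(size=(2000, 20))
    group = rng.integers(0, 2, size=2000)
    y = (X[:, 0] + 0.5 * group + rng.normal(scale=0.5, size=2000) > 0).astype(int)

    for noise in (0.0, 0.1, 0.3):
        model = LogisticRegression(max_iter=1000).fit(X, flip_labels(y, noise, rng))
        print(f"label noise {noise:.1f} -> parity gap "
              f"{demographic_parity_gap(model.predict(X), group):.3f}")

The point here is only the noise-then-measure loop; the paper's datasets, models, and fairness analysis are more involved.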

Perspectives on Large Language Models for Relevance Judgment

no code implementations13 Apr 2023 Guglielmo Faggioli, Laura Dietz, Charles Clarke, Gianluca Demartini, Matthias Hagen, Claudia Hauff, Noriko Kando, Evangelos Kanoulas, Martin Potthast, Benno Stein, Henning Wachsmuth

When asked, large language models (LLMs) like ChatGPT claim that they can assist with relevance judgments, but it is not clear whether automated judgments can reliably be used in evaluations of retrieval systems.

Retrieval
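
The position paper above discusses whether LLMs can assist with relevance judgments. A minimal sketch of what an automated judgment step might look like is given below; call_llm is a hypothetical placeholder for whichever completion API is used, and the prompt wording and 0-3 grade scale are assumptions, not a protocol from the paper.

    # Illustrative sketch only: prompting an LLM for a graded relevance judgment.
    # `call_llm` is a hypothetical stand-in for whatever completion API is used.
    def call_llm(prompt: str) -> str:
        raise NotImplementedError("plug in an actual LLM client here")

    PROMPT = (
        "You are a relevance assessor for a search engine.\n"
        "Query: {query}\n"
        "Document: {document}\n"
        "Rate the document's relevance to the query on a 0-3 scale "
        "(0 = not relevant, 3 = perfectly relevant). Answer with a single digit."
    )

    def judge_relevance(query: str, document: str) -> int:
        answer = call_llm(PROMPT.format(query=query, document=document))
        digits = [c for c in answer if c.isdigit()]
        if not digits:
            raise ValueError(f"unparseable judgment: {answer!r}")
        return max(0, min(3, int(digits[0])))  # clamp to the assumed 0-3 scale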

Managing Bias in Human-Annotated Data: Moving Beyond Bias Removal

no code implementations26 Oct 2021 Gianluca Demartini, Kevin Roitero, Stefano Mizzaro

Due to the widespread use of data-powered systems in our everyday lives, the notions of bias and fairness have gained significant attention among researchers and practitioners, in both industry and academia.

Fairness Management

The Many Dimensions of Truthfulness: Crowdsourcing Misinformation Assessments on a Multidimensional Scale

1 code implementation3 Aug 2021 Michael Soprano, Kevin Roitero, David La Barbera, Davide Ceolin, Damiano Spina, Stefano Mizzaro, Gianluca Demartini

We deploy a set of quality control mechanisms, including a custom search engine that the crowd workers use to find web pages supporting their truthfulness assessments, to ensure that the thousands of assessments collected on 180 publicly available fact-checked statements distributed over two datasets are of adequate quality.

Informativeness Misinformation
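
The abstract above mentions quality control mechanisms for crowdsourced truthfulness assessments. As a generic illustration of one such mechanism (not the paper's actual checks), the sketch below discards workers whose answers on a small set of gold-standard statements fall below an accuracy threshold; the threshold and data are made up.

    # Generic illustration of one quality-control idea (gold checks); the paper's
    # actual mechanisms differ and are described there.
    def passes_gold_checks(worker_answers, gold_answers, min_accuracy=0.75):
        """Keep a worker's judgments only if they agree with enough gold statements."""
        hits = sum(worker_answers.get(s) == label for s, label in gold_answers.items())
        return hits / len(gold_answers) >= min_accuracy

    gold = {"g1": "true", "g2": "false", "g3": "false", "g4": "true"}
    worker = {"g1": "true", "g2": "false", "g3": "true", "g4": "true"}
    print(passes_gold_checks(worker, gold))  # 3/4 correct -> True at the 0.75 threshold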

Can the Crowd Judge Truthfulness? A Longitudinal Study on Recent Misinformation about COVID-19

1 code implementation25 Jul 2021 Kevin Roitero, Michael Soprano, Beatrice Portelli, Massimiliano De Luise, Damiano Spina, Vincenzo Della Mea, Giuseppe Serra, Stefano Mizzaro, Gianluca Demartini

Our results show that: workers are able to detect and objectively categorize online (mis)information related to COVID-19; both crowdsourced and expert judgments can be transformed and aggregated to improve quality; and worker background and other signals (e.g., source of information, behavior) impact the quality of the data.

Misinformation
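
The findings above note that crowdsourced and expert judgments can be transformed and aggregated to improve quality. A minimal sketch of that idea, under assumed scales (a 0-100 crowd scale aggregated by median and binned into a six-level expert-style scale), is shown below; the scale boundaries and aggregation choice are illustrative, not the study's method.

    # Minimal sketch, not the study's pipeline: aggregate repeated crowd scores per
    # statement (median here) and bin the aggregate onto a coarser expert-style scale.
    from statistics import median

    def to_expert_scale(score, n_levels=6, max_score=100):
        """Bin a 0..max_score aggregate into an n-level ordinal scale (assumed scales)."""
        return min(int(score / (max_score / n_levels)), n_levels - 1)

    crowd_scores = {"statement-42": [70, 85, 60, 90, 75]}
    for statement, scores in crowd_scores.items():
        agg = median(scores)
        print(statement, agg, to_expert_scale(agg))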

The COVID-19 Infodemic: Can the Crowd Judge Recent Misinformation Objectively?

1 code implementation13 Aug 2020 Kevin Roitero, Michael Soprano, Beatrice Portelli, Damiano Spina, Vincenzo Della Mea, Giuseppe Serra, Stefano Mizzaro, Gianluca Demartini

Misinformation is an ever-increasing problem that is difficult for the research community to solve and has a negative impact on society at large.

Misinformation

Proceedings of the KG-BIAS Workshop 2020 at AKBC 2020

no code implementations18 Jun 2020 Edgar Meij, Tara Safavi, Chenyan Xiong, Gianluca Demartini, Miriam Redi, Fatma Özcan

The KG-BIAS 2020 workshop covers biases and how they surface in knowledge graphs (KGs), biases in the source data used to create KGs, and methods for measuring or remediating bias in KGs, as well as other biases, such as how and which languages are represented in automatically constructed KGs or how personal KGs might incur inherent biases.

Knowledge Graphs

Can The Crowd Identify Misinformation Objectively? The Effects of Judgment Scale and Assessor's Background

1 code implementation14 May 2020 Kevin Roitero, Michael Soprano, Shaoyang Fan, Damiano Spina, Stefano Mizzaro, Gianluca Demartini

Truthfulness judgments are a fundamental step in the process of fighting misinformation, as they are crucial to train and evaluate classifiers that automatically distinguish true and false statements.

Misinformation
