no code implementations • 1 Mar 2024 • Thiago Freitas dos Santos, Nardine Osman, Marco Schorlemmer
This paper conducts a user study to assess whether three machine learning (ML) interpretability layouts can influence participants' views when evaluating sentences containing hate speech, focusing on the "Misogyny" and "Racism" classes.
no code implementations • 26 Jan 2023 • Nardine Osman, Bruno Rosell, Carles Sierra, Marco Schorlemmer, Jordi Sabater-Mir, Lissette Lemus
uHelp's intelligent search for volunteers is based on three AI technologies: (1) a novel trust-based flooding algorithm that navigates one's social network looking for appropriate, trustworthy volunteers; (2) a novel trust model that maintains the trustworthiness of peers by learning from their similar past experiences; and (3) a semantic similarity model that assesses the similarity of experiences.
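To make the flooding idea concrete, below is a minimal Python sketch of a trust-based flooded search over a social network. This is an illustrative assumption, not uHelp's actual algorithm: the function name `flood_search`, the multiplicative trust decay, the pruning threshold, and the hop limit are all hypothetical choices for the sketch.

```python
from collections import deque

def flood_search(graph, trust, source, is_suitable,
                 min_trust=0.5, decay=0.8, max_hops=3):
    """Hypothetical trust-based flooding search (not the uHelp API).

    graph: dict mapping each peer to a list of neighbours.
    trust: dict mapping (peer, neighbour) edges to trust in [0, 1].
    is_suitable: predicate deciding whether a peer can volunteer.
    Trust decays multiplicatively along each hop; branches whose
    accumulated trust drops below min_trust are pruned.
    """
    found = []
    visited = {source}
    queue = deque([(source, 1.0, 0)])  # (peer, accumulated trust, hops)
    while queue:
        peer, acc, hops = queue.popleft()
        if hops >= max_hops:
            continue  # stop flooding beyond the hop limit
        for neighbour in graph.get(peer, []):
            if neighbour in visited:
                continue
            visited.add(neighbour)
            acc_next = acc * decay * trust.get((peer, neighbour), 0.0)
            if acc_next < min_trust:
                continue  # prune untrusted branches
            if is_suitable(neighbour):
                found.append((neighbour, acc_next))
            queue.append((neighbour, acc_next, hops + 1))
    # Most trusted candidates first
    return sorted(found, key=lambda x: -x[1])

# Toy usage: ana's request floods to eve directly and to carl via bob.
graph = {"ana": ["bob", "eve"], "bob": ["carl"], "eve": []}
trust = {("ana", "bob"): 0.9, ("ana", "eve"): 0.95, ("bob", "carl"): 0.9}
print(flood_search(graph, trust, "ana", lambda p: p in {"carl", "eve"}))
```

The pruning threshold is what distinguishes this from plain breadth-first flooding: low-trust branches of the network are never explored, which keeps the search local to trustworthy regions of the graph.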
no code implementations • 30 Apr 2021 • Thiago Freitas dos Santos, Nardine Osman, Marco Schorlemmer
In this paper, we focus on normative systems for online communities.