1 code implementation • 28 Oct 2024 • Gustavo Escobedo, Christian Ganhör, Stefan Brandl, Mirjam Augstein, Markus Schedl
In widely used neural network-based collaborative filtering models, users' history logs are encoded into latent embeddings that represent the users' preferences.
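As a minimal illustration of this idea (a generic sketch, not the specific model of the paper), a user's history log can be encoded by averaging latent item embeddings, and unseen items can then be scored with dot products. All dimensions and data below are made up:

```python
import random

# Illustrative sketch: each item has a latent embedding; a user's history log
# is encoded into a user embedding by averaging the embeddings of interacted
# items, and candidate items are scored via dot products with that embedding.
random.seed(0)
N_ITEMS, DIM = 6, 8
item_emb = [[random.gauss(0, 1) for _ in range(DIM)] for _ in range(N_ITEMS)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def encode_user(history):
    # mean of interacted-item embeddings -> latent user preference vector
    return [sum(item_emb[i][d] for i in history) / len(history)
            for d in range(DIM)]

def recommend(history, k=3):
    user_emb = encode_user(history)
    # score only unseen items (mask the history)
    scores = {i: dot(user_emb, item_emb[i])
              for i in range(N_ITEMS) if i not in history}
    return sorted(scores, key=scores.get, reverse=True)[:k]

recs = recommend([0, 2])
print(recs)
```

Real models learn the embeddings from interaction data rather than drawing them at random; the averaging encoder above is just one simple way to map a history to the shared latent space.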
no code implementations • 29 Sep 2024 • Shahed Masoudian, Markus Frohmann, Navid Rekabsaz, Markus Schedl
Language models frequently inherit societal biases from their training data.
1 code implementation • 26 Sep 2024 • Christian Ganhör, Marta Moscati, Anna Hausberger, Shah Nawaz, Markus Schedl
We show that SiBraR's recommendations are accurate in missing modality scenarios, and that the model is able to map different modalities to the same region of the shared embedding space, hence reducing the modality gap.
no code implementations • 22 Aug 2024 • Markus Schedl, Oleg Lesota, Stefan Brandl, Mohammad Lotfi, Gustavo Junior Escobedo Ticona, Shahed Masoudian
Cognitive biases have been studied in psychology, sociology, and behavioral economics for decades.
1 code implementation • 21 Aug 2024 • Oleg Lesota, Jonas Geiger, Max Walder, Dominik Kowald, Markus Schedl
In addition, users from less represented countries (e.g., Finland) are, in the long term, most affected by the under-representation of their local music in recommendations.
no code implementations • 14 Aug 2024 • Muhammad Saad Saeed, Shah Nawaz, Muhammad Zaigham Zaheer, Muhammad Haris Khan, Karthik Nandakumar, Muhammad Haroon Yousaf, Hassan Sajjad, Tom De Schepper, Markus Schedl
Multimodal networks have demonstrated remarkable performance improvements over their unimodal counterparts.
2 code implementations • 24 Jun 2024 • Markus Frohmann, Igor Sterner, Ivan Vulić, Benjamin Minixhofer, Markus Schedl
We introduce a new model - Segment any Text (SaT) - to solve this problem.
1 code implementation • 17 Jun 2024 • Gustavo Escobedo, Marta Moscati, Peter Muellner, Simone Kopeinik, Dominik Kowald, Elisabeth Lex, Markus Schedl
Previous efforts to address this issue have added or removed parts of users' preferences prior to or during model training to improve privacy, which often leads to decreases in recommendation accuracy.
1 code implementation • 14 Apr 2024 • Muhammad Saad Saeed, Shah Nawaz, Muhammad Salman Tahir, Rohan Kumar Das, Muhammad Zaigham Zaheer, Marta Moscati, Markus Schedl, Muhammad Haris Khan, Karthik Nandakumar, Muhammad Haroon Yousaf
The Face-voice Association in Multilingual Environments (FAME) Challenge 2024 focuses on exploring face-voice association under the unique condition of a multilingual scenario.
1 code implementation • 29 Jan 2024 • Shahed Masoudian, Cornelia Volaucnik, Markus Schedl, Navid Rekabsaz
Bias mitigation in language models has been the topic of many studies, with a recent focus on learning separate modules, such as adapters, for on-demand debiasing.
1 code implementation • 8 Jan 2024 • Peter Müllner, Elisabeth Lex, Markus Schedl, Dominik Kowald
In this work, we study how DP impacts recommendation accuracy and popularity bias, when applied to the training data of state-of-the-art recommendation models.
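One common way to apply DP to recommender training data is input perturbation with the Laplace mechanism, adding noise scaled by sensitivity over epsilon to each rating before training. The sketch below is a hedged illustration of that general technique, not the paper's exact protocol; the rating range and epsilon are assumptions:

```python
import math
import random

random.seed(42)

RATING_MIN, RATING_MAX = 1.0, 5.0
SENSITIVITY = RATING_MAX - RATING_MIN  # max change a single rating can cause

def laplace_noise(scale):
    # sample Laplace(0, scale) via inverse-CDF transform
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def privatize(ratings, epsilon):
    # smaller epsilon = stronger privacy = noisier training data
    scale = SENSITIVITY / epsilon
    return [min(RATING_MAX, max(RATING_MIN, r + laplace_noise(scale)))
            for r in ratings]

ratings = [5.0, 3.0, 1.0, 4.0]
print(privatize(ratings, epsilon=1.0))
```

The accuracy/privacy trade-off the paper studies shows up directly here: as epsilon shrinks, the noise scale grows and the perturbed ratings drift further from the originals.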
1 code implementation • 13 Jun 2023 • Shahed Masoudian, Khaled Koutini, Markus Schedl, Gerhard Widmer, Navid Rekabsaz
In the Acoustic Scene Classification (ASC) task, domain shift is mainly caused by different recording devices.
3 code implementations • 1 Mar 2023 • Dominik Kowald, Gregor Mayr, Markus Schedl, Elisabeth Lex
However, a study that relates miscalibration and popularity lift to recommendation accuracy across different user groups is still missing.
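The popularity-lift idea can be sketched by comparing the group average popularity (GAP) of users' profiles with that of their recommendation lists; positive lift means recommendations are more popularity-biased than the users' own listening. The popularity values and lists below are invented for illustration:

```python
# Hedged sketch of popularity lift via group average popularity (GAP):
# mean item popularity per user, averaged over the group.
item_popularity = {"a": 0.9, "b": 0.5, "c": 0.1, "d": 0.05}

def gap(item_lists):
    per_user = [sum(item_popularity[i] for i in items) / len(items)
                for items in item_lists]
    return sum(per_user) / len(per_user)

profiles = [["a", "c"], ["c", "d"]]         # what users actually listened to
recommendations = [["a", "b"], ["a", "b"]]  # what the system recommended

# lift > 0: recommendations over-represent popular items for this group
lift = (gap(recommendations) - gap(profiles)) / gap(profiles)
print(round(lift, 2))
```

Computing this lift separately per user group (e.g., mainstream vs. beyond-mainstream listeners) is what allows relating popularity bias to per-group recommendation accuracy.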
1 code implementation • 13 Feb 2023 • Deepak Kumar, Oleg Lesota, George Zerveas, Daniel Cohen, Carsten Eickhoff, Markus Schedl, Navid Rekabsaz
Large pre-trained language models contain societal biases and carry along these biases to downstream tasks.
1 code implementation • 23 Jun 2022 • Peter Müllner, Elisabeth Lex, Markus Schedl, Dominik Kowald
User-based KNN recommender systems (UserKNN) utilize the rating data of a target user's k nearest neighbors in the recommendation process.
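A minimal UserKNN sketch, under an assumed standard formulation: predict the target user's rating for an item as the similarity-weighted average of the ratings of the k most similar neighbors who rated that item. The ratings matrix and k are illustrative:

```python
import math

# Toy ratings matrix: rows are users, columns are items, 0 = unrated.
R = [
    [5, 3, 0, 1],
    [4, 0, 0, 1],
    [1, 1, 0, 5],
    [1, 0, 0, 4],
]

def cosine(u, v):
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def predict(target, item, k=2):
    # candidate neighbors: all other users who rated `item`
    neighbors = [(cosine(R[target], R[u]), R[u][item])
                 for u in range(len(R)) if u != target and R[u][item] > 0]
    top = sorted(neighbors, reverse=True)[:k]  # k most similar raters
    sim_sum = sum(s for s, _ in top)
    # similarity-weighted average of the neighbors' ratings
    return sum(s * r for s, r in top) / sim_sum

print(round(predict(0, 3), 2))
```

Privacy-aware variants (as studied in the paper) constrain or perturb exactly this neighborhood step, since the prediction exposes the neighbors' raw rating data.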
1 code implementation • 9 Jun 2022 • Christian Ganhör, David Penz, Navid Rekabsaz, Oleg Lesota, Markus Schedl
We conduct experiments on the MovieLens-1M and LFM-2b-DemoBias datasets, and evaluate the effectiveness of the bias mitigation method based on the inability of external attackers to reveal users' gender information from the model.
1 code implementation • 30 May 2022 • Lukas Hauzenberger, Shahed Masoudian, Deepak Kumar, Markus Schedl, Navid Rekabsaz
Societal biases are reflected in large pre-trained language models and their fine-tuned versions on downstream tasks.
no code implementations • 3 Mar 2022 • Klara Krieg, Emilia Parada-Cabaleiro, Markus Schedl, Navid Rekabsaz
This work investigates the effect of gender-stereotypical biases in the content of retrieved results on the relevance judgement of users/annotators.
1 code implementation • 25 Jan 2022 • Darius Afchar, Alessandro B. Melchiorre, Markus Schedl, Romain Hennequin, Elena V. Epure, Manuel Moussallam
In this article, we discuss how explainability can be addressed in the context of MRSs.
1 code implementation • 19 Jan 2022 • Klara Krieg, Emilia Parada-Cabaleiro, Gertraud Medicus, Oleg Lesota, Markus Schedl, Navid Rekabsaz
To facilitate the studies of gender bias in the retrieval results of IR systems, we introduce Gender Representation-Bias for Information Retrieval (Grep-BiasIR), a novel, thoroughly audited dataset consisting of 118 bias-sensitive neutral search queries.
no code implementations • 16 Aug 2021 • Oleg Lesota, Alessandro B. Melchiorre, Navid Rekabsaz, Stefan Brandl, Dominik Kowald, Elisabeth Lex, Markus Schedl
In this work, in contrast, we propose to investigate popularity differences (between the user profile and the recommendation list) in terms of the median, a variety of statistical moments, and similarity measures that consider the entire popularity distributions (Kullback-Leibler divergence and Kendall's tau rank-order correlation).
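The two distribution-level measures named above can be sketched in a few lines; the popularity histograms below are made up for demonstration, and the Kendall's tau shown is the simple untied (tau-a) variant:

```python
import math
from itertools import combinations

profile_hist = [10, 25, 40, 15, 10]  # popularity-bin counts in the user profile
recs_hist    = [30, 35, 20, 10,  5]  # popularity-bin counts in the recommendations

def normalize(h):
    total = sum(h)
    return [x / total for x in h]

def kl_divergence(p, q):
    # KL(P || Q): 0 iff the two distributions are identical
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def kendall_tau(a, b):
    # rank-order agreement over all index pairs (tau-a, ties uncounted)
    concordant = discordant = 0
    for i, j in combinations(range(len(a)), 2):
        s = (a[i] - a[j]) * (b[i] - b[j])
        if s > 0:
            concordant += 1
        elif s < 0:
            discordant += 1
    return (concordant - discordant) / math.comb(len(a), 2)

p, q = normalize(profile_hist), normalize(recs_hist)
print(round(kl_divergence(p, q), 3), round(kendall_tau(p, q), 3))
```

Unlike a single median or moment, both measures react to the shape of the whole popularity distribution, which is exactly the point of the comparison proposed in the paper.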
1 code implementation • 4 Aug 2021 • Markus Reiter-Haas, Emilia Parada-Cabaleiro, Markus Schedl, Elham Motamedi, Marko Tkalcic, Elisabeth Lex
In this paper, we describe a psychology-informed approach to model and predict music relistening behavior that is inspired by studies in music psychology, which relate music preferences to human memory.
no code implementations • 25 Jul 2021 • Yashar Deldjoo, Markus Schedl, Peter Knees
Based on a thorough literature analysis, we first propose an onion model comprising five layers, each of which corresponds to a category of music content we identified: signal, embedded metadata, expert-generated content, user-generated content, and derivative content.
1 code implementation • 25 Jun 2021 • Oleg Lesota, Navid Rekabsaz, Daniel Cohen, Klaus Antonius Grasserbauer, Carsten Eickhoff, Markus Schedl
In contrast to the matching paradigm, the probabilistic nature of generative rankers readily offers a fine-grained measure of uncertainty.
no code implementations • 17 Jun 2021 • Rosie Jones, Hamed Zamani, Markus Schedl, Ching-Wei Chen, Sravana Reddy, Ann Clifton, Jussi Karlgren, Helia Hashemi, Aasish Pappu, Zahra Nazari, Longqi Yang, Oguz Semerci, Hugues Bouchard, Ben Carterette
Podcasts are spoken documents across a wide range of genres and styles, with growing listenership across the world, and a rapidly lowering barrier to entry for both listeners and creators.
1 code implementation • 28 Apr 2021 • Navid Rekabsaz, Simone Kopeinik, Markus Schedl
In this work, we first provide a novel framework to measure the fairness in the retrieved text contents of ranking models.
1 code implementation • 14 Mar 2021 • Navid Rekabsaz, Oleg Lesota, Markus Schedl, Jon Brassey, Carsten Eickhoff
As such, the collection is one of the few datasets offering the necessary data richness and scale to train neural IR models with a large amount of parameters, and notably the first in the health domain.
1 code implementation • 24 Feb 2021 • Dominik Kowald, Peter Muellner, Eva Zangerle, Christine Bauer, Markus Schedl, Elisabeth Lex
In this paper, we study the characteristics of beyond-mainstream music and music listeners and analyze to what extent these characteristics impact the quality of music recommendations provided.
no code implementations • 11 Sep 2020 • Markus Schedl, Christine Bauer, Wolfgang Reisinger, Dominik Kowald, Elisabeth Lex
To complement and extend these results, the article at hand delivers the following major contributions: First, using state-of-the-art unsupervised learning techniques, we identify and thoroughly investigate (1) country profiles of music preferences on the fine-grained level of music tracks (in contrast to earlier work that relied on music preferences on the artist level) and (2) country archetypes that subsume countries sharing similar patterns of listening preferences.
1 code implementation • 1 May 2020 • Navid Rekabsaz, Markus Schedl
Concerns regarding the footprint of societal biases in information retrieval (IR) systems have been raised in several previous studies.
no code implementations • 24 Mar 2020 • Dominik Kowald, Elisabeth Lex, Markus Schedl
In this paper, we introduce a psychology-inspired approach to model and predict the music genre preferences of different groups of users by utilizing human memory processes.
no code implementations • 24 Dec 2019 • Markus Schedl, Christine Bauer
In this paper, we analyze a large dataset of user-generated music listening events from Last.fm, focusing on users aged 6 to 18 years.
no code implementations • 14 Dec 2019 • Christine Bauer, Markus Schedl
We conduct rating prediction experiments in which we tailor recommendations to a user's level of preference for the music mainstream using the six proposed mainstreaminess measures.
3 code implementations • 10 Dec 2019 • Dominik Kowald, Markus Schedl, Elisabeth Lex
The recent work of Abdollahpouri et al. in the context of movie recommendations has shown that this popularity bias leads to unfair treatment of both long-tail items and users with little interest in popular items.
no code implementations • 29 Aug 2019 • Luca Luciano Costanzo, Yashar Deldjoo, Maurizio Ferrari Dacrema, Markus Schedl, Paolo Cremonesi
To raise awareness of this fact, we investigate differences between explicit user preferences and implicit user profiles.
no code implementations • 23 Jul 2019 • Dominik Kowald, Elisabeth Lex, Markus Schedl
Music recommender systems have become central parts of popular streaming platforms such as Last.fm, Pandora, or Spotify to help users find music that fits their preferences.
no code implementations • 2 Oct 2018 • Hamed Zamani, Markus Schedl, Paul Lamere, Ching-Wei Chen
We further report and analyze the results obtained by the top performing teams in each track and explore the approaches taken by the winners.