no code implementations • 7 Oct 2024 • Juno Prent, Masoud Mansoury
Recommender Systems (RS) often suffer from popularity bias, where a small set of popular items dominates the recommendation results due to their high interaction rates, leaving many less popular items overlooked.
1 code implementation • 8 Aug 2024 • Masoud Mansoury, Bamshad Mobasher, Herke van Hoof
In this paper, we study exposure bias in a class of well-known contextual bandit algorithms known as Linear Cascading Bandits.
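As background for this setting, the sketch below illustrates one round of a linear cascading bandit in the spirit of CascadeLinUCB: items are scored optimistically, the top-K list is shown, and only the prefix up to the first click is used to update the model. The class name, feature dimensions, and hyperparameters are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a linear cascading bandit round (CascadeLinUCB-style).
# Item features, alpha, and K are illustrative; this is not the paper's code.
import numpy as np

class CascadeLinUCB:
    def __init__(self, dim, alpha=1.0, reg=1.0):
        self.A = reg * np.eye(dim)   # regularized Gram matrix
        self.b = np.zeros(dim)       # response vector
        self.alpha = alpha           # exploration weight

    def rank(self, item_features, k):
        """Score all items optimistically and return the top-k ranking."""
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b
        ucb = item_features @ theta + self.alpha * np.sqrt(
            np.einsum("ij,jk,ik->i", item_features, A_inv, item_features)
        )
        return np.argsort(-ucb)[:k]

    def update(self, item_features, ranked, click_pos):
        """Cascade feedback: items above the click are negatives, the clicked
        item is a positive, and items below the click are unobserved."""
        last = click_pos if click_pos is not None else len(ranked) - 1
        for pos, item in enumerate(ranked[: last + 1]):
            x = item_features[item]
            r = 1.0 if (click_pos is not None and pos == click_pos) else 0.0
            self.A += np.outer(x, x)
            self.b += r * x
```

Exposure bias arises here because items that never reach the observed prefix of the ranking receive no feedback and thus little chance to improve their estimated scores.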
1 code implementation • 16 May 2024 • Kun Lin, Masoud Mansoury, Farzad Eskandanian, Milad Sabouri, Bamshad Mobasher
Calibration in recommender systems is an important performance criterion that ensures consistency between the distribution of user preference categories and that of recommendations generated by the system.
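A common way to quantify this consistency (following the KL-divergence formulation popularized by calibrated-recommendation work) is to compare the category distribution of a user's profile with that of their recommendation list. The sketch below is a generic illustration of such a miscalibration score; the smoothing constant and data layout are assumptions, not the paper's exact protocol.

```python
# Minimal sketch of a KL-divergence-based miscalibration score: compare the
# category distribution of a user's profile (p) with that of the
# recommendations (q). The smoothing constant alpha is an illustrative choice.
import math
from collections import Counter

def category_distribution(items, item_categories):
    counts = Counter(c for item in items for c in item_categories[item])
    total = sum(counts.values())
    return {c: n / total for c, n in counts.items()}

def miscalibration(profile_items, recommended_items, item_categories, alpha=0.01):
    p = category_distribution(profile_items, item_categories)
    q = category_distribution(recommended_items, item_categories)
    score = 0.0
    for c, p_c in p.items():
        # Smooth q towards p so KL stays finite when a category is missing
        # from the recommendation list.
        q_c = (1 - alpha) * q.get(c, 0.0) + alpha * p_c
        score += p_c * math.log(p_c / q_c)
    return score  # 0 = perfectly calibrated; larger = more miscalibrated
```

For example, a user whose profile is mostly drama but whose recommendations are mostly comedy receives a high miscalibration score.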
1 code implementation • 29 Apr 2024 • Jin Huang, Harrie Oosterhuis, Masoud Mansoury, Herke van Hoof, Maarten de Rijke
Debiasing methods aim to mitigate the effect of selection bias on the evaluation and optimization of RSs.
no code implementations • 4 Oct 2023 • Masoud Mansoury, Finn Duijvestijn, Imane Mourabet
Users with different degrees of tolerance toward popular items are not fairly served by the recommender system: users interested in less popular items receive more popular items in their recommendations, while users interested in popular items are recommended what they want.
1 code implementation • 18 Sep 2023 • Maria Heuss, Daniel Cohen, Masoud Mansoury, Maarten de Rijke, Carsten Eickhoff
Prior work on bias mitigation often assumes that ranking scores, which correspond to the utility that a document holds for a user, can be accurately determined.
no code implementations • 11 Sep 2023 • Spyros Avlonitis, Dor Lavi, Masoud Mansoury, David Graus
This study explores the potential of reinforcement learning algorithms to enhance career planning processes.
no code implementations • 5 Sep 2023 • Masoud Mansoury, Bamshad Mobasher
However, less work has been done on addressing exposure bias in a dynamic recommendation setting, where the system operates over time and the recommendation model and input data are dynamically updated with ongoing user feedback on recommended items at each round.
no code implementations • 4 Sep 2022 • Masoud Mansoury, Bamshad Mobasher, Herke van Hoof
This is especially problematic when bias is amplified over time: a few items (e.g., popular ones) are repeatedly over-represented in recommendation lists, and users' interactions with those items further amplify the bias towards them, resulting in a feedback loop.
no code implementations • 10 Nov 2021 • Masoud Mansoury
Experiments on several publicly available datasets and comparisons with various baselines confirm the superiority of the proposed solutions in improving exposure fairness for items and suppliers.
no code implementations • 7 Aug 2021 • Masoud Mansoury, Himan Abdollahpouri, Bamshad Mobasher, Mykola Pechenizkiy, Robin Burke, Milad Sabouri
This is especially problematic when bias is amplified over time as a few popular items are repeatedly over-represented in recommendation lists.
no code implementations • 7 Jul 2021 • Masoud Mansoury, Himan Abdollahpouri, Mykola Pechenizkiy, Bamshad Mobasher, Robin Burke
Fairness is a critical system-level objective in recommender systems that has been the subject of extensive recent research.
no code implementations • 10 Mar 2021 • Himan Abdollahpouri, Masoud Mansoury, Robin Burke, Bamshad Mobasher, Edward Malthouse
In this paper, we show the limitations of existing metrics for evaluating popularity bias mitigation when these algorithms are assessed from the users' perspective, and we propose a new metric that addresses these limitations.
no code implementations • 21 Aug 2020 • Himan Abdollahpouri, Masoud Mansoury, Robin Burke, Bamshad Mobasher
Moreover, we show that the more a group is affected by the algorithmic popularity bias, the more their recommendations are miscalibrated.
no code implementations • 25 Jul 2020 • Masoud Mansoury, Himan Abdollahpouri, Mykola Pechenizkiy, Bamshad Mobasher, Robin Burke
Recommendation algorithms are known to suffer from popularity bias; a few popular items are recommended frequently while the majority of other items are ignored.
no code implementations • 23 Jul 2020 • Himan Abdollahpouri, Masoud Mansoury, Robin Burke, Bamshad Mobasher
The effectiveness of these approaches, however, has not been assessed in multistakeholder environments, where, in addition to the users who receive the recommendations, the utility of the suppliers of the recommended items should also be considered.
1 code implementation • 29 Jun 2020 • Himan Abdollahpouri, Masoud Mansoury
Using several recommendation algorithms and two publicly available datasets in music and movie domains, we empirically show the inherent popularity bias of the algorithms and how this bias impacts different stakeholders such as users and suppliers of the items.
no code implementations • 3 May 2020 • Masoud Mansoury, Himan Abdollahpouri, Mykola Pechenizkiy, Bamshad Mobasher, Robin Burke
This leads to low coverage of items in recommendation lists across users (i.e., low aggregate diversity) and an unfair distribution of recommended items.
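As an illustration of these two notions, aggregate diversity is commonly measured as the fraction of catalog items that appear in at least one user's recommendation list, and the inequality of item exposure can be summarized with a Gini index over recommendation counts. The sketch below is a generic illustration, not the paper's exact evaluation protocol.

```python
# Illustrative measures of how recommendations spread over the catalog:
# aggregate diversity (catalog coverage) and a Gini index over item exposure.
# Generic sketch; not the exact evaluation protocol used in the paper.
import numpy as np

def aggregate_diversity(rec_lists, num_items):
    """Fraction of catalog items recommended to at least one user."""
    recommended = set(i for recs in rec_lists for i in recs)
    return len(recommended) / num_items

def exposure_gini(rec_lists, num_items):
    """Gini index of item recommendation counts (0 = perfectly equal exposure)."""
    counts = np.zeros(num_items)
    for recs in rec_lists:
        for i in recs:
            counts[i] += 1
    counts = np.sort(counts)          # ascending
    cum = np.cumsum(counts)
    if cum[-1] == 0:                  # no recommendations at all
        return 0.0
    n = num_items
    # Standard Gini formula over the sorted exposure counts.
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n
```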
no code implementations • 25 Mar 2020 • Himan Abdollahpouri, Robin Burke, Masoud Mansoury
It is well known that recommendation algorithms suffer from popularity bias: a few popular items are over-recommended, while the majority of other items do not receive proportionate attention.
no code implementations • 18 Feb 2020 • Masoud Mansoury, Himan Abdollahpouri, Jessie Smith, Arman Dehpanah, Mykola Pechenizkiy, Bamshad Mobasher
The proliferation of personalized recommendation technologies has raised concerns about discrepancies in their recommendation performance across different genders, age groups, and racial or ethnic populations.
no code implementations • 3 Nov 2019 • Masoud Mansoury, Himan Abdollahpouri, Joris Rombouts, Mykola Pechenizkiy
In this paper, we aim to explore the relationship between the consistency of users' ratings behavior and the degree of calibrated recommendations they receive.
no code implementations • 13 Oct 2019 • Himan Abdollahpouri, Masoud Mansoury, Robin Burke, Bamshad Mobasher
In this paper, we use a metric called miscalibration to measure how responsive a recommendation algorithm is to users' true preferences, and we consider how various algorithms may result in different degrees of miscalibration.
1 code implementation • 2 Aug 2019 • Masoud Mansoury, Bamshad Mobasher, Robin Burke, Mykola Pechenizkiy
Research on fairness in machine learning has been recently extended to recommender systems.
3 code implementations • 31 Jul 2019 • Himan Abdollahpouri, Masoud Mansoury, Robin Burke, Bamshad Mobasher
Recommender systems are known to suffer from the popularity bias problem: popular (i.e., frequently rated) items get a lot of exposure while less popular ones are under-represented in the recommendations.
no code implementations • 10 Jul 2019 • Masoud Mansoury, Robin Burke, Bamshad Mobasher
This transformation flattens the rating distribution, better compensates for differences in rating distributions, and improves recommendation performance.
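One generic way to flatten a rating distribution is a per-user percentile (rank-based) transformation, sketched below as an illustration of the idea; whether this matches the paper's exact transformation is an assumption, and the data layout is illustrative.

```python
# Minimal sketch of a per-user percentile transformation of ratings: each
# rating is replaced by its percentile rank within that user's own ratings,
# which flattens the distribution. Illustrative only; the paper's exact
# transformation may differ.
from collections import defaultdict

def percentile_transform(ratings):
    """ratings: list of (user, item, rating) -> list of (user, item, percentile)."""
    by_user = defaultdict(list)
    for user, item, r in ratings:
        by_user[user].append((item, r))

    transformed = []
    for user, pairs in by_user.items():
        values = sorted(r for _, r in pairs)
        n = len(values)
        for item, r in pairs:
            # Percentile rank: fraction of this user's ratings at or below r.
            rank = sum(1 for v in values if v <= r) / n
            transformed.append((user, item, rank))
    return transformed
```

For example, a user who rates everything 4 or 5 and a user who uses the full 1-5 scale both end up with transformed ratings spread over (0, 1], making their profiles more comparable.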