no code implementations • 9 Jan 2025 • Robin Burke, Gediminas Adomavicius, Toine Bogers, Tommaso Di Noia, Dominik Kowald, Julia Neidhardt, Özlem Özgöbek, Maria Soledad Pera, Nava Tintarev, Jürgen Ziegler
Multistakeholder recommender systems are those that account for the impacts and preferences of multiple groups of individuals, not just the end users receiving recommendations.
no code implementations • 9 Oct 2024 • Robin Burke, Morgan Sylvester
We define userist recommendation as an approach to recommender systems framed solely in terms of the relation between the user and system.
no code implementations • 6 Oct 2024 • Amanda Aird, Elena Štefancová, Cassidy All, Amy Voida, Martin Homola, Nicholas Mattei, Robin Burke
Algorithmic fairness in recommender systems requires close attention to the needs of a diverse set of stakeholders that may have competing interests.
no code implementations • 21 Sep 2024 • Elena Stefancova, Cassidy All, Joshua Paup, Martin Homola, Nicholas Mattei, Robin Burke
Synthetic data is a useful resource for algorithmic research.
1 code implementation • 10 Sep 2023 • Amanda Aird, Cassidy All, Paresha Farastu, Elena Stefancova, Joshua Sun, Nicholas Mattei, Robin Burke
Fairness problems in recommender systems often have a complexity in practice that is not adequately captured in simplified research formulations.
no code implementations • 2 Mar 2023 • Amanda Aird, Paresha Farastu, Joshua Sun, Elena Štefancová, Cassidy All, Amy Voida, Nicholas Mattei, Robin Burke
Algorithmic fairness in the context of personalized recommendation presents significantly different challenges to those commonly encountered in classification tasks.
no code implementations • 8 Sep 2022 • Paresha Farastu, Nicholas Mattei, Robin Burke
The concern is that a bossy user may be able to shift the cost of fairness onto other users, improving their own outcomes at others' expense.
no code implementations • 7 Aug 2021 • Masoud Mansoury, Himan Abdollahpouri, Bamshad Mobasher, Mykola Pechenizkiy, Robin Burke, Milad Sabouri
This is especially problematic when bias is amplified over time as a few popular items are repeatedly over-represented in recommendation lists.
no code implementations • 7 Jul 2021 • Masoud Mansoury, Himan Abdollahpouri, Mykola Pechenizkiy, Bamshad Mobasher, Robin Burke
Fairness is a critical system-level objective in recommender systems that has been the subject of extensive recent research.
no code implementations • 12 May 2021 • Michael D. Ekstrand, Anubrata Das, Robin Burke, Fernando Diaz
Recommendation, information retrieval, and other information access systems pose unique challenges for investigating and applying the fairness and non-discrimination concepts that have been developed for studying other machine learning systems.
no code implementations • 16 Mar 2021 • Nasim Sonboli, Jessie J. Smith, Florencia Cabral Berenfus, Robin Burke, Casey Fiesler
Although prior work in other branches of AI has explored explanations as a tool to increase fairness, that work has not focused on recommendation.
no code implementations • 10 Mar 2021 • Himan Abdollahpouri, Masoud Mansoury, Robin Burke, Bamshad Mobasher, Edward Malthouse
In this paper, we show the limitations of existing metrics for evaluating popularity bias mitigation from the users' perspective, and we propose a new metric that addresses these limitations.
1 code implementation • EACL (AdaptNLP) 2021 • Xiaolei Huang, Michael J. Paul, Robin Burke, Franck Dernoncourt, Mark Dredze
In this study, we treat user interests as domains and empirically examine how user language varies across this factor in three English social media datasets.
no code implementations • 5 Sep 2020 • Nasim Sonboli, Robin Burke, Nicholas Mattei, Farzad Eskandanian, Tian Gao
As recommender systems are being designed and deployed for an increasing number of socially-consequential applications, it has become important to consider what properties of fairness these systems exhibit.
no code implementations • 21 Aug 2020 • Himan Abdollahpouri, Masoud Mansoury, Robin Burke, Bamshad Mobasher
Moreover, we show that the more a group is affected by the algorithmic popularity bias, the more their recommendations are miscalibrated.
no code implementations • 25 Jul 2020 • Masoud Mansoury, Himan Abdollahpouri, Mykola Pechenizkiy, Bamshad Mobasher, Robin Burke
Recommendation algorithms are known to suffer from popularity bias: a few popular items are recommended frequently while the majority of other items are ignored.
no code implementations • 23 Jul 2020 • Himan Abdollahpouri, Masoud Mansoury, Robin Burke, Bamshad Mobasher
The effectiveness of these approaches, however, has not been assessed in multistakeholder environments where in addition to the users who receive the recommendations, the utility of the suppliers of the recommended items should also be considered.
no code implementations • 21 May 2020 • Nasim Sonboli, Farzad Eskandanian, Robin Burke, Weiwen Liu, Bamshad Mobasher
In this paper, we present a re-ranking approach to fairness-aware recommendation that learns individual preferences across multiple fairness dimensions and uses them to enhance provider fairness in recommendation results.
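Greedy re-ranking with a relevance/fairness trade-off is a common pattern in this line of work. The sketch below is a deliberately simplified, hypothetical version (the function name, the single global trade-off weight `lam`, and the provider-novelty bonus are illustrative assumptions; the paper learns per-user preferences across multiple fairness dimensions, which this omits):

```python
def fairness_aware_rerank(scored_items, provider_of, k, lam=0.5):
    """Greedy re-ranking sketch: at each step, pick the item that best
    trades off relevance against covering a not-yet-seen provider.
    Simplified illustration, not the paper's exact method."""
    selected, seen_providers = [], set()
    remaining = dict(scored_items)  # item -> relevance score
    while len(selected) < k and remaining:
        best = max(
            remaining,
            key=lambda i: (1 - lam) * remaining[i]
            + lam * (provider_of[i] not in seen_providers),
        )
        selected.append(best)
        seen_providers.add(provider_of[best])
        del remaining[best]
    return selected
```

With `lam > 0`, an item from an uncovered provider can outrank a slightly more relevant item from an already-covered provider, spreading exposure across providers.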
no code implementations • 3 May 2020 • Masoud Mansoury, Himan Abdollahpouri, Mykola Pechenizkiy, Bamshad Mobasher, Robin Burke
That leads to low coverage of items in recommendation lists across users (i.e., low aggregate diversity) and unfair distribution of recommended items.
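Aggregate diversity, as used in this line of work, is essentially catalog coverage: the fraction of all items that appear in at least one user's recommendation list. A minimal sketch (the function name is illustrative, not from the paper):

```python
def aggregate_diversity(rec_lists, catalog_size):
    """Fraction of the item catalog that appears in at least one
    user's recommendation list (higher is more diverse)."""
    covered = set()
    for recs in rec_lists:
        covered.update(recs)
    return len(covered) / catalog_size
```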
no code implementations • 25 Mar 2020 • Himan Abdollahpouri, Robin Burke, Masoud Mansoury
It is well known that recommendation algorithms suffer from popularity bias: a few popular items are over-recommended, leaving the majority of other items without proportionate attention.
no code implementations • 16 Mar 2020 • Carole-Jean Wu, Robin Burke, Ed H. Chi, Joseph Konstan, Julian McAuley, Yves Raimond, Hao Zhang
Deep learning-based recommendation models are used pervasively, for example to recommend movies, products, or other information most relevant to users, in order to enhance the user experience.
no code implementations • 13 Mar 2020 • Jessie Smith, Nasim Sonboli, Casey Fiesler, Robin Burke
Algorithmic fairness for artificial intelligence has become increasingly relevant as these systems become more pervasive in society.
no code implementations • 13 Oct 2019 • Himan Abdollahpouri, Masoud Mansoury, Robin Burke, Bamshad Mobasher
In this paper, we use a metric called miscalibration for measuring how a recommendation algorithm is responsive to users' true preferences and we consider how various algorithms may result in different degrees of miscalibration.
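Miscalibration is commonly operationalized as a divergence between the category (e.g., genre) distribution of a user's profile and that of their recommendations. The sketch below uses a smoothed KL divergence; the function name, the smoothing scheme, and the parameter `alpha` are illustrative assumptions, and the paper's exact formula may differ:

```python
import math

def miscalibration(profile_dist, rec_dist, alpha=0.01):
    """Smoothed KL divergence between a user's profile genre
    distribution p and their recommendations' genre distribution q.
    0 means perfectly calibrated; larger values mean more miscalibrated.
    Illustrative sketch, not necessarily the paper's exact metric."""
    score = 0.0
    for genre, p in profile_dist.items():
        q = rec_dist.get(genre, 0.0)
        # Smooth q toward p so genres missing from the recommendations
        # do not make the divergence infinite.
        q_smoothed = (1 - alpha) * q + alpha * p
        if p > 0:
            score += p * math.log2(p / q_smoothed)
    return score
```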
no code implementations • 19 Sep 2019 • Zhu Sun, Qing Guo, Jie Yang, Hui Fang, Guibing Guo, Jie Zhang, Robin Burke
This Research Commentary aims to provide a comprehensive and systematic survey of the recent research on recommender systems with side information.
no code implementations • 13 Sep 2019 • Kun Lin, Nasim Sonboli, Bamshad Mobasher, Robin Burke
Recommender systems are personalized: we expect the results given to a particular user to reflect that user's preferences.
1 code implementation • 2 Aug 2019 • Masoud Mansoury, Bamshad Mobasher, Robin Burke, Mykola Pechenizkiy
Research on fairness in machine learning has been recently extended to recommender systems.
4 code implementations • 31 Jul 2019 • Himan Abdollahpouri, Masoud Mansoury, Robin Burke, Bamshad Mobasher
Recommender systems are known to suffer from the popularity bias problem: popular (i.e., frequently rated) items get a lot of exposure while less popular ones are under-represented in the recommendations.
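Two measures often used to diagnose this bias are the average popularity of recommended items and the share of recommendations drawn from the long tail. The sketch below is illustrative only (the function name, the `tail_cutoff` convention, and the exact formulas are assumptions, not necessarily those used in the paper):

```python
def popularity_bias_metrics(rec_lists, item_popularity, tail_cutoff=0.2):
    """Sketch of two popularity-bias diagnostics:
    - average recommendation popularity (ARP): mean popularity of
      recommended items, averaged over users;
    - tail share: fraction of recommended items outside the most
      popular `tail_cutoff` portion of the catalog."""
    # Rank items by popularity; the top tail_cutoff fraction is the "head".
    ranked = [i for i, _ in sorted(item_popularity.items(), key=lambda kv: -kv[1])]
    head = set(ranked[: int(len(ranked) * tail_cutoff)])
    arp = sum(
        sum(item_popularity[i] for i in recs) / len(recs) for recs in rec_lists
    ) / len(rec_lists)
    tail_share = sum(
        sum(1 for i in recs if i not in head) / len(recs) for recs in rec_lists
    ) / len(rec_lists)
    return arp, tail_share
```

A high ARP combined with a low tail share indicates that the recommender concentrates exposure on already-popular items.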
no code implementations • 30 Jul 2019 • Himan Abdollahpouri, Robin Burke
There is growing research interest in recommendation as a multi-stakeholder problem, one where the interests of multiple parties should be taken into account.
no code implementations • 10 Jul 2019 • Masoud Mansoury, Robin Burke, Bamshad Mobasher
This transformation flattens the rating distribution, better compensates for differences in rating distributions, and improves recommendation performance.
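A rank-based (percentile) transformation is one way to flatten a user's rating distribution and make ratings comparable across users with different rating habits. The sketch below is a simplified illustration under that assumption; the paper's exact transformation may differ:

```python
def percentile_transform(user_ratings):
    """Replace each of one user's raw ratings with its percentile rank
    within that user's own ratings, flattening the distribution.
    Tied ratings receive the same percentile. Illustrative sketch only."""
    sorted_vals = sorted(user_ratings.values())
    n = len(sorted_vals)
    return {
        item: (sorted_vals.index(r) + 1) / n  # percentile of rating r
        for item, r in user_ratings.items()
    }
```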
no code implementations • 27 Jun 2019 • Himan Abdollahpouri, Robin Burke
Many recommendation algorithms suffer from popularity bias: a small number of popular items are recommended too frequently, while other items receive insufficient exposure.
no code implementations • 1 May 2019 • Himan Abdollahpouri, Gediminas Adomavicius, Robin Burke, Ido Guy, Dietmar Jannach, Toshihiro Kamishima, Jan Krasnodebski, Luiz Pizzato
Recommender systems are personalized information access applications; they are ubiquitous in today's online environment, and effective at finding items that meet user needs and tastes.
no code implementations • 22 Jan 2019 • Himan Abdollahpouri, Robin Burke, Bamshad Mobasher
Many recommender systems suffer from popularity bias: popular items are recommended frequently while less popular, niche products are recommended rarely or not at all.
1 code implementation • 12 Sep 2018 • Robin Burke, Jackson Kontny, Nasim Sonboli
When evaluating recommender systems for their fairness, it may be necessary to make use of demographic attributes, which are personally sensitive and usually excluded from publicly-available data sets.
no code implementations • 15 Feb 2018 • Himan Abdollahpouri, Robin Burke, Bamshad Mobasher
Many recommender systems suffer from the popularity bias problem: popular items are recommended frequently while less popular, niche products are recommended rarely, if at all.