Harmful content detection models tend to have higher false positive rates for content from marginalized groups.
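A per-group false positive rate makes this kind of disparity measurable. The sketch below (illustrative only; the data layout and function name are assumptions, not from the paper) computes FPR separately for each group, counting only benign content:

```python
from collections import defaultdict

def false_positive_rate_by_group(records):
    """Compute the false positive rate per demographic group.

    records: iterable of (group, label, prediction) tuples, where
    label/prediction are booleans (True = harmful / flagged).
    This schema is a hypothetical example, not the paper's.
    """
    fp = defaultdict(int)  # benign content wrongly flagged
    tn = defaultdict(int)  # benign content correctly passed
    for group, label, pred in records:
        if not label:  # only benign content enters the FPR
            if pred:
                fp[group] += 1
            else:
                tn[group] += 1
    return {g: fp[g] / (fp[g] + tn[g]) for g in set(fp) | set(tn)}
```

Comparing the resulting per-group rates is one simple way to surface the skew described above.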
Traditionally, recommender systems operate by returning to a user a set of items ranked in order of their estimated relevance to that user.
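That ranking step can be sketched in a few lines; here `relevance_model` stands in for whatever scoring function the system has learned (the name and signature are assumptions for illustration):

```python
def rank_items(user, items, relevance_model, k=10):
    """Return the top-k items for a user, sorted by estimated relevance.

    relevance_model: any callable scoring a (user, item) pair; this is a
    placeholder for a learned model, not an API from the paper.
    """
    scored = sorted(items, key=lambda item: relevance_model(user, item),
                    reverse=True)  # highest estimated relevance first
    return scored[:k]
```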
We show that these metrics can identify the content suggestion algorithms that contribute most strongly to skewed outcomes across users.
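One minimal way to rank algorithms by how skewed their outcomes are is to measure, for each algorithm, the gap between the best- and worst-off user groups; the metric and data layout below are hypothetical stand-ins, not the paper's definitions:

```python
def outcome_gap(outcomes_by_group):
    """Gap between the highest and lowest group-mean outcome.

    outcomes_by_group: {group_name: [outcome values]} -- an assumed layout.
    """
    means = [sum(vals) / len(vals) for vals in outcomes_by_group.values()]
    return max(means) - min(means)

def most_skewed(algorithms):
    """Return the algorithm whose outcomes show the largest cross-group gap.

    algorithms: {algorithm_name: {group_name: [outcome values]}}.
    """
    return max(algorithms, key=lambda name: outcome_gap(algorithms[name]))
```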
Our results suggest that observational studies derived from user self-selection are a poor alternative to randomized experimentation on online platforms.
28 Apr 2020 • Luca Belli, Sofia Ira Ktena, Alykhan Tejani, Alexandre Lung-Yut-Fon, Frank Portman, Xiao Zhu, Yuanpu Xie, Akshay Gupta, Michael Bronstein, Amra Delić, Gabriele Sottocornola, Walter Anelli, Nazareno Andrade, Jessie Smith, Wenzhe Shi
Recommender systems nowadays constitute the core engine of most social network platforms, aiming to maximize user satisfaction along with other key business objectives.