Search Results for author: Stephan Clemencon

Found 4 papers, 1 paper with code

Towards More Robust NLP System Evaluation: Handling Missing Scores in Benchmarks

no code implementations • 17 May 2023 • Anas Himmi, Ekhine Irurozki, Nathan Noiry, Stephan Clemencon, Pierre Colombo

This paper formalizes an existing problem in NLP research: benchmarking when some systems' scores are missing on some tasks, and proposes a novel approach to address it.

Benchmarking
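
The pitfall the abstract points to is easy to reproduce: when a system's score happens to be missing on a hard task, naively averaging only the available entries inflates its apparent quality. A minimal sketch in Python (all systems, tasks, and numbers are hypothetical; this illustrates the problem, not the paper's proposed approach):

```python
import numpy as np

# Hypothetical benchmark scores: rows = systems, columns = tasks.
# np.nan marks a missing score; all numbers are illustrative.
scores = np.array([
    [0.80, 0.75, np.nan],   # system A is missing on the hardest task
    [0.78, 0.74, 0.30],     # system B was evaluated everywhere
])

# Naive aggregation: average only the scores that are available.
naive_means = np.nanmean(scores, axis=1)
print(naive_means)  # [0.775, 0.6067]: A looks better than B purely
                    # because its missing score is on the hard task
```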

What are the best systems? New perspectives on NLP Benchmarking

1 code implementation • 8 Feb 2022 • Pierre Colombo, Nathan Noiry, Ekhine Irurozki, Stephan Clemencon

In Machine Learning, a benchmark refers to an ensemble of datasets associated with one or multiple metrics, together with a way to aggregate different systems' performances.

Benchmarking
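
To make the aggregation question concrete, here is a toy comparison of mean-score aggregation against a simple Borda-style rank aggregation, a standard rank-based technique (not necessarily the aggregation the paper proposes; all numbers are made up):

```python
import numpy as np

# Toy benchmark: rows = systems, columns = tasks.
scores = np.array([
    [0.91, 0.40, 0.85],
    [0.89, 0.55, 0.80],
    [0.70, 0.60, 0.95],
])

# Mean aggregation compares raw scores, so tasks with wide score
# ranges dominate the final ordering.
mean_agg = scores.mean(axis=1)

# Borda-style aggregation: rank the systems within each task
# (0 = worst), then sum the ranks; this is invariant to per-task
# score scales.
ranks = scores.argsort(axis=0).argsort(axis=0)
borda = ranks.sum(axis=1)

print(mean_agg)  # [0.72  0.747 0.75 ] -> best to worst: 3, 2, 1
print(borda)     # [3 2 4]             -> best to worst: 3, 1, 2
```

Even on this tiny example the two procedures disagree on second place, which is exactly the kind of instability that makes the choice of aggregation consequential.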

Learning an Ethical Module for Bias Mitigation of pre-trained Models

no code implementations • 29 Sep 2021 • Jean-Rémy Conti, Nathan Noiry, Stephan Clemencon, Vincent Despiegel, Stéphane Gentric

In spite of the high performance and reliability of deep learning algorithms in a broad range of everyday applications, many investigations show that numerous models exhibit biases, discriminating against specific subgroups of the population.

AUC Optimisation and Collaborative Filtering

no code implementations • 25 Aug 2015 • Charanpal Dhanjal, Romaric Gaudel, Stephan Clemencon

With this in mind, we propose a class of objective functions over matrix factorisations which primarily represent a smooth surrogate for the real AUC, and in a special case we show how to prioritise the top of the list.

Collaborative Filtering, Recommendation Systems
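
The AUC counts how often a relevant item is scored above an irrelevant one; that indicator is a step function, so a common smoothing trick is to replace it with a sigmoid of the score difference, yielding a differentiable surrogate. A minimal sketch under a matrix-factorisation scoring model (variable names and data are hypothetical, not the paper's code):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def smooth_auc(pos_scores, neg_scores):
    # Exact AUC averages the indicator 1[s_pos > s_neg] over all
    # (positive, negative) pairs; the sigmoid smooths the step so
    # the objective becomes differentiable in the factor matrices.
    diffs = pos_scores[:, None] - neg_scores[None, :]
    return sigmoid(diffs).mean()

rng = np.random.default_rng(0)
u = rng.normal(size=8)            # one user's latent factors
V = rng.normal(size=(20, 8))      # item latent factors
scores = V @ u                    # predicted preference scores
pos, neg = scores[:5], scores[5:] # relevant vs irrelevant items
print(smooth_auc(pos, neg))       # in (0, 1); maximising it pushes
                                  # relevant items above irrelevant ones
```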
