1 code implementation • 7 Jun 2023 • Yves Rychener, Daniel Kuhn, Tobias Sutter
We develop a principled approach to end-to-end learning in stochastic optimization.
1 code implementation • 30 May 2022 • Yves Rychener, Bahar Taskesen, Daniel Kuhn
This means that the distributions of the predictions within the two groups should be close with respect to the Kolmogorov distance, and fairness is achieved by penalizing the dissimilarity of these two distributions in the objective function of the learning problem.
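The Kolmogorov-distance penalty described above can be sketched as follows; this is a minimal illustration, not the paper's implementation, and the function names (`kolmogorov_distance`, `fair_loss`) and the penalty weight `lam` are assumptions for the example.

```python
import numpy as np

def kolmogorov_distance(preds_a, preds_b):
    """Empirical Kolmogorov distance: the largest gap between the
    empirical CDFs of the two groups' predictions over all thresholds."""
    thresholds = np.sort(np.concatenate([preds_a, preds_b]))
    cdf_a = np.searchsorted(np.sort(preds_a), thresholds, side="right") / len(preds_a)
    cdf_b = np.searchsorted(np.sort(preds_b), thresholds, side="right") / len(preds_b)
    return float(np.max(np.abs(cdf_a - cdf_b)))

def fair_loss(task_loss, preds, groups, lam=1.0):
    # Hypothetical penalized objective: task loss plus lam times the
    # dissimilarity of the two groups' prediction distributions.
    penalty = kolmogorov_distance(preds[groups == 0], preds[groups == 1])
    return task_loss + lam * penalty
```

With identical prediction distributions in both groups the penalty vanishes, so the penalized objective reduces to the task loss; fully separated distributions incur the maximal penalty of `lam`.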
no code implementations • 24 Dec 2020 • Yves Rychener, Xavier Renard, Djamé Seddah, Pascal Frossard, Marcin Detyniecki
NLP interpretability aims to increase trust in model predictions.
1 code implementation • 24 Dec 2020 • Yves Rychener, Xavier Renard, Djamé Seddah, Pascal Frossard, Marcin Detyniecki
Current methods for Black-Box NLP interpretability, like LIME or SHAP, are based on altering the text to interpret by removing words and modeling the Black-Box response.
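The word-removal perturbation scheme mentioned above can be illustrated with a leave-one-word-out sketch; this is a simplified stand-in for methods like LIME or SHAP, not the paper's method, and `predict` stands for any black-box scoring function.

```python
def occlusion_importance(predict, text):
    """Leave-one-word-out importance: delete each word in turn and
    record how much the black-box score drops relative to the full text.
    `predict` is an arbitrary black-box function text -> float."""
    words = text.split()
    base = predict(text)
    importances = []
    for i in range(len(words)):
        perturbed = " ".join(words[:i] + words[i + 1:])
        # A large drop means the removed word drove the prediction.
        importances.append((words[i], base - predict(perturbed)))
    return importances
```

A word whose removal leaves the score unchanged gets importance zero, while a word the black box relies on gets an importance equal to the score drop its removal causes.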