no code implementations • 18 Jul 2023 • Sabine Muzellec, Thomas Fel, Victor Boutin, Léo Andéol, Rufin VanRullen, Thomas Serre
Attribution methods are a class of explainability (XAI) techniques that aim to assess how individual inputs contribute to a model's decision-making process.
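As a minimal sketch of what such a method computes, the snippet below implements plain gradient saliency, one common attribution baseline; the toy model and input shapes are illustrative placeholders, not the paper's setup.

```python
import torch
import torch.nn as nn

# Toy classifier and input; any differentiable model would do.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.randn(1, 1, 28, 28, requires_grad=True)

logits = model(x)
target = logits.argmax(dim=1).item()

# Gradient of the predicted class score w.r.t. the input: each entry
# estimates how much that input dimension contributes to the decision.
logits[0, target].backward()
attribution = x.grad.abs().squeeze()
```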
no code implementations • 12 Apr 2023 • Léo Andéol, Thomas Fel, Florence De Grancey, Luca Mossina
Deploying deep learning models in real-world certified systems requires the ability to provide confidence estimates that accurately reflect their uncertainty.
no code implementations • 1 Apr 2023 • Hiroki Waida, Yuichiro Wada, Léo Andéol, Takumi Nakagawa, Yuhui Zhang, Takafumi Kanamori
We first prove that the formulation characterizes the structure of representations learned with the kernel-based contrastive learning framework.
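To make the setting concrete, here is a minimal sketch of a kernel-based contrastive objective, assuming a Gaussian kernel on embeddings of two augmented views; the exact loss form is an illustrative stand-in, not the paper's formulation.

```python
import torch

def gaussian_kernel(a, b, sigma=1.0):
    # k(a, b) = exp(-||a - b||^2 / (2 * sigma^2)), computed pairwise.
    d2 = torch.cdist(a, b).pow(2)
    return torch.exp(-d2 / (2 * sigma ** 2))

def kernel_contrastive_loss(z1, z2, sigma=1.0):
    # z1, z2: (n, d) embeddings of two views of the same n samples.
    k = gaussian_kernel(z1, z2, sigma)
    pos = k.diagonal().mean()  # pull matched pairs together
    # Push apart mismatched pairs (all off-diagonal entries).
    neg = (k.sum() - k.diagonal().sum()) / (k.numel() - k.shape[0])
    return neg - pos  # small when positive similarity dominates

z1, z2 = torch.randn(8, 16), torch.randn(8, 16)
loss = kernel_contrastive_loss(z1, z2)
```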
no code implementations • 26 Jan 2023 • Léo Andéol, Thomas Fel, Florence De Grancey, Luca Mossina
We present an application of conformal prediction, a form of uncertainty quantification with guarantees, to the detection of railway signals.
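For intuition, the following is a minimal sketch of split conformal prediction for classification, the standard construction behind such coverage guarantees; the data is synthetic, and the railway-signal setting is only the paper's application domain.

```python
import numpy as np

rng = np.random.default_rng(0)
n_cal, n_classes, alpha = 500, 4, 0.1  # target 90% coverage

# Synthetic softmax scores and true labels for a held-out calibration set.
cal_probs = rng.dirichlet(np.ones(n_classes), size=n_cal)
cal_labels = rng.integers(0, n_classes, size=n_cal)

# Nonconformity score: 1 - probability assigned to the true class.
scores = 1.0 - cal_probs[np.arange(n_cal), cal_labels]

# Conformal quantile, with the finite-sample (n + 1) correction.
level = np.ceil((n_cal + 1) * (1 - alpha)) / n_cal
q = np.quantile(scores, level, method="higher")

# Prediction set for a new example: all classes whose score is <= q.
# Under exchangeability, this set contains the true label with
# probability at least 1 - alpha.
test_probs = rng.dirichlet(np.ones(n_classes))
prediction_set = np.where(1.0 - test_probs <= q)[0]
```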