1 code implementation • 22 Mar 2023 • Morgane Goibert, Clément Calauzènes, Ekhine Irurozki, Stéphan Clémençon
As the issue of robustness in AI systems becomes vital, statistical learning techniques that remain reliable even in the presence of partly contaminated data must be developed.
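As background for this robustness-under-contamination setting, a classic robust-statistics primitive is the median-of-means estimator; this is a generic illustration, not necessarily the estimator used in the paper:

```python
import numpy as np

def median_of_means(x, n_blocks=10, seed=0):
    """Median-of-means: shuffle the sample into blocks, average each block,
    then take the median of the block means. A few contaminated points can
    only corrupt their own blocks, so the median remains reliable."""
    rng = np.random.default_rng(seed)
    x = rng.permutation(np.asarray(x, dtype=float))
    blocks = np.array_split(x, n_blocks)
    return float(np.median([b.mean() for b in blocks]))

# 97 clean points centred at 0, plus 3 gross outliers at 100.
rng = np.random.default_rng(1)
sample = np.concatenate([rng.normal(0.0, 1.0, 97), np.full(3, 100.0)])
print(abs(sample.mean()))            # the plain mean is dragged toward the outliers
print(abs(median_of_means(sample)))  # median-of-means stays near 0
```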
1 code implementation • 4 Nov 2022 • Morgane Goibert, Thomas Ricatte, Elvis Dohmatob
In this paper, we investigate the impact of neural network (NN) topology on adversarial robustness.
no code implementations • 25 Mar 2022 • Elvis Dohmatob, Chuan Guo, Morgane Goibert
Finally, we show that if a decision-region is compact, then it admits a universal adversarial perturbation with $L_2$ norm which is $\sqrt{d}$ times smaller than the typical $L_2$ norm of a data point.
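To see why a perturbation $\sqrt{d}$ times smaller than a typical data point is genuinely small: if the coordinates of $x$ are i.i.d. of unit scale, then $\|x\|_2$ grows like $\sqrt{d}$, so such a perturbation has $L_2$ norm of order one regardless of dimension. A quick numeric sketch under a standard-Gaussian data assumption (not a distribution taken from the paper):

```python
import numpy as np

# For x with i.i.d. standard-Gaussian coordinates, E||x||_2 ~ sqrt(d),
# so a perturbation sqrt(d) times smaller has L2 norm ~ 1 for every d.
rng = np.random.default_rng(0)
for d in [10, 100, 1000, 10000]:
    x = rng.standard_normal((1000, d))
    typical = np.linalg.norm(x, axis=1).mean()   # typical data-point norm
    print(d, round(typical, 1), round(typical / np.sqrt(d), 3))
```

The last column stays close to 1 as $d$ grows, which is the scaling the statement relies on.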
no code implementations • 20 Jan 2022 • Morgane Goibert, Stéphan Clémençon, Ekhine Irurozki, Pavlo Mozharovskyi
The concept of median/consensus has been widely investigated in order to provide a statistical summary of ranking data, i.e., realizations of a random permutation $\Sigma$ of a finite set $\{1, \ldots, n\}$ with $n \geq 1$, say.
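As a concrete illustration of a consensus of rankings, here is a Borda-count aggregation, a classic tractable surrogate for the Kemeny median; this is a standard baseline, not the procedure proposed in the paper:

```python
import numpy as np

def borda_consensus(rankings):
    """Borda count: each ranking lists items from most to least preferred;
    an item in position p of a ranking over n items scores n - p points,
    and the consensus orders items by decreasing total score."""
    rankings = np.asarray(rankings)
    n = rankings.shape[1]
    scores = np.zeros(n)
    for r in rankings:
        for pos, item in enumerate(r):
            scores[item] += n - pos
    return [int(i) for i in np.argsort(-scores, kind="stable")]

# Three voters ranking items {0, 1, 2, 3} from best to worst.
votes = [[0, 1, 2, 3], [1, 0, 2, 3], [0, 2, 1, 3]]
print(borda_consensus(votes))  # → [0, 1, 2, 3]
```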
no code implementations • 27 Jun 2019 • Morgane Goibert, Elvis Dohmatob
We study Label-Smoothing as a means for improving adversarial robustness of supervised deep-learning models.
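For reference, the classic form of label smoothing mixes the one-hot target with the uniform distribution; the paper studies smoothing as a robustness tool and may consider other variants, so this is only the standard baseline:

```python
import numpy as np

def smooth_labels(y, n_classes, eps=0.1):
    """Classic label smoothing: (1 - eps) on the one-hot target
    plus eps spread uniformly over all n_classes classes."""
    onehot = np.eye(n_classes)[y]
    return (1.0 - eps) * onehot + eps / n_classes

# True class 2 out of 4 classes, eps = 0.1:
print(smooth_labels(np.array([2]), n_classes=4, eps=0.1))
# → [[0.025, 0.025, 0.925, 0.025]]
```

Each smoothed target still sums to 1, but the model is no longer pushed toward infinitely confident (one-hot) predictions.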