no code implementations • 22 Oct 2022 • Abhinandan Pal, Francesco Ranzato, Caterina Urban, Marco Zanella
We leverage this abstraction in two ways: (1) to enhance the interpretability of SVMs by deriving a novel feature importance measure, called abstract feature importance (AFI), that does not depend in any way on a given dataset or on the accuracy of the SVM and is very fast to compute, and (2) for verifying stability, notably individual fairness, of SVMs and producing concrete counterexamples when the verification fails.
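To illustrate the property being checked (a minimal sketch, not the abstract-interpretation-based verifier from the paper), the snippet below searches an SVM for an individual-fairness counterexample: a pair of inputs differing only in a hypothetical protected feature yet receiving different predictions. The data, the choice of feature 0 as protected, and the perturbation size are all assumptions made for illustration.

```python
# Sketch only: empirical counterexample search for individual fairness of an
# SVM. The paper verifies this property soundly via abstract interpretation;
# sampling, as done here, can falsify fairness but can never certify it.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 1] + 0.3 * X[:, 0] > 0).astype(int)  # label leaks feature 0
clf = SVC(kernel="rbf").fit(X, y)

def find_counterexample(model, samples, protected=0, delta=1.0):
    """Return a pair (x, x') that differ only in the protected feature
    but are classified differently, or None if sampling finds none."""
    for x in samples:
        x_prime = x.copy()
        x_prime[protected] += delta  # change only the protected attribute
        if model.predict([x])[0] != model.predict([x_prime])[0]:
            return x, x_prime
    return None

cex = find_counterexample(clf, rng.normal(size=(500, 4)))
print("counterexample:", cex)  # None means no violation was sampled
```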
1 code implementation • 13 Jul 2022 • Satoshi Munakata, Caterina Urban, Haruki Yokoyama, Koji Yamamoto, Kazuki Munakata
Semantic perturbations can significantly change the saliency map.
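For intuition (a toy sketch, not the authors' models or perturbations), the snippet below computes a gradient-based saliency map before and after a brightness shift, a perturbation that leaves the image's semantics intact, and measures how much the map moves:

```python
# Sketch: gradient saliency of a small placeholder CNN under a brightness
# shift. Model, input size, and perturbation are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Conv2d(1, 4, 3, padding=1), nn.ReLU(),
                      nn.Flatten(), nn.Linear(4 * 8 * 8, 2))

def saliency(model, x):
    x = x.clone().requires_grad_(True)
    model(x).max().backward()      # gradient of the top-class score
    return x.grad.abs().squeeze()  # |d score / d pixel|

x = torch.rand(1, 1, 8, 8)
s0 = saliency(model, x)
s1 = saliency(model, (x + 0.2).clamp(0, 1))  # brightness shift

print((s0 - s1).norm() / s0.norm())  # relative change of the saliency map
```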
no code implementations • 6 Apr 2021 • Caterina Urban, Antoine Miné
We review state-of-the-art formal methods applied to the emerging field of the verification of machine learning systems.
no code implementations • 4 Jan 2021 • Francesco Ranzato, Caterina Urban, Marco Zanella
We study the problem of formally verifying individual fairness of decision tree ensembles, as well as training tree models which maximize both accuracy and individual fairness.
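As a rough empirical proxy for the verified property (the paper performs formal verification; this sketch merely samples), the code below estimates how often a random forest violates individual fairness when a hypothetical binary protected feature is flipped. The dataset and the protected-feature index are assumptions for illustration.

```python
# Sketch: empirical individual-fairness violation rate of a tree ensemble.
# Sampling gives only a lower bound on unfairness, not a certification.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 5))
X[:, 0] = (X[:, 0] > 0)                        # binary protected attribute
y = (X[:, 1] + 0.5 * X[:, 0] > 0).astype(int)  # label leaks the attribute
forest = RandomForestClassifier(n_estimators=20, random_state=0).fit(X, y)

X_flip = X.copy()
X_flip[:, 0] = 1 - X_flip[:, 0]                # flip only the protected bit
violations = (forest.predict(X) != forest.predict(X_flip)).mean()
print(f"empirical violation rate: {violations:.2%}")
```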
1 code implementation • 5 Dec 2019 • Caterina Urban, Maria Christakis, Valentin Wüstholz, Fuyuan Zhang
Recently, there has been growing concern that machine-learning models, which currently assist or even automate decision making, reproduce, and in the worst case reinforce, biases present in the training data.