no code implementations • 22 Mar 2024 • Eduardo Figueiredo, Andrea Patane, Morteza Lahijanian, Luca Laurenti
Uncertainty propagation in non-linear dynamical systems has become a key problem in various fields including control theory and machine learning.
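Uncertainty propagation of this kind can be illustrated with a minimal Monte Carlo sketch: push samples of an uncertain initial state through a nonlinear map and inspect the resulting empirical distribution. The dynamics `f` below is an illustrative stand-in, not the system studied in the paper.

```python
import numpy as np

# Hypothetical 1-D nonlinear dynamics x_{t+1} = f(x_t) + process noise;
# this map is an illustrative stand-in, not the paper's system.
def f(x):
    return 0.9 * np.sin(x) + 0.1 * x

rng = np.random.default_rng(0)

# Initial uncertainty: a Gaussian belief over the state.
samples = rng.normal(loc=1.0, scale=0.2, size=10_000)

# Propagate the samples through the dynamics for a few steps,
# adding process noise at each step.
for _ in range(5):
    samples = f(samples) + rng.normal(scale=0.05, size=samples.shape)

# The empirical mean and spread approximate the propagated distribution,
# which is generally non-Gaussian because f is nonlinear.
mean, std = samples.mean(), samples.std()
```

Because `f` is nonlinear, the pushed-forward distribution is no longer Gaussian, which is precisely what makes closed-form propagation hard and motivates approximation schemes with guarantees.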
1 code implementation • 3 Oct 2023 • Matthew Wicker, Luca Laurenti, Andrea Patane, Nicola Paoletti, Alessandro Abate, Marta Kwiatkowska
The computed lower bounds provide a safety certificate for the given policy and BNN model.
1 code implementation • 23 Jun 2023 • Matthew Wicker, Andrea Patane, Luca Laurenti, Marta Kwiatkowska
We study the problem of certifying the robustness of Bayesian neural networks (BNNs) to adversarial input perturbations.
1 code implementation • 19 Jun 2023 • Steven Adams, Andrea Patane, Morteza Lahijanian, Luca Laurenti
In this paper, we introduce BNN-DP, an efficient algorithmic framework for analyzing the adversarial robustness of Bayesian Neural Networks (BNNs).
1 code implementation • 21 Apr 2023 • Alice Doherty, Matthew Wicker, Luca Laurenti, Andrea Patane
We study Individual Fairness (IF) for Bayesian neural networks (BNNs).
2 code implementations • 13 Jul 2022 • Luca Bortolussi, Ginevra Carbone, Luca Laurenti, Andrea Patane, Guido Sanguinetti, Matthew Wicker
Despite significant efforts, both practical and theoretical, training deep learning models robust to adversarial attacks is still an open problem.
1 code implementation • 11 May 2022 • Elias Benussi, Andrea Patane, Matthew Wicker, Luca Laurenti, Marta Kwiatkowska
We consider the problem of certifying the individual fairness (IF) of feed-forward neural networks (NNs).
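A common ε-δ reading of individual fairness — inputs within δ of each other (under a similarity metric) should receive outputs within ε — can be probed with random search, as in the hedged sketch below. The toy network and thresholds are hypothetical; note that failing to find a counterexample is not a certificate, which is exactly why certification techniques are needed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy feed-forward network with random weights (illustrative only).
W1 = rng.normal(size=(4, 8)); b1 = rng.normal(size=8)
W2 = rng.normal(size=(8, 1)); b2 = rng.normal(size=1)

def net(x):
    h = np.maximum(x @ W1 + b1, 0.0)   # ReLU hidden layer
    return (h @ W2 + b2).item()

def if_counterexample_search(x, delta, eps, trials=1000):
    """Sample x' with ||x - x'||_inf <= delta and flag any pair whose
    outputs differ by more than eps. Finding no counterexample is NOT
    a certificate of individual fairness -- certification requires
    sound bounding of the network's behavior over the whole ball."""
    y = net(x)
    for _ in range(trials):
        xp = x + rng.uniform(-delta, delta, size=x.shape)
        if abs(net(xp) - y) > eps:
            return xp  # counterexample: similar inputs, dissimilar outputs
    return None

x0 = rng.normal(size=4)
cex = if_counterexample_search(x0, delta=0.05, eps=10.0)
```

Random search can only falsify fairness; a certificate must bound the output difference over every point in the δ-ball, not just sampled ones.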
1 code implementation • 21 May 2021 • Matthew Wicker, Luca Laurenti, Andrea Patane, Nicola Paoletti, Alessandro Abate, Marta Kwiatkowska
We consider the problem of computing reach-avoid probabilities for iterative predictions made with Bayesian neural network (BNN) models.
1 code implementation • 7 Apr 2021 • Andrea Patane, Arno Blaas, Luca Laurenti, Luca Cardelli, Stephen Roberts, Marta Kwiatkowska
Gaussian processes (GPs) enable principled computation of model uncertainty, making them attractive for safety-critical applications.
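The "principled computation of model uncertainty" refers to the GP posterior predictive variance, which standard GP regression yields in closed form. A minimal sketch with a squared-exponential kernel and made-up data (all hyperparameters illustrative):

```python
import numpy as np

def rbf(a, b, ell=1.0, sf=1.0):
    # Squared-exponential kernel k(a, b) = sf^2 * exp(-(a-b)^2 / (2 ell^2)).
    d = a[:, None] - b[None, :]
    return sf**2 * np.exp(-0.5 * (d / ell) ** 2)

# Noisy observations of an underlying function (illustrative data).
X = np.array([-2.0, -1.0, 0.0, 1.5])
y = np.sin(X)
noise = 1e-2

# Standard GP regression: posterior mean and variance via Cholesky.
K = rbf(X, X) + noise * np.eye(len(X))
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))

Xs = np.linspace(-3, 3, 61)            # test inputs
Ks = rbf(X, Xs)
mu = Ks.T @ alpha                      # posterior mean
v = np.linalg.solve(L, Ks)
var = np.diag(rbf(Xs, Xs)) - np.sum(v * v, axis=0)  # posterior variance
```

The predictive variance shrinks near the training inputs and grows away from them, so the model quantifies where its predictions should not be trusted — the property that makes GPs attractive in safety-critical settings.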
1 code implementation • 10 Feb 2021 • Matthew Wicker, Luca Laurenti, Andrea Patane, Zhoutong Chen, Zheng Zhang, Marta Kwiatkowska
We consider adversarial training of deep neural networks through the lens of Bayesian learning, and present a principled framework for adversarial training of Bayesian Neural Networks (BNNs) with certifiable guarantees.
1 code implementation • 21 Apr 2020 • Matthew Wicker, Luca Laurenti, Andrea Patane, Marta Kwiatkowska
We study probabilistic safety for Bayesian Neural Networks (BNNs) under adversarial input perturbations.
1 code implementation • NeurIPS 2020 • Ginevra Carbone, Matthew Wicker, Luca Laurenti, Andrea Patane, Luca Bortolussi, Guido Sanguinetti
Vulnerability to adversarial attacks is one of the principal hurdles to the adoption of deep learning in safety-critical applications.
no code implementations • 29 Nov 2019 • Kyriakos Polymenakos, Luca Laurenti, Andrea Patane, Jan-Peter Calliess, Luca Cardelli, Marta Kwiatkowska, Alessandro Abate, Stephen Roberts
Gaussian Processes (GPs) are widely employed in control and learning because of their principled treatment of uncertainty.
no code implementations • 25 Sep 2019 • Luca Laurenti, Andrea Patane, Matthew Wicker, Luca Bortolussi, Luca Cardelli, Marta Kwiatkowska
We investigate global adversarial robustness guarantees for machine learning models.
1 code implementation • 28 May 2019 • Arno Blaas, Andrea Patane, Luca Laurenti, Luca Cardelli, Marta Kwiatkowska, Stephen Roberts
We apply our method to investigate the robustness of GPC models on a 2D synthetic dataset, the SPAM dataset, and a subset of the MNIST dataset, comparing different GPC training techniques and showing how our method can be used for interpretability analysis.
1 code implementation • 5 Mar 2019 • Luca Cardelli, Marta Kwiatkowska, Luca Laurenti, Nicola Paoletti, Andrea Patane, Matthew Wicker
We introduce a probabilistic robustness measure for Bayesian Neural Networks (BNNs), defined as the probability that, given a test point, there exists a point within a bounded set such that the BNN prediction differs between the two.
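The quantity defined above — the posterior probability that some point in a bounded set around the test input flips the prediction — can be illustrated with a crude Monte Carlo estimate. The tiny linear "BNN" and random inner search below are hypothetical stand-ins; random search only under-approximates the inner "there exists", whereas the paper computes certified bounds.

```python
import numpy as np

rng = np.random.default_rng(0)

# Posterior over the weights of a tiny linear "BNN": Gaussian around a mean.
W_mean = rng.normal(size=(2, 2))
W_std = 0.1

def predict(W, x):
    # Predicted class = argmax of a linear logit map (illustrative model).
    return int(np.argmax(W @ x))

def robustness_probability(x, eps, n_weights=200, n_points=100):
    """Monte Carlo estimate of P_W[ exists x' in B_inf(x, eps) such that
    predict(W, x') != predict(W, x) ]. Random search over x' can miss
    adversarial points, so this only lower-bounds the inner 'exists';
    certified analysis bounds it soundly instead."""
    hits = 0
    for _ in range(n_weights):
        W = W_mean + W_std * rng.normal(size=W_mean.shape)  # weight sample
        y = predict(W, x)
        for _ in range(n_points):
            xp = x + rng.uniform(-eps, eps, size=x.shape)
            if predict(W, xp) != y:
                hits += 1
                break
    return hits / n_weights

p = robustness_probability(np.array([1.0, 0.0]), eps=0.3)
```

Averaging over weight samples is what makes the measure probabilistic: each posterior draw either does or does not admit an adversarial point, and the measure is the posterior mass of draws that do.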
1 code implementation • 17 Sep 2018 • Luca Cardelli, Marta Kwiatkowska, Luca Laurenti, Andrea Patane
Bayesian inference and Gaussian processes are widely used in applications ranging from robotics and control to biological systems.