Search Results for author: Andrea Patane

Found 17 papers, 14 papers with code

Uncertainty Propagation in Stochastic Systems via Mixture Models with Error Quantification

no code implementations • 22 Mar 2024 • Eduardo Figueiredo, Andrea Patane, Morteza Lahijanian, Luca Laurenti

Uncertainty propagation in non-linear dynamical systems has become a key problem in various fields including control theory and machine learning.

Adversarial Robustness Certification for Bayesian Neural Networks

1 code implementation • 23 Jun 2023 • Matthew Wicker, Andrea Patane, Luca Laurenti, Marta Kwiatkowska

We study the problem of certifying the robustness of Bayesian neural networks (BNNs) to adversarial input perturbations.

Adversarial Robustness • Collision Avoidance • +2

BNN-DP: Robustness Certification of Bayesian Neural Networks via Dynamic Programming

1 code implementation • 19 Jun 2023 • Steven Adams, Andrea Patane, Morteza Lahijanian, Luca Laurenti

In this paper, we introduce BNN-DP, an efficient algorithmic framework for the analysis of adversarial robustness of Bayesian Neural Networks (BNNs).

Adversarial Robustness • Computational Efficiency • +1

On the Robustness of Bayesian Neural Networks to Adversarial Attacks

2 code implementations • 13 Jul 2022 • Luca Bortolussi, Ginevra Carbone, Luca Laurenti, Andrea Patane, Guido Sanguinetti, Matthew Wicker

Despite significant efforts, both practical and theoretical, training deep learning models robust to adversarial attacks is still an open problem.

Variational Inference

Individual Fairness Guarantees for Neural Networks

1 code implementation • 11 May 2022 • Elias Benussi, Andrea Patane, Matthew Wicker, Luca Laurenti, Marta Kwiatkowska

We consider the problem of certifying the individual fairness (IF) of feed-forward neural networks (NNs).

Benchmarking • Fairness
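The individual fairness (IF) property being certified can be sketched in generic notation; the symbols below are illustrative, not necessarily the paper's exact formulation:

```latex
% Illustrative IF statement: a network f is individually fair w.r.t. a
% similarity metric d_fair and thresholds (\epsilon, \delta) if inputs
% that are similar under d_fair receive outputs within \delta.
\forall x, x' \in \mathbb{R}^n :\quad
d_{\mathrm{fair}}(x, x') \le \epsilon
\;\Longrightarrow\;
\lvert f(x) - f(x') \rvert \le \delta
```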

Certification of Iterative Predictions in Bayesian Neural Networks

1 code implementation • 21 May 2021 • Matthew Wicker, Luca Laurenti, Andrea Patane, Nicola Paoletti, Alessandro Abate, Marta Kwiatkowska

We consider the problem of computing reach-avoid probabilities for iterative predictions made with Bayesian neural network (BNN) models.

Reinforcement Learning (RL)
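A reach-avoid probability of this kind can be sketched in generic notation; the goal set, unsafe set, and horizon below are illustrative placeholders:

```latex
% Illustrative reach-avoid probability over horizon N for a trajectory
% x_0, x_1, \dots obtained by iterating a BNN dynamics model: reach the
% goal set G at some step k while avoiding the unsafe set U up to k.
P_{\mathrm{reach\text{-}avoid}}
= \Pr\!\left[\exists\, k \le N :\; x_k \in \mathcal{G}
  \;\wedge\; \forall\, j \le k,\; x_j \notin \mathcal{U}\right]
```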

Adversarial Robustness Guarantees for Gaussian Processes

1 code implementation • 7 Apr 2021 • Andrea Patane, Arno Blaas, Luca Laurenti, Luca Cardelli, Stephen Roberts, Marta Kwiatkowska

Gaussian processes (GPs) enable principled computation of model uncertainty, making them attractive for safety-critical applications.

Adversarial Robustness • Gaussian Processes

Bayesian Inference with Certifiable Adversarial Robustness

1 code implementation • 10 Feb 2021 • Matthew Wicker, Luca Laurenti, Andrea Patane, Zhoutong Chen, Zheng Zhang, Marta Kwiatkowska

We consider adversarial training of deep neural networks through the lens of Bayesian learning, and present a principled framework for adversarial training of Bayesian Neural Networks (BNNs) with certifiable guarantees.

Adversarial Robustness • Bayesian Inference

Probabilistic Safety for Bayesian Neural Networks

1 code implementation • 21 Apr 2020 • Matthew Wicker, Luca Laurenti, Andrea Patane, Marta Kwiatkowska

We study probabilistic safety for Bayesian Neural Networks (BNNs) under adversarial input perturbations.

Collision Avoidance

Robustness of Bayesian Neural Networks to Gradient-Based Attacks

1 code implementation • NeurIPS 2020 • Ginevra Carbone, Matthew Wicker, Luca Laurenti, Andrea Patane, Luca Bortolussi, Guido Sanguinetti

Vulnerability to adversarial attacks is one of the principal hurdles to the adoption of deep learning in safety-critical applications.

Variational Inference

Adversarial Robustness Guarantees for Classification with Gaussian Processes

1 code implementation • 28 May 2019 • Arno Blaas, Andrea Patane, Luca Laurenti, Luca Cardelli, Marta Kwiatkowska, Stephen Roberts

We apply our method to investigate the robustness of GPC models on a 2D synthetic dataset, the SPAM dataset, and a subset of the MNIST dataset, providing comparisons of different GPC training techniques and showing how our method can be used for interpretability analysis.

Adversarial Robustness • Classification • +2

Statistical Guarantees for the Robustness of Bayesian Neural Networks

1 code implementation • 5 Mar 2019 • Luca Cardelli, Marta Kwiatkowska, Luca Laurenti, Nicola Paoletti, Andrea Patane, Matthew Wicker

We introduce a probabilistic robustness measure for Bayesian Neural Networks (BNNs), defined as the probability that, given a test point, there exists a point within a bounded set such that the BNN prediction differs between the two.

General Classification • Image Classification
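A crude Monte Carlo approximation of such a probabilistic robustness measure can be sketched as follows. The toy linear "posterior" and the random-perturbation search are illustrative stand-ins, not the paper's method (which attaches statistical guarantees rather than reporting a plain empirical estimate):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_bnn_predictor(rng):
    """Hypothetical stand-in for drawing one network from a BNN
    posterior: a linear classifier with randomly perturbed weights."""
    w = rng.normal(loc=[1.0, -1.0], scale=0.1)
    return lambda x: int(x @ w > 0.0)

def prediction_flips(predict, x, eps, n_perturb, rng):
    """Approximate 'there exists a point within the eps-ball where the
    prediction differs' by testing random perturbations (this only
    lower-bounds the true existential check)."""
    y = predict(x)
    for _ in range(n_perturb):
        delta = rng.uniform(-eps, eps, size=x.shape)
        if predict(x + delta) != y:
            return True
    return False

def mc_robustness_probability(x, eps, n_samples=200, n_perturb=64, rng=rng):
    """Monte Carlo estimate of the probabilistic robustness measure:
    the posterior probability that some point in the eps-ball around
    x changes the sampled network's prediction."""
    flips = sum(
        prediction_flips(sample_bnn_predictor(rng), x, eps, n_perturb, rng)
        for _ in range(n_samples)
    )
    return flips / n_samples

p_far = mc_robustness_probability(np.array([5.0, 0.0]), eps=0.1)
p_near = mc_robustness_probability(np.array([0.05, 0.0]), eps=0.1)
print(p_far, p_near)  # points near the decision boundary flip far more often
```

The estimate is only as good as the inner search: random perturbations can miss adversarial points, which is precisely why certified approaches bound the existential check soundly instead.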

Robustness Guarantees for Bayesian Inference with Gaussian Processes

1 code implementation • 17 Sep 2018 • Luca Cardelli, Marta Kwiatkowska, Luca Laurenti, Andrea Patane

Bayesian inference and Gaussian processes are widely used in applications ranging from robotics and control to biological systems.

Bayesian Inference • Gaussian Processes
