1 code implementation • 3 Oct 2023 • Matthew Wicker, Luca Laurenti, Andrea Patane, Nicola Paoletti, Alessandro Abate, Marta Kwiatkowska
The computed lower bounds on the reach-avoid probability provide safety certification for the given policy and BNN model.
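In notation that is illustrative rather than the paper's, the quantity being bounded is the reach-avoid probability of a policy $\pi$ under BNN dynamics $f^{\mathbf{w}}$ with posterior $p(\mathbf{w} \mid \mathcal{D})$:

$$P_{\text{reach-avoid}}(x_0) = \Pr\big(\exists\, k \le N:\ x_k \in G \ \text{and}\ \forall j < k:\ x_j \in S\big), \qquad x_{j+1} \sim f^{\mathbf{w}}(x_j, \pi(x_j)),\ \ \mathbf{w} \sim p(\mathbf{w} \mid \mathcal{D}),$$

where $G$ is the goal set, $S$ the safe set, and $N$ the horizon. The policy is certified at $x_0$ whenever a computed lower bound on $P_{\text{reach-avoid}}(x_0)$ meets the required safety threshold.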
1 code implementation • 23 Jun 2023 • Matthew Wicker, Andrea Patane, Luca Laurenti, Marta Kwiatkowska
We study the problem of certifying the robustness of Bayesian neural networks (BNNs) to adversarial input perturbations.
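The paper's certification combines bound propagation with bounds on posterior probability mass; the sketch below is only a simplified illustration of the underlying question. It samples posterior weights and uses interval bound propagation (IBP) to check whether each sampled two-layer ReLU network provably keeps its prediction on an $\ell_\infty$ ball. All names and shapes are my assumptions, and the Monte Carlo estimate is not a certified bound.

```python
import numpy as np

def interval_affine(lo, hi, W, b):
    """Propagate the box [lo, hi] through the affine map x -> W @ x + b."""
    mid, rad = (lo + hi) / 2.0, (hi - lo) / 2.0
    new_mid = W @ mid + b
    new_rad = np.abs(W) @ rad
    return new_mid - new_rad, new_mid + new_rad

def ibp_certified(x, eps, weights, label):
    """True iff a two-layer ReLU net provably predicts `label` on the
    whole l_inf ball of radius eps around x (interval bound propagation)."""
    (W1, b1), (W2, b2) = weights
    lo, hi = interval_affine(x - eps, x + eps, W1, b1)
    lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)  # ReLU is monotone
    lo, hi = interval_affine(lo, hi, W2, b2)
    return all(lo[label] > hi[j] for j in range(len(lo)) if j != label)

def mc_robustness(x, eps, label, posterior_samples):
    """Fraction of posterior weight samples whose network is IBP-certified
    at x; a plain Monte Carlo estimate, NOT a certified lower bound."""
    hits = sum(ibp_certified(x, eps, w, label) for w in posterior_samples)
    return hits / len(posterior_samples)
```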
1 code implementation • 21 Apr 2023 • Alice Doherty, Matthew Wicker, Luca Laurenti, Andrea Patane
We study Individual Fairness (IF) for Bayesian neural networks (BNNs).
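For a BNN, IF is naturally a probabilistic property. In illustrative notation, with a fairness metric $d_{\text{fair}}$ (e.g., a weighted metric that down-weights protected attributes):

$$\Pr_{\mathbf{w} \sim p(\mathbf{w} \mid \mathcal{D})}\Big(\forall x':\ d_{\text{fair}}(x, x') \le \epsilon \implies |f^{\mathbf{w}}(x) - f^{\mathbf{w}}(x')| \le \delta\Big) \ge 1 - \gamma.$$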
1 code implementation • NeurIPS 2023 • Juyeon Heo, Vihari Piratla, Matthew Wicker, Adrian Weller
Machine learning from explanations (MLX) is an approach to learning that uses human-provided explanations of relevant or irrelevant features for each input to ensure that model predictions are right for the right reasons.
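One standard way to instantiate MLX (in the "right for the right reasons" spirit; a generic sketch, not necessarily the paper's exact objective) is to penalize the input gradient of the task loss on features the explanation marks irrelevant:

```python
import torch
import torch.nn.functional as F

def mlx_loss(model, x, y, irrelevant_mask, lam=1.0):
    """Cross-entropy plus a penalty on input gradients over features the
    human explanation marks irrelevant (mask is 1 there, 0 elsewhere)."""
    x = x.clone().requires_grad_(True)
    task_loss = F.cross_entropy(model(x), y)
    grad_x, = torch.autograd.grad(task_loss, x, create_graph=True)
    penalty = (irrelevant_mask * grad_x).pow(2).sum()
    return task_loss + lam * penalty
```

During training, `mlx_loss` replaces the plain cross-entropy loss; `create_graph=True` keeps the penalty differentiable so it can be optimized.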
1 code implementation • 16 Dec 2022 • Matthew Wicker, Juyeon Heo, Luca Costabello, Adrian Weller
Post-hoc explanation methods are used with the intent of providing insights about neural networks and are sometimes said to help engender trust in their outputs.
1 code implementation • 31 Oct 2022 • Emanuele La Malfa, Matthew Wicker, Marta Kwiatkowska
In this paper, focusing on the ability of language models to represent syntax, we propose a framework to assess the consistency and robustness of linguistic representations.
2 code implementations • 13 Jul 2022 • Luca Bortolussi, Ginevra Carbone, Luca Laurenti, Andrea Patane, Guido Sanguinetti, Matthew Wicker
Despite significant efforts, both practical and theoretical, training deep learning models robust to adversarial attacks is still an open problem.
1 code implementation • 11 May 2022 • Elias Benussi, Andrea Patane, Matthew Wicker, Luca Laurenti, Marta Kwiatkowska
We consider the problem of certifying the individual fairness (IF) of feed-forward neural networks (NNs).
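Concretely, the $\epsilon$-$\delta$ notion of IF being certified can be written (notation illustrative) as

$$\forall x, x':\quad d_{\text{fair}}(x, x') \le \epsilon \ \implies\ |f_\theta(x) - f_\theta(x')| \le \delta,$$

where $d_{\text{fair}}$ is a similarity metric over individuals that typically ignores or down-weights protected attributes.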
no code implementations • 29 Apr 2022 • Benjie Wang, Matthew Wicker, Marta Kwiatkowska
Bayesian structure learning allows one to capture uncertainty over the causal directed acyclic graph (DAG) responsible for generating given data.
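The object of interest is the posterior over graph structures,

$$p(G \mid \mathcal{D}) = \frac{p(\mathcal{D} \mid G)\, p(G)}{\sum_{G'} p(\mathcal{D} \mid G')\, p(G')},$$

where $G$ ranges over DAGs; the super-exponential number of DAGs is what makes exact inference intractable and motivates approximate schemes.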
1 code implementation • 21 May 2021 • Matthew Wicker, Luca Laurenti, Andrea Patane, Nicola Paoletti, Alessandro Abate, Marta Kwiatkowska
We consider the problem of computing reach-avoid probabilities for iterative predictions made with Bayesian neural network (BNN) models.
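The paper computes certified lower bounds on this probability via bound propagation. Purely to illustrate the quantity being estimated (every name below is an assumption, and this naive Monte Carlo estimator yields no certificate):

```python
import numpy as np

def mc_reach_avoid(x0, policy, sample_bnn_step, in_goal, in_safe,
                   horizon=20, n_samples=1000):
    """Naive Monte Carlo estimate of the reach-avoid probability: reach
    the goal set within `horizon` steps while never leaving the safe set.
    `sample_bnn_step(x, u)` draws a next state from posterior-sampled BNN
    dynamics. This is an estimate only, not a certified lower bound."""
    successes = 0
    for _ in range(n_samples):
        x = np.asarray(x0, dtype=float)
        for _ in range(horizon):
            if in_goal(x):
                successes += 1
                break
            if not in_safe(x):
                break
            x = sample_bnn_step(x, policy(x))
    return successes / n_samples
```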
1 code implementation • 10 Feb 2021 • Matthew Wicker, Luca Laurenti, Andrea Patane, Zhoutong Chen, Zheng Zhang, Marta Kwiatkowska
We consider adversarial training of deep neural networks through the lens of Bayesian learning, and present a principled framework for adversarial training of Bayesian Neural Networks (BNNs) with certifiable guarantees.
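Abstractly, and in illustrative notation, certifiably robust training of a BNN with approximate posterior $q_\phi$ targets an expected worst-case loss,

$$\min_{\phi}\ \mathbb{E}_{\mathbf{w} \sim q_\phi}\Big[\max_{x' \in B_\epsilon(x)} \mathcal{L}\big(f^{\mathbf{w}}(x'), y\big)\Big],$$

with the inner maximum replaced by a tractable upper bound (e.g., from bound propagation) so that guarantees survive training.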
1 code implementation • AABI Symposium 2021 • Matthew Yuan, Matthew Wicker, Luca Laurenti
In particular, we consider genetic algorithms, surrogate models, and zeroth-order optimization methods, adapting them to the goal of finding adversarial examples for BNNs.
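As a concrete member of the zeroth-order family (a generic finite-difference sketch, not the paper's specific attack), the gradient can be estimated from loss queries alone:

```python
import numpy as np

def zeroth_order_attack(loss_fn, x, eps=0.1, step=0.01, sigma=1e-3,
                        iters=100, n_dirs=20):
    """Gradient-free attack: estimate the gradient of `loss_fn` from
    queries alone via random-direction finite differences, then take
    signed ascent steps inside the l_inf ball of radius eps around x."""
    rng = np.random.default_rng(0)
    x0, x_adv = x.copy(), x.copy()
    for _ in range(iters):
        grad_est = np.zeros_like(x_adv)
        for _ in range(n_dirs):
            u = rng.standard_normal(x_adv.shape)
            diff = loss_fn(x_adv + sigma * u) - loss_fn(x_adv - sigma * u)
            grad_est += (diff / (2.0 * sigma)) * u
        grad_est /= n_dirs
        x_adv = np.clip(x_adv + step * np.sign(grad_est), x0 - eps, x0 + eps)
    return x_adv
```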
1 code implementation • 21 Apr 2020 • Matthew Wicker, Luca Laurenti, Andrea Patane, Marta Kwiatkowska
We study probabilistic safety for Bayesian Neural Networks (BNNs) under adversarial input perturbations.
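In illustrative notation, probabilistic safety at an input $x$ is the posterior probability that the sampled network's decision $\hat{f}^{\mathbf{w}}$ is constant on a perturbation ball:

$$P_{\text{safe}}(x, \epsilon) = \Pr_{\mathbf{w} \sim p(\mathbf{w} \mid \mathcal{D})}\Big(\forall x' \in B_\epsilon(x):\ \hat{f}^{\mathbf{w}}(x') = \hat{f}^{\mathbf{w}}(x)\Big).$$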
1 code implementation • NeurIPS 2020 • Ginevra Carbone, Matthew Wicker, Luca Laurenti, Andrea Patane, Luca Bortolussi, Guido Sanguinetti
Vulnerability to adversarial attacks is one of the principal hurdles to the adoption of deep learning in safety-critical applications.
no code implementations • 25 Sep 2019 • Luca Laurenti, Andrea Patane, Matthew Wicker, Luca Bortolussi, Luca Cardelli, Marta Kwiatkowska
We investigate global adversarial robustness guarantees for machine learning models.
no code implementations • 21 Sep 2019 • Rhiannon Michelmore, Matthew Wicker, Luca Laurenti, Luca Cardelli, Yarin Gal, Marta Kwiatkowska
Deep neural network controllers for autonomous driving have recently benefited from significant performance improvements, and have begun to be deployed in the real world.
1 code implementation • CVPR 2019 • Matthew Wicker, Marta Kwiatkowska
Understanding the spatial arrangement and nature of real-world objects is of paramount importance to many complex engineering tasks, including autonomous navigation.
1 code implementation • 5 Mar 2019 • Luca Cardelli, Marta Kwiatkowska, Luca Laurenti, Nicola Paoletti, Andrea Patane, Matthew Wicker
We introduce a probabilistic robustness measure for Bayesian Neural Networks (BNNs), defined as the probability that, given a test point, there exists a point within a bounded set such that the BNN prediction differs between the two.
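That definition translates directly into (notation mine)

$$P(x) = \Pr_{\mathbf{w} \sim p(\mathbf{w} \mid \mathcal{D})}\Big(\exists\, x' \in B(x):\ \hat{f}^{\mathbf{w}}(x') \ne \hat{f}^{\mathbf{w}}(x)\Big),$$

so lower values indicate a more robust BNN; it is the complement of the probabilistic safety notion above.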
1 code implementation • 10 Jul 2018 • Min Wu, Matthew Wicker, Wenjie Ruan, Xiaowei Huang, Marta Kwiatkowska
In this paper, we study two variants of pointwise robustness: the maximum safe radius problem, which for a given input sample computes the minimum distance to an adversarial example, and the feature robustness problem, which aims to quantify the robustness of individual features to adversarial perturbations.
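In symbols (notation illustrative), the maximum safe radius of an input $x$ under decision function $\hat{f}$ is

$$\text{MSR}(x) = \inf_{x'} \big\{ \|x' - x\| : \hat{f}(x') \ne \hat{f}(x) \big\},$$

so every perturbation of norm strictly below $\text{MSR}(x)$ is provably safe.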
1 code implementation • 21 Jan 2018 • Liang Ding, Di Chang, Russell Malmberg, Aaron Martinez, David Robinson, Matthew Wicker, Hongfei Yan, Liming Cai
The seminal work of Chow and Liu (1968) shows that approximating a finite probabilistic system by a Markov tree achieves minimum information loss when the tree topology is a maximum spanning tree over pairwise mutual information.
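The Chow-Liu construction itself is compact: estimate pairwise mutual information from data, then build a maximum spanning tree over it. A minimal sketch for discrete data (function names are mine):

```python
import numpy as np
from itertools import combinations

def mutual_information(xi, xj):
    """Empirical mutual information between two discrete data columns."""
    mi = 0.0
    for a in np.unique(xi):
        for b in np.unique(xj):
            p_ab = np.mean((xi == a) & (xj == b))
            if p_ab > 0:
                mi += p_ab * np.log(p_ab / (np.mean(xi == a) * np.mean(xj == b)))
    return mi

def chow_liu_tree(data):
    """Maximum spanning tree over pairwise MI (Kruskal + union-find)."""
    d = data.shape[1]
    edges = sorted(((mutual_information(data[:, i], data[:, j]), i, j)
                    for i, j in combinations(range(d), 2)), reverse=True)
    parent = list(range(d))
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    tree = []
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:            # adding (i, j) keeps the graph acyclic
            parent[ri] = rj
            tree.append((i, j, w))
    return tree                 # the d - 1 edges of the optimal Markov tree
```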
no code implementations • 21 Oct 2017 • Matthew Wicker, Xiaowei Huang, Marta Kwiatkowska
In this paper, we focus on image classifiers and propose a feature-guided black-box approach to testing the safety of deep neural networks that requires no knowledge of the network's internals.
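The sketch below is a deliberately simplified stand-in that shows only the black-box premise (label queries, no gradients or internals); it is not the paper's feature-guided algorithm, and every name in it is an assumption:

```python
import numpy as np

def black_box_patch_search(predict, image, patch=3, trials=500, budget=0.2):
    """Query-only search for a misclassifying input: repeatedly jitter
    small pixel patches, keeping the total l_inf perturbation within
    `budget`. Uses only the model's predicted label, never gradients."""
    rng = np.random.default_rng(0)
    original = predict(image)
    x = image.copy()
    h, w = image.shape[:2]
    for _ in range(trials):
        i = rng.integers(0, h - patch + 1)
        j = rng.integers(0, w - patch + 1)
        candidate = x.copy()
        region = candidate[i:i + patch, j:j + patch]
        candidate[i:i + patch, j:j + patch] = region + rng.uniform(
            -budget, budget, size=region.shape)
        candidate = np.clip(candidate, image - budget, image + budget)
        candidate = np.clip(candidate, 0.0, 1.0)
        if predict(candidate) != original:
            return candidate  # candidate adversarial example
        x = candidate
    return None  # no label change found within the query budget
```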