Search Results for author: Matthew Wicker

Found 21 papers, 17 papers with code

Adversarial Robustness Certification for Bayesian Neural Networks

1 code implementation • 23 Jun 2023 • Matthew Wicker, Andrea Patane, Luca Laurenti, Marta Kwiatkowska

We study the problem of certifying the robustness of Bayesian neural networks (BNNs) to adversarial input perturbations.

Adversarial Robustness, Collision Avoidance, +2 more

Use Perturbations when Learning from Explanations

1 code implementation • NeurIPS 2023 • Juyeon Heo, Vihari Piratla, Matthew Wicker, Adrian Weller

Machine learning from explanations (MLX) is an approach to learning that uses human-provided explanations of relevant or irrelevant features for each input to ensure that model predictions are right for the right reasons.
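
As a hedged illustration of the MLX idea described above (not necessarily this paper's method), one common formulation penalizes the model's input gradients on features a human has marked irrelevant. The helper below is a minimal PyTorch-style sketch; the names `model`, `irrelevant_mask`, and `lambda_expl` are illustrative:

```python
import torch
import torch.nn.functional as F

def mlx_loss(model, x, y, irrelevant_mask, lambda_expl=1.0):
    """Task loss plus a penalty on input gradients over features a
    human marked irrelevant (a generic MLX-style regularizer, not
    necessarily the method used in the paper)."""
    x = x.clone().requires_grad_(True)
    logits = model(x)
    task_loss = F.cross_entropy(logits, y)
    # Input gradients of the task loss w.r.t. the input features.
    grads, = torch.autograd.grad(task_loss, x, create_graph=True)
    # Penalize attribution mass on features marked irrelevant.
    expl_penalty = (grads * irrelevant_mask).pow(2).sum()
    return task_loss + lambda_expl * expl_penalty
```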

Robust Explanation Constraints for Neural Networks

1 code implementation • 16 Dec 2022 • Matthew Wicker, Juyeon Heo, Luca Costabello, Adrian Weller

Post-hoc explanation methods are used with the intent of providing insights about neural networks and are sometimes said to help engender trust in their outputs.
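
For context on what such post-hoc explanations look like in practice, here is a minimal input-gradient saliency map, one of the simplest explanation methods; it is an illustrative sketch, not the robust-constraint scheme this paper proposes:

```python
import torch

def saliency_map(model, x, target_class):
    """Input-gradient saliency: how sensitive the target logit is to
    each input feature. A basic post-hoc explanation."""
    x = x.clone().requires_grad_(True)
    score = model(x)[..., target_class].sum()
    score.backward()
    return x.grad.abs()
```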

Emergent Linguistic Structures in Neural Networks are Fragile

1 code implementation • 31 Oct 2022 • Emanuele La Malfa, Matthew Wicker, Marta Kwiatkowska

In this paper, focusing on the ability of language models to represent syntax, we propose a framework to assess the consistency and robustness of linguistic representations.

Language Modelling

On the Robustness of Bayesian Neural Networks to Adversarial Attacks

2 code implementations • 13 Jul 2022 • Luca Bortolussi, Ginevra Carbone, Luca Laurenti, Andrea Patane, Guido Sanguinetti, Matthew Wicker

Despite significant efforts, both practical and theoretical, training deep learning models robust to adversarial attacks is still an open problem.

Variational Inference

Individual Fairness Guarantees for Neural Networks

1 code implementation • 11 May 2022 • Elias Benussi, Andrea Patane, Matthew Wicker, Luca Laurenti, Marta Kwiatkowska

We consider the problem of certifying the individual fairness (IF) of feed-forward neural networks (NNs).

Benchmarking, Fairness
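
Individual fairness is commonly formalized along the following lines; this is a standard statement given for context, with the similarity metric d_fair and the thresholds ε, δ as problem-specific choices rather than this paper's exact notation:

```latex
% A network f is individually fair if similar individuals
% receive similar outputs:
\forall x, x' :\; d_{\mathrm{fair}}(x, x') \le \epsilon
\;\Longrightarrow\; \lVert f(x) - f(x') \rVert \le \delta
```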

Tractable Uncertainty for Structure Learning

no code implementations • 29 Apr 2022 • Benjie Wang, Matthew Wicker, Marta Kwiatkowska

Bayesian structure learning allows one to capture uncertainty over the causal directed acyclic graph (DAG) responsible for generating given data.
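
Concretely, the quantity over which uncertainty is captured is the posterior over DAG structures; the standard form below is a sketch for context, not the paper's tractable representation:

```latex
% Posterior over DAG structures G given observed data D
% (the normalizing sum over all DAGs is what makes exact
% Bayesian structure learning hard):
p(G \mid D) \;=\; \frac{p(D \mid G)\, p(G)}{\sum_{G'} p(D \mid G')\, p(G')}
```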

Certification of Iterative Predictions in Bayesian Neural Networks

1 code implementation • 21 May 2021 • Matthew Wicker, Luca Laurenti, Andrea Patane, Nicola Paoletti, Alessandro Abate, Marta Kwiatkowska

We consider the problem of computing reach-avoid probabilities for iterative predictions made with Bayesian neural network (BNN) models.

Reinforcement Learning (RL)
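
A reach-avoid probability of the kind considered above can be written schematically as below; the goal region G, safe region S, horizon N, and trajectory notation are generic assumptions, not the paper's exact formulation:

```latex
% Probability, under the BNN posterior, that a trajectory
% (x_0, x_1, ..., x_N) reaches the goal G while remaining
% in the safe region S until it does:
P_{\mathrm{reach\text{-}avoid}} \;=\;
\Pr_{w \sim p(w \mid \mathcal{D})}\!\left[
\exists\, k \le N :\; x_k \in G \;\wedge\; \forall\, j < k :\; x_j \in S
\right]
```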

Bayesian Inference with Certifiable Adversarial Robustness

1 code implementation • 10 Feb 2021 • Matthew Wicker, Luca Laurenti, Andrea Patane, Zhoutong Chen, Zheng Zhang, Marta Kwiatkowska

We consider adversarial training of deep neural networks through the lens of Bayesian learning, and present a principled framework for adversarial training of Bayesian Neural Networks (BNNs) with certifiable guarantees.

Adversarial Robustness, Bayesian Inference
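
One way to make "adversarial training with certifiable guarantees" concrete, sketched loosely here as an assumption rather than the paper's exact objective, is to replace each likelihood term with its worst case over an ε-ball around the input:

```latex
% Worst-case (robust) log-likelihood over an \epsilon-ball;
% certified bounds stand in for the inner minimization in practice:
\log p_{\mathrm{rob}}(y \mid x, w) \;=\;
\min_{\lVert x' - x \rVert_\infty \le \epsilon} \log p(y \mid x', w)
```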

Gradient-Free Adversarial Attacks for Bayesian Neural Networks

1 code implementation • AABI Symposium 2021 • Matthew Yuan, Matthew Wicker, Luca Laurenti

In particular, we consider genetic algorithms, surrogate models, and zeroth-order optimization methods, and adapt them to the goal of finding adversarial examples for BNNs.

Adversarial Robustness, Bayesian Inference
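
As a minimal sketch of the gradient-free setting above (plain random search, far simpler than the genetic, surrogate, and zeroth-order methods the paper adapts), the function below assumes a hypothetical `predict` callable mapping an input to a class label:

```python
import numpy as np

def random_search_attack(predict, x, y_true, eps=0.1, n_trials=1000, rng=None):
    """Gradient-free attack sketch: sample random L-infinity
    perturbations and keep any that flips the predicted label.
    Illustrative only, not the paper's attack."""
    rng = rng or np.random.default_rng(0)
    for _ in range(n_trials):
        delta = rng.uniform(-eps, eps, size=x.shape)
        x_adv = np.clip(x + delta, 0.0, 1.0)  # assume inputs in [0, 1]
        if predict(x_adv) != y_true:
            return x_adv                       # adversarial example found
    return None                                # no counterexample found
```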

Probabilistic Safety for Bayesian Neural Networks

1 code implementation • 21 Apr 2020 • Matthew Wicker, Luca Laurenti, Andrea Patane, Marta Kwiatkowska

We study probabilistic safety for Bayesian Neural Networks (BNNs) under adversarial input perturbations.

Collision Avoidance
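
Probabilistic safety in this setting can be stated schematically as the posterior probability that an entire input region T is mapped into a safe output set S; the notation below is ours, for context:

```latex
% Probability, over the BNN posterior, that every input in the
% region T is mapped into the safe set S:
P_{\mathrm{safe}}(T, S) \;=\;
\Pr_{w \sim p(w \mid \mathcal{D})}\!\left[\forall\, x \in T :\; f^{w}(x) \in S\right]
```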

Robustness of Bayesian Neural Networks to Gradient-Based Attacks

1 code implementation • NeurIPS 2020 • Ginevra Carbone, Matthew Wicker, Luca Laurenti, Andrea Patane, Luca Bortolussi, Guido Sanguinetti

Vulnerability to adversarial attacks is one of the principal hurdles to the adoption of deep learning in safety-critical applications.

Variational Inference

Uncertainty Quantification with Statistical Guarantees in End-to-End Autonomous Driving Control

no code implementations • 21 Sep 2019 • Rhiannon Michelmore, Matthew Wicker, Luca Laurenti, Luca Cardelli, Yarin Gal, Marta Kwiatkowska

Deep neural network controllers for autonomous driving have recently benefited from significant performance improvements, and have begun deployment in the real world.

Autonomous Driving, Bayesian Inference, +3 more

Robustness of 3D Deep Learning in an Adversarial Setting

1 code implementation • CVPR 2019 • Matthew Wicker, Marta Kwiatkowska

Understanding the spatial arrangement and nature of real-world objects is of paramount importance to many complex engineering tasks, including autonomous navigation.

Autonomous Navigation

Statistical Guarantees for the Robustness of Bayesian Neural Networks

1 code implementation • 5 Mar 2019 • Luca Cardelli, Marta Kwiatkowska, Luca Laurenti, Nicola Paoletti, Andrea Patane, Matthew Wicker

We introduce a probabilistic robustness measure for Bayesian Neural Networks (BNNs), defined as the probability that, given a test point, there exists a point within a bounded set such that the BNN prediction differs between the two.

General Classification, Image Classification
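
Written out, the measure described in the abstract takes roughly the following form, where B_ε(x) denotes the bounded set around the test point (notation ours):

```latex
% Probability, over the BNN posterior, that some point in the
% bounded set around x changes the prediction:
P_{\mathrm{rob}}(x) \;=\;
\Pr_{w \sim p(w \mid \mathcal{D})}\!\left[
\exists\, x' \in B_{\epsilon}(x) :\; f^{w}(x') \neq f^{w}(x)
\right]
```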

A Game-Based Approximate Verification of Deep Neural Networks with Provable Guarantees

1 code implementation • 10 Jul 2018 • Min Wu, Matthew Wicker, Wenjie Ruan, Xiaowei Huang, Marta Kwiatkowska

In this paper, we study two variants of pointwise robustness: the maximum safe radius problem, which for a given input sample computes the minimum distance to an adversarial example, and the feature robustness problem, which aims to quantify the robustness of individual features to adversarial perturbations.

Adversarial Attack, Adversarial Defense, +2 more
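
The maximum safe radius problem mentioned above can be stated compactly (notation ours, for a classifier f and a chosen norm):

```latex
% Maximum safe radius: distance from x to the nearest input
% on which the classifier's prediction changes:
\mathrm{MSR}(x) \;=\; \min_{x'} \lVert x' - x \rVert
\quad \text{s.t.} \quad f(x') \neq f(x)
```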

Efficient Learning of Optimal Markov Network Topology with k-Tree Modeling

1 code implementation • 21 Jan 2018 • Liang Ding, Di Chang, Russell Malmberg, Aaron Martinez, David Robinson, Matthew Wicker, Hongfei Yan, Liming Cai

The seminal work of Chow and Liu (1968) shows that approximation of a finite probabilistic system by Markov trees can achieve the minimum information loss with the topology of a maximum spanning tree.
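
For context, the Chow-Liu result referenced above reduces to building a maximum spanning tree over pairwise mutual information; the sketch below (Prim's algorithm over a precomputed MI matrix) illustrates the tree case, not the k-tree generalization the paper studies:

```python
import numpy as np

def chow_liu_tree(mi):
    """Chow & Liu (1968): the best tree-structured approximation to a
    joint distribution is a maximum spanning tree over pairwise mutual
    information. `mi` is a symmetric (n, n) matrix of MI estimates.
    Returns the tree as a list of (u, v) edges."""
    n = mi.shape[0]
    in_tree = {0}          # grow the tree from node 0 (Prim's algorithm)
    edges = []
    while len(in_tree) < n:
        best = None
        for u in in_tree:
            for v in range(n):
                # Pick the highest-MI edge leaving the current tree.
                if v not in in_tree and (best is None or mi[u, v] > mi[best]):
                    best = (u, v)
        edges.append(best)
        in_tree.add(best[1])
    return edges
```

For real data one would first estimate the pairwise mutual information matrix from samples; k-tree modeling generalizes the tree topology (k = 1) to richer structures of bounded treewidth.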

Feature-Guided Black-Box Safety Testing of Deep Neural Networks

no code implementations • 21 Oct 2017 • Matthew Wicker, Xiaowei Huang, Marta Kwiatkowska

In this paper, we focus on image classifiers and propose a feature-guided black-box approach to test the safety of deep neural networks that requires no such knowledge.

Object Detection, +2 more
