Search Results for author: Marta Kwiatkowska

Found 51 papers, 27 papers with code

Learning Decision Policies with Instrumental Variables through Double Machine Learning

no code implementations14 May 2024 Daqian Shao, Ashkan Soleymani, Francesco Quinzan, Marta Kwiatkowska

A common issue in learning decision-making policies in data-rich settings is spurious correlations in the offline dataset, which can be caused by hidden confounders.

Decision Making regression
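The paper's double-machine-learning method with instrumental variables is not detailed in this excerpt; as background, a minimal classical two-stage least squares sketch illustrates how an instrument removes the bias a hidden confounder induces (all data and coefficients below are synthetic, for illustration only):

```python
import numpy as np

def two_stage_least_squares(z, x, y):
    """Classical 2SLS: regress the treatment x on the instrument z,
    then regress the outcome y on the fitted (instrumented) treatment."""
    Z = np.column_stack([np.ones_like(z), z])
    # Stage 1: predict x from the instrument z.
    x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    X_hat = np.column_stack([np.ones_like(x_hat), x_hat])
    # Stage 2: the slope on the instrumented treatment is the causal effect.
    return np.linalg.lstsq(X_hat, y, rcond=None)[0][1]

rng = np.random.default_rng(0)
n = 50_000
u = rng.normal(size=n)                       # hidden confounder
z = rng.normal(size=n)                       # instrument: affects x, not y directly
x = z + u + rng.normal(size=n)               # treatment, confounded by u
y = 2.0 * x + 3.0 * u + rng.normal(size=n)   # true causal effect is 2.0

naive = np.polyfit(x, y, 1)[0]               # biased toward ~3.0 by the confounder
causal = two_stage_least_squares(z, x, y)    # recovers ~2.0
```

The naive regression conflates the causal effect with the confounder's contribution; the instrument, which influences the treatment but not the outcome directly, isolates the causal part.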

HSVI-based Online Minimax Strategies for Partially Observable Stochastic Games with Neural Perception Mechanisms

no code implementations16 Apr 2024 Rui Yan, Gabriel Santos, Gethin Norman, David Parker, Marta Kwiatkowska

For the partially-informed agent, we propose a continual resolving approach which uses lower bounds, pre-computed offline with heuristic search value iteration (HSVI), instead of opponent counterfactual values.


Uncertainty-Aware Explanations Through Probabilistic Self-Explainable Neural Networks

no code implementations20 Mar 2024 Jon Vadillo, Roberto Santana, Jose A. Lozano, Marta Kwiatkowska

The lack of transparency of Deep Neural Networks continues to be a limitation that severely undermines their reliability and usage in high-stakes applications.


Learning Algorithms for Verification of Markov Decision Processes

no code implementations14 Mar 2024 Tomáš Brázdil, Krishnendu Chatterjee, Martin Chmelik, Vojtěch Forejt, Jan Křetínský, Marta Kwiatkowska, Tobias Meggendorfer, David Parker, Mateusz Ujma

The presented framework focuses on probabilistic reachability, which is a core problem in verification, and is instantiated in two distinct scenarios.

STR-Cert: Robustness Certification for Deep Text Recognition on Deep Learning Pipelines and Vision Transformers

no code implementations28 Nov 2023 Daqian Shao, Lukas Fesser, Marta Kwiatkowska

Robustness certification, which aims to formally certify the predictions of neural networks against adversarial inputs, has become an important tool for safety-critical applications.

Scene Text Recognition

When to Trust AI: Advances and Challenges for Certification of Neural Networks

no code implementations20 Sep 2023 Marta Kwiatkowska, Xiyue Zhang

Artificial intelligence (AI) has been advancing at a fast pace and it is now poised for deployment in a wide range of applications, such as autonomous systems, medical diagnosis and natural language processing.

Medical Diagnosis

Point-based Value Iteration for Neuro-Symbolic POMDPs

no code implementations30 Jun 2023 Rui Yan, Gabriel Santos, Gethin Norman, David Parker, Marta Kwiatkowska

This requires functions over continuous-state beliefs, for which we propose a novel piecewise linear and convex representation (P-PWLC) in terms of polyhedra covering the continuous-state space and value vectors, and extend Bellman backups to this representation.

Collision Avoidance Decision Making +1
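The P-PWLC representation generalizes the classic piecewise linear and convex (alpha-vector) representation of POMDP value functions to continuous-state beliefs. A toy sketch over a discrete 3-state belief shows the basic idea, with hypothetical value vectors:

```python
import numpy as np

# A PWLC value function over beliefs is the maximum over a finite set of
# value vectors ("alpha-vectors"); the paper's P-PWLC representation extends
# this with polyhedra covering a continuous state space.
alpha_vectors = np.array([
    [1.0, 0.0, 0.0],   # hypothetical value vectors, one per conditional plan
    [0.0, 1.0, 0.5],
    [0.3, 0.3, 0.9],
])

def value(belief):
    """PWLC value of a belief: the best alpha-vector's expected value."""
    return float(np.max(alpha_vectors @ belief))

b = np.array([0.2, 0.5, 0.3])   # a belief: distribution over the 3 states
v = value(b)                    # max(0.2, 0.65, 0.48) = 0.65
```

Bellman backups in this representation produce a new set of value vectors, preserving piecewise linearity and convexity.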

Adversarial Robustness Certification for Bayesian Neural Networks

1 code implementation23 Jun 2023 Matthew Wicker, Andrea Patane, Luca Laurenti, Marta Kwiatkowska

We study the problem of certifying the robustness of Bayesian neural networks (BNNs) to adversarial input perturbations.

Adversarial Robustness Collision Avoidance +2

Provable Preimage Under-Approximation for Neural Networks (Full Version)

1 code implementation5 May 2023 Xiyue Zhang, Benjie Wang, Marta Kwiatkowska

Neural network verification mainly focuses on local robustness properties, which can be checked by bounding the image (set of outputs) of a given input set.
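Bounding the image of an input set, as mentioned above, is commonly done with interval bound propagation; a minimal sketch for a single ReLU layer (weights and input box hypothetical) computes a sound over-approximation of the layer's outputs:

```python
import numpy as np

def interval_bounds(W, b, lo, hi):
    """Propagate an axis-aligned input box [lo, hi] through x -> relu(Wx + b),
    giving a sound over-approximation of the layer's image."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    out_lo = W_pos @ lo + W_neg @ hi + b   # per-coordinate worst case
    out_hi = W_pos @ hi + W_neg @ lo + b
    return np.maximum(out_lo, 0.0), np.maximum(out_hi, 0.0)

# Hypothetical 2x2 layer and unit input box.
W = np.array([[1.0, -1.0], [2.0, 0.5]])
b = np.array([0.0, -1.0])
lo, hi = np.array([0.0, 0.0]), np.array([1.0, 1.0])
out_lo, out_hi = interval_bounds(W, b, lo, hi)   # [0, 0] and [1, 1.5]
```

This over-approximates the image; preimage analysis, as in the paper, works in the opposite direction and under-approximates the input region mapped into a given output set.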

Sample Efficient Model-free Reinforcement Learning from LTL Specifications with Optimality Guarantees

1 code implementation2 May 2023 Daqian Shao, Marta Kwiatkowska

Linear Temporal Logic (LTL) is widely used to specify high-level objectives for system policies, and it is highly desirable for autonomous systems to learn the optimal policy with respect to such specifications.

reinforcement-learning Reinforcement Learning (RL)

Compositional Probabilistic and Causal Inference using Tractable Circuit Models

1 code implementation17 Apr 2023 Benjie Wang, Marta Kwiatkowska

Probabilistic circuits (PCs) are a class of tractable probabilistic models, which admit efficient inference routines depending on their structural properties.

Causal Inference
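The tractable inference routines mentioned above reduce to a single bottom-up pass when the circuit is smooth and decomposable. A tiny hand-built circuit over two binary variables sketches this (structure and weights hypothetical):

```python
# A probabilistic circuit composes sum nodes (mixtures) and product nodes
# (factorizations). When smooth and decomposable, marginals are computed
# in one bottom-up pass by setting marginalized leaves to 1.

def leaf(p_true):
    # Leaf distribution over one binary variable; None marginalizes it out.
    return lambda v: 1.0 if v is None else (p_true if v else 1.0 - p_true)

x1_a, x1_b = leaf(0.9), leaf(0.2)   # two leaves over X1
x2_a, x2_b = leaf(0.3), leaf(0.8)   # two leaves over X2

def circuit(v1, v2):
    # Sum of two product nodes: 0.6 * (x1_a * x2_a) + 0.4 * (x1_b * x2_b)
    return 0.6 * x1_a(v1) * x2_a(v2) + 0.4 * x1_b(v1) * x2_b(v2)

p_joint = circuit(True, True)   # P(X1=1, X2=1) = 0.226
p_marg = circuit(True, None)    # P(X1=1) = 0.62, X2 marginalized out
```

Marginalization costs the same as a joint query, which is the tractability the excerpt refers to; which queries are tractable depends on the structural properties of the circuit.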

Bayesian Network Models of Causal Interventions in Healthcare Decision Making: Literature Review and Software Evaluation

no code implementations28 Nov 2022 Artem Velikzhanin, Benjie Wang, Marta Kwiatkowska

After describing the search methodology, the selected research papers are briefly reviewed with a view to identifying publicly available models and datasets that are well suited to analysis using the causal interventional analysis software tool developed in Wang B, Lyle C, Kwiatkowska M (2021).

Decision Making

Emergent Linguistic Structures in Neural Networks are Fragile

1 code implementation31 Oct 2022 Emanuele La Malfa, Matthew Wicker, Marta Kwiatkowska

In this paper, focusing on the ability of language models to represent syntax, we propose a framework to assess the consistency and robustness of linguistic representations.

Language Modelling

When are Local Queries Useful for Robust Learning?

no code implementations12 Oct 2022 Pascale Gourdeau, Varun Kanade, Marta Kwiatkowska, James Worrell

We finish by giving robust learning algorithms for halfspaces on $\{0, 1\}^n$ and then obtaining robustness guarantees for halfspaces in $\mathbb{R}^n$ against precision-bounded adversaries.
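For a single halfspace, the minimal perturbation needed to flip the label has a closed form: the margin divided by the dual norm of the weight vector. A small sketch (illustrative, not the paper's learning algorithm) for L2 and L-infinity adversaries:

```python
import numpy as np

def flip_distance(w, b, x, norm="l2"):
    """Minimal perturbation size (in the given norm) that can flip
    sign(w @ x + b): the margin divided by the dual norm of w
    (L2 is self-dual; the dual of L-infinity is L1)."""
    margin = abs(w @ x + b)
    dual = np.linalg.norm(w) if norm == "l2" else np.abs(w).sum()  # "linf"
    return margin / dual

w, b = np.array([3.0, 4.0]), -1.0
x = np.array([1.0, 2.0])               # w @ x + b = 10
d2 = flip_distance(w, b, x, "l2")      # 10 / 5 = 2.0
dinf = flip_distance(w, b, x, "linf")  # 10 / 7
```

A point is then robust to an adversary with budget epsilon exactly when this distance exceeds epsilon, which is the per-point check underlying robustness guarantees for halfspaces.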

Robustness of Unsupervised Representation Learning without Labels

1 code implementation8 Oct 2022 Aleksandar Petrov, Marta Kwiatkowska

When used in adversarial training, they improve most unsupervised robustness measures, including certified robustness.

Representation Learning

Learning Dynamics and Generalization in Reinforcement Learning

no code implementations5 Jun 2022 Clare Lyle, Mark Rowland, Will Dabney, Marta Kwiatkowska, Yarin Gal

Solving a reinforcement learning (RL) problem poses two competing challenges: fitting a potentially discontinuous value function, and generalizing well to new observations.

Policy Gradient Methods reinforcement-learning +1

Sample Complexity Bounds for Robustly Learning Decision Lists against Evasion Attacks

no code implementations12 May 2022 Pascale Gourdeau, Varun Kanade, Marta Kwiatkowska, James Worrell

A fundamental problem in adversarial machine learning is to quantify how much training data is needed in the presence of evasion attacks.

PAC learning

Individual Fairness Guarantees for Neural Networks

1 code implementation11 May 2022 Elias Benussi, Andrea Patane, Matthew Wicker, Luca Laurenti, Marta Kwiatkowska

We consider the problem of certifying the individual fairness (IF) of feed-forward neural networks (NNs).

Benchmarking Fairness

Robustness Guarantees for Credal Bayesian Networks via Constraint Relaxation over Probabilistic Circuits

1 code implementation11 May 2022 Hjalmar Wijk, Benjie Wang, Marta Kwiatkowska

In many domains, worst-case guarantees on the performance (e. g., prediction accuracy) of a decision function subject to distributional shifts and uncertainty about the environment are crucial.

Tractable Uncertainty for Structure Learning

no code implementations29 Apr 2022 Benjie Wang, Matthew Wicker, Marta Kwiatkowska

Bayesian structure learning allows one to capture uncertainty over the causal directed acyclic graph (DAG) responsible for generating given data.

Strategy Synthesis for Zero-Sum Neuro-Symbolic Concurrent Stochastic Games

no code implementations13 Feb 2022 Rui Yan, Gabriel Santos, Gethin Norman, David Parker, Marta Kwiatkowska

Second, we introduce two novel representations for value functions and strategies: constant-piecewise-linear (CON-PWL) and constant-piecewise-constant (CON-PWC), respectively. We then propose Minimax-action-free PI, which extends a recent policy iteration method based on alternating player choices from finite state spaces to Borel state spaces and does not require solving normal-form games.

The King is Naked: on the Notion of Robustness for Natural Language Processing

1 code implementation13 Dec 2021 Emanuele La Malfa, Marta Kwiatkowska

There is growing evidence that the classical notion of adversarial robustness originally introduced for images has been adopted as a de facto standard by a large part of the NLP research community.

Adversarial Robustness

Certifiers Make Neural Networks Vulnerable to Availability Attacks

no code implementations25 Aug 2021 Tobias Lorenz, Marta Kwiatkowska, Mario Fritz

While this is a key concept towards safe and secure AI, we show for the first time that this approach comes with its own security risks, as such fallback strategies can be deliberately triggered by an adversary.

Data Poisoning

A Language for Modeling And Optimizing Experimental Biological Protocols

no code implementations13 Jun 2021 Luca Cardelli, Marta Kwiatkowska, Luca Laurenti

We should ideally start from an integrated description of both the model and the steps carried out to test it, to concurrently analyze uncertainties in model parameters, equipment tolerances, and data collection.

Certification of Iterative Predictions in Bayesian Neural Networks

1 code implementation21 May 2021 Matthew Wicker, Luca Laurenti, Andrea Patane, Nicola Paoletti, Alessandro Abate, Marta Kwiatkowska

We consider the problem of computing reach-avoid probabilities for iterative predictions made with Bayesian neural network (BNN) models.

Reinforcement Learning (RL)
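The paper computes certified bounds on reach-avoid probabilities; as a point of contrast, a plain Monte Carlo estimator for a stochastic iterative system is a few lines (dynamics and sets below are toy stand-ins, not the paper's models):

```python
import numpy as np

def reach_avoid_prob(step, x0, goal, avoid, horizon, n=5000, seed=0):
    """Monte Carlo estimate of P(trajectory reaches `goal` within `horizon`
    steps while never entering `avoid`). Certified methods, as in the paper,
    bound this probability instead of sampling it."""
    rng = np.random.default_rng(seed)
    success = 0
    for _ in range(n):
        x = x0
        for _ in range(horizon):
            x = step(x, rng)
            if avoid(x):
                break            # unsafe: this trajectory fails
            if goal(x):
                success += 1     # reached the goal safely
                break
    return success / n

# Toy dynamics: drift toward 1.0 with small noise (a stand-in for iterated
# predictions of a learned model).
step = lambda x, rng: x + 0.2 * (1.0 - x) + rng.normal(0, 0.02)
p = reach_avoid_prob(step, x0=0.0, goal=lambda x: x > 0.8,
                     avoid=lambda x: x < -0.2, horizon=40)
```

Sampling gives only a statistical estimate; the certification problem is to produce a sound lower bound that holds for every weight in the BNN posterior region considered.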

Provable Guarantees on the Robustness of Decision Rules to Causal Interventions

1 code implementation19 May 2021 Benjie Wang, Clare Lyle, Marta Kwiatkowska

Robustness of decision rules to shifts in the data-generating process is crucial to the successful deployment of decision-making systems.

Decision Making

On Guaranteed Optimal Robust Explanations for NLP Models

1 code implementation8 May 2021 Emanuele La Malfa, Agnieszka Zbrzezny, Rhiannon Michelmore, Nicola Paoletti, Marta Kwiatkowska

We build on abduction-based explanations for machine learning and develop a method for computing local explanations for neural network models in natural language processing (NLP).

Sentiment Analysis

Adversarial Robustness Guarantees for Gaussian Processes

1 code implementation7 Apr 2021 Andrea Patane, Arno Blaas, Luca Laurenti, Luca Cardelli, Stephen Roberts, Marta Kwiatkowska

Gaussian processes (GPs) enable principled computation of model uncertainty, making them attractive for safety-critical applications.

Adversarial Robustness Gaussian Processes

Bayesian Inference with Certifiable Adversarial Robustness

1 code implementation10 Feb 2021 Matthew Wicker, Luca Laurenti, Andrea Patane, Zhoutong Chen, Zheng Zhang, Marta Kwiatkowska

We consider adversarial training of deep neural networks through the lens of Bayesian learning, and present a principled framework for adversarial training of Bayesian Neural Networks (BNNs) with certifiable guarantees.

Adversarial Robustness Bayesian Inference

On the Benefits of Invariance in Neural Networks

no code implementations1 May 2020 Clare Lyle, Mark van der Wilk, Marta Kwiatkowska, Yarin Gal, Benjamin Bloem-Reddy

Many real world data analysis problems exhibit invariant structure, and models that take advantage of this structure have shown impressive empirical performance, particularly in deep learning.

Data Augmentation

Probabilistic Safety for Bayesian Neural Networks

1 code implementation21 Apr 2020 Matthew Wicker, Luca Laurenti, Andrea Patane, Marta Kwiatkowska

We study probabilistic safety for Bayesian Neural Networks (BNNs) under adversarial input perturbations.

Collision Avoidance

Invariant Causal Prediction for Block MDPs

1 code implementation ICML 2020 Amy Zhang, Clare Lyle, Shagun Sodhani, Angelos Filos, Marta Kwiatkowska, Joelle Pineau, Yarin Gal, Doina Precup

Generalization across environments is critical to the successful application of reinforcement learning algorithms to real-world challenges.

Causal Inference Variable Selection

Uncertainty Quantification with Statistical Guarantees in End-to-End Autonomous Driving Control

no code implementations21 Sep 2019 Rhiannon Michelmore, Matthew Wicker, Luca Laurenti, Luca Cardelli, Yarin Gal, Marta Kwiatkowska

Deep neural network controllers for autonomous driving have recently benefited from significant performance improvements, and have begun deployment in the real world.

Autonomous Driving Bayesian Inference +3

On the Hardness of Robust Classification

no code implementations NeurIPS 2019 Pascale Gourdeau, Varun Kanade, Marta Kwiatkowska, James Worrell

However if the adversary is restricted to perturbing $O(\log n)$ bits, then the class of monotone conjunctions can be robustly learned with respect to a general class of distributions (that includes the uniform distribution).

Classification General Classification +2

Adversarial Robustness Guarantees for Classification with Gaussian Processes

1 code implementation28 May 2019 Arno Blaas, Andrea Patane, Luca Laurenti, Luca Cardelli, Marta Kwiatkowska, Stephen Roberts

We apply our method to investigate the robustness of GPC models on a 2D synthetic dataset, the SPAM dataset and a subset of the MNIST dataset, providing comparisons of different GPC training techniques, and show how our method can be used for interpretability analysis.

Adversarial Robustness Classification +2

Robustness of 3D Deep Learning in an Adversarial Setting

1 code implementation CVPR 2019 Matthew Wicker, Marta Kwiatkowska

Understanding the spatial arrangement and nature of real-world objects is of paramount importance to many complex engineering tasks, including autonomous navigation.

Autonomous Navigation

Statistical Guarantees for the Robustness of Bayesian Neural Networks

1 code implementation5 Mar 2019 Luca Cardelli, Marta Kwiatkowska, Luca Laurenti, Nicola Paoletti, Andrea Patane, Matthew Wicker

We introduce a probabilistic robustness measure for Bayesian Neural Networks (BNNs), defined as the probability that, given a test point, there exists a point within a bounded set such that the BNN prediction differs between the two.

General Classification Image Classification
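Statistical guarantees of this kind are typically obtained by Monte Carlo sampling over the posterior combined with a concentration bound. A toy sketch with a hypothetical one-parameter "BNN" (a sampled slope) and a Hoeffding confidence half-width:

```python
import numpy as np

def estimate_robustness(sample_pred, x, eps, n_samples=2000, grid=21, seed=0):
    """Monte Carlo estimate of P(prediction unchanged on [x - eps, x + eps])
    over posterior weight samples, with a 95% Hoeffding half-width.
    The grid search over the ball is a crude stand-in for an inner verifier."""
    rng = np.random.default_rng(seed)
    pts = np.linspace(x - eps, x + eps, grid)
    hits = 0
    for _ in range(n_samples):
        w = rng.normal(1.0, 0.1)        # toy posterior sample of the weight
        preds = sample_pred(w, pts)
        hits += int(np.all(preds == sample_pred(w, np.array([x]))[0]))
    p_hat = hits / n_samples
    half_width = np.sqrt(np.log(2 / 0.05) / (2 * n_samples))   # Hoeffding
    return p_hat, half_width

# Toy "BNN": classify by the sign of w * x.
pred = lambda w, xs: np.sign(w * xs)
p_hat, hw = estimate_robustness(pred, x=1.0, eps=0.5)
```

The half-width shrinks as 1/sqrt(n_samples), so the number of posterior samples directly controls the tightness of the statistical guarantee.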

Evaluating Uncertainty Quantification in End-to-End Autonomous Driving Control

no code implementations16 Nov 2018 Rhiannon Michelmore, Marta Kwiatkowska, Yarin Gal

A rise in popularity of Deep Neural Networks (DNNs), attributed to more powerful GPUs and widely available datasets, has seen them being increasingly used within safety-critical domains.

Autonomous Driving Self-Driving Cars +1

Robustness Guarantees for Bayesian Inference with Gaussian Processes

1 code implementation17 Sep 2018 Luca Cardelli, Marta Kwiatkowska, Luca Laurenti, Andrea Patane

Bayesian inference and Gaussian processes are widely used in applications ranging from robotics and control to biological systems.

Bayesian Inference Gaussian Processes

A Game-Based Approximate Verification of Deep Neural Networks with Provable Guarantees

1 code implementation10 Jul 2018 Min Wu, Matthew Wicker, Wenjie Ruan, Xiaowei Huang, Marta Kwiatkowska

In this paper, we study two variants of pointwise robustness: the maximum safe radius problem, which for a given input sample computes the minimum distance to an adversarial example, and the feature robustness problem, which aims to quantify the robustness of individual features to adversarial perturbations.

Adversarial Attack Adversarial Defense +2
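Given a verifier that can decide robustness at a fixed radius, the maximum safe radius can be bracketed by bisection. A sketch with a hypothetical `is_robust` oracle standing in for such a verifier call:

```python
def max_safe_radius(is_robust, r_hi, tol=1e-6):
    """Bisection bracket for the maximum safe radius: the largest r with
    is_robust(r) true, assuming is_robust is monotone (true below the true
    radius, false above). `is_robust` is a stand-in for a verifier."""
    lo, hi = 0.0, r_hi
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if is_robust(mid):
            lo = mid    # still safe: the true radius lies above mid
        else:
            hi = mid    # an adversarial example exists within mid
    return lo

# Toy oracle: pretend the true maximum safe radius is 0.3.
r = max_safe_radius(lambda radius: radius < 0.3, r_hi=1.0)   # converges to ~0.3
```

Each oracle call is itself expensive (a verification query), so the number of bisection steps, roughly log2(r_hi / tol), is what drives the overall cost.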

Concolic Testing for Deep Neural Networks

2 code implementations30 Apr 2018 Youcheng Sun, Min Wu, Wenjie Ruan, Xiaowei Huang, Marta Kwiatkowska, Daniel Kroening

Concolic testing combines program execution and symbolic analysis to explore the execution paths of a software program.

Global Robustness Evaluation of Deep Neural Networks with Provable Guarantees for the $L_0$ Norm

2 code implementations16 Apr 2018 Wenjie Ruan, Min Wu, Youcheng Sun, Xiaowei Huang, Daniel Kroening, Marta Kwiatkowska

In this paper we focus on the $L_0$ norm and aim to compute, for a trained DNN and an input, the maximal radius of a safe norm ball around the input within which there are no adversarial examples.
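Under the $L_0$ norm the safe ball is a discrete set: radius $k$ contains all inputs differing in at most $k$ coordinates. A brute-force radius-1 safety check over a toy classifier shows the combinatorial flavor (the classifier below is hypothetical, not the paper's setting):

```python
import itertools
import numpy as np

def l0_safe_at_radius_1(classify, x, values):
    """Exhaustively check the L0 ball of radius 1: try every single-coordinate
    substitution and report whether the label ever changes. Real certifiers
    bound this combinatorial search; this is the brute force."""
    label = classify(x)
    for i, v in itertools.product(range(len(x)), values):
        if v == x[i]:
            continue
        x_adv = x.copy()
        x_adv[i] = v
        if classify(x_adv) != label:
            return False, x_adv   # found an adversarial example
    return True, None

# Toy classifier over 4 binary "pixels": label is the majority bit.
classify = lambda x: int(x.sum() >= 2)
safe, adv = l0_safe_at_radius_1(classify, np.array([1, 1, 1, 1]), values=(0, 1))
fragile, adv2 = l0_safe_at_radius_1(classify, np.array([1, 1, 0, 0]), values=(0, 1))
```

The search space grows combinatorially with the radius, which is why the paper computes the maximal safe radius with provable guarantees rather than enumeration.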

Feature-Guided Black-Box Safety Testing of Deep Neural Networks

no code implementations21 Oct 2017 Matthew Wicker, Xiaowei Huang, Marta Kwiatkowska

In this paper, we focus on image classifiers and propose a feature-guided black-box approach to test the safety of deep neural networks that requires no such knowledge.

object-detection Object Detection +2

Safety Verification of Deep Neural Networks

2 code implementations21 Oct 2016 Xiaowei Huang, Marta Kwiatkowska, Sen Wang, Min Wu

Our method works directly with the network code and, in contrast to existing methods, can guarantee that adversarial examples, if they exist, are found for the given region and family of manipulations.

Adversarial Attack Adversarial Defense +3
