Search Results for author: Matthew Mirman

Found 11 papers, 3 papers with code

The Fundamental Limits of Interval Arithmetic for Neural Networks

no code implementations · 9 Dec 2021 · Matthew Mirman, Maximilian Baader, Martin Vechev

Interval analysis (or interval bound propagation, IBP) is a popular technique for verifying and training provably robust deep neural networks, a fundamental challenge in the area of reliable machine learning.
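To make the listed technique concrete: interval bound propagation pushes a box of possible inputs through the network layer by layer, splitting each weight matrix into its positive and negative parts so both bounds stay sound. The sketch below is illustrative only, assuming a toy one-layer ReLU network; it is not the paper's implementation, and all names are made up for the example.

```python
import numpy as np

def ibp_affine(lo, hi, W, b):
    # Interval propagation through y = W x + b: positive weights carry
    # lower bounds to lower bounds; negative weights swap them.
    W_pos, W_neg = np.clip(W, 0, None), np.clip(W, None, 0)
    new_lo = W_pos @ lo + W_neg @ hi + b
    new_hi = W_pos @ hi + W_neg @ lo + b
    return new_lo, new_hi

def ibp_relu(lo, hi):
    # ReLU is monotone, so it maps interval endpoints directly.
    return np.maximum(lo, 0), np.maximum(hi, 0)

# Bound a one-layer network over an L_inf ball of radius eps around x.
x = np.array([0.5, -0.2])
eps = 0.1
W = np.array([[1.0, -1.0], [0.5, 2.0]])
b = np.array([0.0, -0.1])
lo, hi = ibp_affine(x - eps, x + eps, W, b)
lo, hi = ibp_relu(lo, hi)
```

If the resulting output box never crosses a decision boundary, the network is certified robust on that input region; the paper studies the inherent limits of such interval-based certificates.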

Robustness Certification of Generative Models

no code implementations · 30 Apr 2020 · Matthew Mirman, Timon Gehr, Martin Vechev

Generative neural networks can be used to specify continuous transformations between images via latent-space interpolation.
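The latent-space interpolation mentioned here is simple to state: pick two latent codes and decode points along the line between them. A minimal sketch, with a made-up helper name and no actual generator attached:

```python
import numpy as np

def interpolate_latents(z_start, z_end, steps):
    # Linear interpolation in latent space; decoding each z_t with a
    # generator G(z_t) yields a continuous transformation between the
    # two endpoint images.
    ts = np.linspace(0.0, 1.0, steps)
    return [(1 - t) * z_start + t * z_end for t in ts]

zs = interpolate_latents(np.zeros(4), np.ones(4), steps=5)
```

Certifying such a transformation means reasoning about the generator's output over the entire interpolation segment, not just sampled points.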

Online Robustness Training for Deep Reinforcement Learning

no code implementations · 3 Nov 2019 · Marc Fischer, Matthew Mirman, Steven Stalder, Martin Vechev

In deep reinforcement learning (RL), adversarial attacks can trick an agent into unwanted states and disrupt training.

reinforcement-learning

Universal Approximation with Certified Networks

1 code implementation ICLR 2020 Maximilian Baader, Matthew Mirman, Martin Vechev

To the best of our knowledge, this is the first work to prove the existence of accurate, interval-certified networks.

Verification of Generative-Model-Based Visual Transformations

no code implementations · 25 Sep 2019 · Matthew Mirman, Timon Gehr, Martin Vechev

Generative networks are promising models for specifying visual transformations.

A Provable Defense for Deep Residual Networks

1 code implementation · 29 Mar 2019 · Matthew Mirman, Gagandeep Singh, Martin Vechev

We present a training system, which can provably defend significantly larger neural networks than previously possible, including ResNet-34 and DenseNet-100.

Adversarial Defense · Novel Concepts

Fast and Effective Robustness Certification

no code implementations NeurIPS 2018 Gagandeep Singh, Timon Gehr, Matthew Mirman, Markus Püschel, Martin Vechev

We present a new method and system, called DeepZ, for certifying neural network robustness based on abstract interpretation.

Distilled Agent DQN for Provable Adversarial Robustness

no code implementations · 27 Sep 2018 · Matthew Mirman, Marc Fischer, Martin Vechev

As deep neural networks have become the state of the art for solving complex reinforcement learning tasks, susceptibility to perceptual adversarial examples has become a concern.

Adversarial Robustness · reinforcement-learning

Training Neural Machines with Trace-Based Supervision

no code implementations ICML 2018 Matthew Mirman, Dimitar Dimitrov, Pavle Djordjevic, Timon Gehr, Martin Vechev

We investigate the effectiveness of trace-based supervision methods for training existing neural abstract machines.

Differentiable Abstract Interpretation for Provably Robust Neural Networks

1 code implementation ICML 2018 Matthew Mirman, Timon Gehr, Martin Vechev

We introduce a scalable method for training robust neural networks based on abstract interpretation.
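Training with abstract interpretation typically means optimizing a loss on the worst-case output the abstraction allows. The sketch below uses the simplest abstraction (a box of logits, as produced by interval propagation) to form a robust cross-entropy loss; it is a hedged illustration of the general idea, not the domains or loss from this paper, and all names are invented for the example.

```python
import numpy as np

def interval_worst_case_logits(lo, hi, true_class):
    # Worst case inside the box: every wrong class takes its upper
    # bound, the true class takes its lower bound.
    worst = hi.copy()
    worst[true_class] = lo[true_class]
    return worst

def robust_cross_entropy(lo, hi, true_class):
    # Softmax cross-entropy on the worst-case logits; minimizing this
    # (via a differentiable bound propagation) increases the certified
    # margin rather than just the clean accuracy.
    z = interval_worst_case_logits(lo, hi, true_class)
    z = z - z.max()                       # numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return -np.log(p[true_class])

loss = robust_cross_entropy(np.array([1.0, -1.0]),
                            np.array([2.0, 0.5]),
                            true_class=0)
```

Because the bound computation is built from differentiable operations, the same machinery used to certify a network can sit inside the training loop.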

Training Neural Machines with Partial Traces

no code implementations ICLR 2018 Matthew Mirman, Dimitar Dimitrov, Pavle Djordjevich, Timon Gehr, Martin Vechev

We present a novel approach for training neural abstract architectures which incorporates (partial) supervision over the machine's interpretable components.
