Search Results for author: Matthew Mirman

Found 12 papers, 3 papers with code

Differentiable Abstract Interpretation for Provably Robust Neural Networks

1 code implementation ICML 2018 Matthew Mirman, Timon Gehr, Martin Vechev

We introduce a scalable method for training robust neural networks based on abstract interpretation.
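The interval-domain flavor of this abstract-interpretation training can be sketched in a few lines — a minimal, illustrative example with made-up weights and perturbation radius, not the paper's actual DiffAI implementation:

```python
import numpy as np

def affine_interval(lo, hi, W, b):
    """Propagate the box [lo, hi] through x -> W @ x + b.

    Splitting W into positive and negative parts yields the tightest
    coordinate-wise bounds for an affine map.
    """
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def relu_interval(lo, hi):
    """ReLU is monotone, so it maps interval endpoints to endpoints."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Tiny one-layer ReLU network (illustrative values only).
W1 = np.array([[1.0, -1.0], [0.5, 0.5]])
b1 = np.array([0.0, 0.0])
x = np.array([1.0, 2.0])
eps = 0.1  # L-infinity perturbation radius

lo, hi = affine_interval(x - eps, x + eps, W1, b1)
lo, hi = relu_interval(lo, hi)  # lo ~ [0, 1.4], hi ~ [0, 1.6]
```

Training then minimizes a loss on the worst case these bounds admit, so the resulting bounds are differentiable certificates rather than post-hoc checks.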

A Provable Defense for Deep Residual Networks

1 code implementation 29 Mar 2019 Matthew Mirman, Gagandeep Singh, Martin Vechev

We present a training system, which can provably defend significantly larger neural networks than previously possible, including ResNet-34 and DenseNet-100.

Adversarial Defense · Novel Concepts

Universal Approximation with Certified Networks

1 code implementation ICLR 2020 Maximilian Baader, Matthew Mirman, Martin Vechev

To the best of our knowledge, this is the first work to prove the existence of accurate, interval-certified networks.

Fast and Effective Robustness Certification

no code implementations NeurIPS 2018 Gagandeep Singh, Timon Gehr, Matthew Mirman, Markus Püschel, Martin Vechev

We present a new method and system, called DeepZ, for certifying neural network robustness based on abstract interpretation.
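DeepZ builds on the zonotope abstract domain (a center point plus error-term generators), which is closed under affine maps — the property that makes propagation cheap. A minimal sketch of that affine step with illustrative numbers, not the actual DeepZ code, which also adds custom transformers for ReLU and other activations:

```python
import numpy as np

def zonotope_affine(center, gens, W, b):
    """Affine maps are exact on zonotopes: apply the same linear map
    to the center and to each generator column."""
    return W @ center + b, W @ gens

def zonotope_box(center, gens):
    """Concretize to an interval box: center +/- sum of |generators|."""
    radius = np.abs(gens).sum(axis=1)
    return center - radius, center + radius

# An L-infinity ball around x is itself a zonotope: center x, one
# generator of magnitude eps per input dimension (illustrative values).
x, eps = np.array([1.0, 2.0]), 0.1
center, gens = x, eps * np.eye(2)

W, b = np.array([[1.0, -1.0]]), np.array([0.0])
center, gens = zonotope_affine(center, gens, W, b)
lo, hi = zonotope_box(center, gens)  # lo ~ [-1.2], hi ~ [-0.8]
```

Because generators track correlations between dimensions, zonotopes are typically tighter than plain interval propagation at similar cost.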

Training Neural Machines with Trace-Based Supervision

no code implementations ICML 2018 Matthew Mirman, Dimitar Dimitrov, Pavle Djordjevic, Timon Gehr, Martin Vechev

We investigate the effectiveness of trace-based supervision methods for training existing neural abstract machines.

Training Neural Machines with Partial Traces

no code implementations ICLR 2018 Matthew Mirman, Dimitar Dimitrov, Pavle Djordjevich, Timon Gehr, Martin Vechev

We present a novel approach for training neural abstract architectures which incorporates (partial) supervision over the machine’s interpretable components.

Online Robustness Training for Deep Reinforcement Learning

no code implementations 3 Nov 2019 Marc Fischer, Matthew Mirman, Steven Stalder, Martin Vechev

In deep reinforcement learning (RL), adversarial attacks can trick an agent into unwanted states and disrupt training.

Reinforcement Learning (RL)

Robustness Certification of Generative Models

no code implementations 30 Apr 2020 Matthew Mirman, Timon Gehr, Martin Vechev

Generative neural networks can be used to specify continuous transformations between images via latent-space interpolation.
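The interpolation itself is straightforward; a generic sketch, not tied to the paper's specific models, showing linear and spherical interpolation between two latent codes (spherical interpolation is often preferred for Gaussian latents):

```python
import numpy as np

def lerp(z0, z1, t):
    """Linear interpolation between two latent codes, t in [0, 1]."""
    return (1.0 - t) * z0 + t * z1

def slerp(z0, z1, t):
    """Spherical interpolation along the great circle between z0 and z1."""
    cos_omega = np.dot(z0, z1) / (np.linalg.norm(z0) * np.linalg.norm(z1))
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    return (np.sin((1.0 - t) * omega) * z0
            + np.sin(t * omega) * z1) / np.sin(omega)

# Decoding each interpolated code with the generator would then yield a
# continuous transformation between the two endpoint images.
z0, z1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
mid = slerp(z0, z1, 0.5)  # unit vector halfway along the arc
```

Certifying such a transformation means proving a property holds for every point on the interpolation path, not just sampled ones.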

Distilled Agent DQN for Provable Adversarial Robustness

no code implementations 27 Sep 2018 Matthew Mirman, Marc Fischer, Martin Vechev

As deep neural networks have become the state of the art for solving complex reinforcement learning tasks, susceptibility to perceptual adversarial examples has become a concern.

Adversarial Robustness · Reinforcement Learning +1

Verification of Generative-Model-Based Visual Transformations

no code implementations 25 Sep 2019 Matthew Mirman, Timon Gehr, Martin Vechev

Generative networks are promising models for specifying visual transformations.

The Fundamental Limits of Interval Arithmetic for Neural Networks

no code implementations 9 Dec 2021 Matthew Mirman, Maximilian Baader, Martin Vechev

Interval analysis (or interval bound propagation, IBP) is a popular technique for verifying and training provably robust deep neural networks, a fundamental challenge in the area of reliable machine learning.
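One well-known root of interval analysis's imprecision (and plausibly related to the limits studied here, though the paper's precise result is stronger) is the classic dependency problem: repeated occurrences of the same variable are treated as independent, so even x - x does not evaluate to zero. A minimal illustration:

```python
def interval_sub(a, b):
    """Interval subtraction: [a_lo, a_hi] - [b_lo, b_hi]."""
    return (a[0] - b[1], a[1] - b[0])

x = (-1.0, 1.0)
# The exact range of x - x is {0}, but interval arithmetic cannot see
# that both operands are the same variable and returns [-2, 2].
print(interval_sub(x, x))  # (-2.0, 2.0)
```

In a network, every layer compounds this loss of correlation, which is why interval-certified training must shape the network to be analyzable, not just accurate.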


LLM Guided Inductive Inference for Solving Compositional Problems

no code implementations 20 Sep 2023 Abhigya Sodani, Lauren Moos, Matthew Mirman

While large language models (LLMs) have demonstrated impressive performance in question-answering tasks, their performance is limited when the questions require knowledge that is not included in the model's training data and can only be acquired through direct observation or interaction with the real world.

Problem Decomposition · Question Answering
