Search Results for author: Timon Gehr

Found 10 papers, 2 papers with code

Differentiable Abstract Interpretation for Provably Robust Neural Networks

1 code implementation · ICML 2018 · Matthew Mirman, Timon Gehr, Martin Vechev

We introduce a scalable method for training robust neural networks based on abstract interpretation.
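The abstract-interpretation idea behind this line of work can be illustrated with the simplest abstract domain, intervals: propagate a box around the input through each layer and check the output bounds. A minimal sketch in that spirit (the layer weights and input box below are hypothetical; the paper's method supports richer domains and makes the propagation differentiable so it can be used during training):

```python
# Interval-domain propagation through one affine layer followed by ReLU.
# Hedged sketch only: weights, bias, and input box are made-up examples.

def affine_interval(W, b, lo, hi):
    """Propagate the input box [lo, hi] through y = W x + b (exact for affine maps)."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        # Lower bound uses lo where the weight is positive, hi where negative.
        l = bias + sum(w * (lo[j] if w >= 0 else hi[j]) for j, w in enumerate(row))
        h = bias + sum(w * (hi[j] if w >= 0 else lo[j]) for j, w in enumerate(row))
        out_lo.append(l)
        out_hi.append(h)
    return out_lo, out_hi

def relu_interval(lo, hi):
    """ReLU applied elementwise to both bounds (monotone, so this is exact)."""
    return [max(0.0, l) for l in lo], [max(0.0, h) for h in hi]

# Hypothetical 2-in / 2-out layer and an L-infinity ball around (0.5, 0.5).
W = [[1.0, -1.0], [0.5, 2.0]]
b = [0.0, -1.0]
lo, hi = [0.4, 0.4], [0.6, 0.6]
lo, hi = relu_interval(*affine_interval(W, b, lo, hi))
```

If the resulting output box cannot change the predicted class, the network is certified robust on that input region; training then penalizes the size of the violating bounds.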

Certifying Geometric Robustness of Neural Networks

1 code implementation · NeurIPS 2019 · Mislav Balunovic, Maximilian Baader, Gagandeep Singh, Timon Gehr, Martin Vechev

The use of neural networks in safety-critical computer vision systems calls for their robustness certification against natural geometric transformations (e.g., rotation, scaling).

Fast and Effective Robustness Certification

no code implementations · NeurIPS 2018 · Gagandeep Singh, Timon Gehr, Matthew Mirman, Markus Püschel, Martin Vechev

We present a new method and system, called DeepZ, for certifying neural network robustness based on abstract interpretation.
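DeepZ builds on the zonotope abstract domain, where a region is a center vector plus error generators scaled by noise symbols in [-1, 1]; affine layers transform a zonotope exactly. A minimal sketch of that affine transformer (the layer and zonotope below are made-up examples; DeepZ's transformers for ReLU and other activations are more involved):

```python
# Toy zonotope abstraction in the spirit of DeepZ. Hedged sketch only:
# the weights, bias, center, and generators below are hypothetical.

def affine_zonotope(W, b, center, generators):
    """y = W x + b: the center goes through the affine map, each generator through W."""
    def matvec(M, v):
        return [sum(m * x for m, x in zip(row, v)) for row in M]
    new_center = [c + bi for c, bi in zip(matvec(W, center), b)]
    new_gens = [matvec(W, g) for g in generators]
    return new_center, new_gens

def concretize(center, generators):
    """Interval bounds of the zonotope: center +/- sum of |generator| entries."""
    rad = [sum(abs(g[i]) for g in generators) for i in range(len(center))]
    return ([c - r for c, r in zip(center, rad)],
            [c + r for c, r in zip(center, rad)])

# Hypothetical layer acting on a 2-d zonotope with a single noise symbol.
W = [[2.0, 0.0], [1.0, 1.0]]
b = [0.0, 0.0]
center, gens = affine_zonotope(W, b, [1.0, 0.0], [[0.5, 0.5]])
lo, hi = concretize(center, gens)
```

Because generators track correlations between dimensions, the zonotope stays tighter than plain interval propagation while remaining cheap to push through affine layers.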

Training Neural Machines with Trace-Based Supervision

no code implementations · ICML 2018 · Matthew Mirman, Dimitar Dimitrov, Pavle Djordjevic, Timon Gehr, Martin Vechev

We investigate the effectiveness of trace-based supervision methods for training existing neural abstract machines.

Robustness Certification with Refinement

no code implementations · ICLR 2019 · Gagandeep Singh, Timon Gehr, Markus Püschel, Martin Vechev

We present a novel approach for verification of neural networks which combines scalable over-approximation methods with precise (mixed integer) linear programming.

Training Neural Machines with Partial Traces

no code implementations · ICLR 2018 · Matthew Mirman, Dimitar Dimitrov, Pavle Djordjevich, Timon Gehr, Martin Vechev

We present a novel approach for training neural abstract architectures which incorporates (partial) supervision over the machine's interpretable components.

Robustness Certification of Generative Models

no code implementations · 30 Apr 2020 · Matthew Mirman, Timon Gehr, Martin Vechev

Generative neural networks can be used to specify continuous transformations between images via latent-space interpolation.

Provably Robust Adversarial Examples

no code implementations · ICLR 2022 · Dimitar I. Dimitrov, Gagandeep Singh, Timon Gehr, Martin Vechev

We introduce the concept of provably robust adversarial examples for deep neural networks: connected input regions, constructed from standard adversarial examples, that are guaranteed to be robust to a set of real-world perturbations (such as changes in pixel intensity and geometric transformations).

Verification of Generative-Model-Based Visual Transformations

no code implementations · 25 Sep 2019 · Matthew Mirman, Timon Gehr, Martin Vechev

Generative networks are promising models for specifying visual transformations.
