no code implementations • ICLR 2022 • Dimitar I. Dimitrov, Gagandeep Singh, Timon Gehr, Martin Vechev
We introduce the concept of provably robust adversarial examples for deep neural networks: connected input regions, constructed from standard adversarial examples, that are guaranteed to be robust to a set of real-world perturbations (such as changes in pixel intensity and geometric transformations).
no code implementations • 30 Apr 2020 • Matthew Mirman, Timon Gehr, Martin Vechev
Generative neural networks can be used to specify continuous transformations between images via latent-space interpolation.
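The abstract above refers to specifying transformations via latent-space interpolation. As a minimal sketch (the generator/decoder itself is assumed and omitted), interpolating linearly between two latent codes yields a path whose decoded points form a continuous transformation between the two images:

```python
import numpy as np

def interpolate_latents(z0, z1, steps=5):
    """Linearly interpolate between two latent codes; decoding each
    point with a generator would yield a gradual image transformation."""
    ts = np.linspace(0.0, 1.0, steps)
    return [(1.0 - t) * z0 + t * z1 for t in ts]

z0 = np.array([0.0, 0.0])
z1 = np.array([1.0, 2.0])
path = interpolate_latents(z0, z1, steps=3)
# path[1] is the midpoint between z0 and z1
```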
1 code implementation • NeurIPS 2019 • Mislav Balunovic, Maximilian Baader, Gagandeep Singh, Timon Gehr, Martin Vechev
The use of neural networks in safety-critical computer vision systems calls for their robustness certification against natural geometric transformations (e.g., rotation, scaling).
no code implementations • 25 Sep 2019 • Matthew Mirman, Timon Gehr, Martin Vechev
Generative networks are promising models for specifying visual transformations.
no code implementations • ICLR 2019 • Gagandeep Singh, Timon Gehr, Markus Püschel, Martin Vechev
We present a novel approach to neural network verification that combines scalable over-approximation methods with precise mixed-integer linear programming.
no code implementations • ICLR 2019 • Marc Fischer, Mislav Balunovic, Dana Drachsler-Cohen, Timon Gehr, Ce Zhang, Martin Vechev
We present DL2, a system for training and querying neural networks with logical constraints.
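In the spirit of DL2's translation of logical constraints into differentiable losses (this is an illustrative sketch, not DL2's actual API), a comparison such as a <= b maps to the penalty max(a - b, 0), conjunction to a sum of penalties, and disjunction to a product, so the loss is zero exactly when the constraint holds:

```python
def loss_le(a, b):
    """Penalty for the constraint a <= b: zero iff it is satisfied."""
    return max(a - b, 0.0)

def loss_and(*ls):
    """Conjunction: the sum vanishes iff every conjunct's penalty is zero."""
    return sum(ls)

def loss_or(*ls):
    """Disjunction: the product vanishes if any disjunct is satisfied."""
    out = 1.0
    for l in ls:
        out *= l
    return out

# Constraint 0 <= x <= 1; e.g. x = 1.5 violates it with penalty 0.5.
box = lambda x: loss_and(loss_le(0.0, x), loss_le(x, 1.0))
```

Training then minimizes this penalty alongside the usual task loss, steering the network toward constraint-satisfying outputs.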
no code implementations • NeurIPS 2018 • Gagandeep Singh, Timon Gehr, Matthew Mirman, Markus Püschel, Martin Vechev
We present a new method and system, called DeepZ, for certifying neural network robustness based on abstract interpretation.
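DeepZ itself certifies robustness with the zonotope abstract domain; the simpler interval domain below sketches the underlying idea of abstract interpretation (a sound over-approximation propagated layer by layer, hypothetical layer shapes assumed):

```python
import numpy as np

def affine_bounds(lo, hi, W, b):
    """Sound elementwise bounds on y = W @ x + b given lo <= x <= hi,
    obtained by splitting W into its positive and negative parts."""
    Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b

def relu_bounds(lo, hi):
    """ReLU is monotone, so it maps interval bounds elementwise."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# One affine layer over the input box [0,1] x [0,1]:
W = np.array([[1.0, -1.0]])
b = np.array([0.0])
lo, hi = affine_bounds(np.array([0.0, 0.0]), np.array([1.0, 1.0]), W, b)
# lo = [-1.0], hi = [1.0]
```

If the certified output bounds keep the correct class on top for every point in the input region, robustness is proved; zonotopes refine this by tracking correlations between neurons that intervals discard.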
1 code implementation • ICML 2018 • Matthew Mirman, Timon Gehr, Martin Vechev
We introduce a scalable method for training robust neural networks based on abstract interpretation.
no code implementations • ICML 2018 • Matthew Mirman, Dimitar Dimitrov, Pavle Djordjevic, Timon Gehr, Martin Vechev
We investigate the effectiveness of trace-based supervision methods for training existing neural abstract machines.
no code implementations • ICLR 2018 • Matthew Mirman, Dimitar Dimitrov, Pavle Djordjevich, Timon Gehr, Martin Vechev
We present a novel approach for training neural abstract architectures which incorporates (partial) supervision over the machine’s interpretable components.