no code implementations • 20 Sep 2023 • Abhigya Sodani, Lauren Moos, Matthew Mirman
While large language models (LLMs) have demonstrated impressive performance on question-answering tasks, they struggle when a question requires knowledge that is absent from their training data and can only be acquired through direct observation of, or interaction with, the real world.
no code implementations • 9 Dec 2021 • Matthew Mirman, Maximilian Baader, Martin Vechev
Interval analysis (or interval bound propagation, IBP) is a popular technique for verifying and training provably robust deep neural networks, a fundamental challenge in the area of reliable machine learning.
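For intuition, a minimal sketch of IBP through a hypothetical one-layer network (a toy illustration, not the paper's implementation): an input box is pushed through the affine layer via the center/radius form, and through ReLU via monotonicity.

```python
import torch

def ibp_affine(l, u, W, b):
    # Affine layers are handled exactly in the center/radius form of an interval.
    c, r = (l + u) / 2, (u - l) / 2
    c_out = W @ c + b
    r_out = W.abs() @ r
    return c_out - r_out, c_out + r_out

def ibp_relu(l, u):
    # ReLU is monotone, so interval bounds propagate elementwise.
    return torch.relu(l), torch.relu(u)

# Bound a toy one-layer net on an L-infinity ball of radius eps around x.
x, eps = torch.randn(4), 0.1
W, b = torch.randn(3, 4), torch.randn(3)
l, u = ibp_relu(*ibp_affine(x - eps, x + eps, W, b))
print(l, u)  # sound (possibly loose) bounds on the network's outputs
```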
no code implementations • 30 Apr 2020 • Matthew Mirman, Timon Gehr, Martin Vechev
Generative neural networks can be used to specify continuous transformations between images via latent-space interpolation.
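A minimal sketch of such an interpolation, with a stand-in generator `G` (any pretrained GAN generator or VAE decoder with this signature would do); the paper's concern is what can be certified about the images along such a latent segment.

```python
import torch

# Stand-in generator mapping 64-d latent codes to flat 28x28 images;
# in practice G would be a pretrained generator or decoder.
G = torch.nn.Sequential(torch.nn.Linear(64, 128), torch.nn.Tanh(),
                        torch.nn.Linear(128, 28 * 28))

z0, z1 = torch.randn(64), torch.randn(64)        # interpolation endpoints
ts = torch.linspace(0, 1, 8)
frames = [G((1 - t) * z0 + t * z1) for t in ts]  # a continuous image transformation
```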
no code implementations • 3 Nov 2019 • Marc Fischer, Matthew Mirman, Steven Stalder, Martin Vechev
In deep reinforcement learning (RL), adversarial attacks can trick an agent into unwanted states and disrupt training.
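As an illustration of the threat model only (this is plain FGSM applied to the agent's observation, not necessarily the paper's attack): a small perturbation within an L-infinity ball can make the policy's chosen action less likely.

```python
import torch
import torch.nn.functional as F

def fgsm_observation(policy, obs, eps):
    # `policy` is a hypothetical network mapping observations to action
    # logits. Perturb the observation so the originally chosen action
    # becomes less likely.
    obs = obs.clone().detach().requires_grad_(True)
    logits = policy(obs)
    action = logits.argmax().unsqueeze(0)
    F.cross_entropy(logits.unsqueeze(0), action).backward()
    return (obs + eps * obs.grad.sign()).detach()
```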
1 code implementation • ICLR 2020 • Maximilian Baader, Matthew Mirman, Martin Vechev
To the best of our knowledge, this is the first work to prove the existence of accurate, interval-certified networks.
no code implementations • 25 Sep 2019 • Matthew Mirman, Timon Gehr, Martin Vechev
Generative networks are promising models for specifying visual transformations.
1 code implementation • 29 Mar 2019 • Matthew Mirman, Gagandeep Singh, Martin Vechev
We present a training system that can provably defend significantly larger neural networks than previously possible, including ResNet-34 and DenseNet-100.
no code implementations • NeurIPS 2018 • Gagandeep Singh, Timon Gehr, Matthew Mirman, Markus Püschel, Martin Vechev
We present a new method and system, called DeepZ, for certifying neural network robustness based on abstract interpretation.
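Roughly, a zonotope is a center plus a set of shared error terms; affine layers transform it exactly, which is where its precision over plain intervals comes from. A minimal sketch of the affine case (DeepZ's ReLU transformer, which introduces a fresh error term per unstable neuron, is omitted here):

```python
import torch

def zonotope_affine(c, E, W, b):
    # Zonotope {c + E @ eps : eps in [-1, 1]^k}; affine maps are exact.
    return W @ c + b, W @ E

def zonotope_concretize(c, E):
    # Per-coordinate interval bounds: c_i +/- sum_j |E_ij|.
    r = E.abs().sum(dim=1)
    return c - r, c + r

# Encode an L-infinity ball of radius eps around x as a zonotope.
x, eps = torch.randn(4), 0.1
c, E = x, eps * torch.eye(4)
W, b = torch.randn(3, 4), torch.randn(3)
print(zonotope_concretize(*zonotope_affine(c, E, W, b)))
```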
no code implementations • 27 Sep 2018 • Matthew Mirman, Marc Fischer, Martin Vechev
As deep neural networks have become the state of the art for solving complex reinforcement learning tasks, susceptibility to perceptual adversarial examples has become a concern.
no code implementations • ICML 2018 • Matthew Mirman, Dimitar Dimitrov, Pavle Djordjevic, Timon Gehr, Martin Vechev
We investigate the effectiveness of trace-based supervision methods for training existing neural abstract machines.
1 code implementation • ICML 2018 • Matthew Mirman, Timon Gehr, Martin Vechev
We introduce a scalable method for training robust neural networks based on abstract interpretation.
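A minimal sketch of the training idea, using the interval domain and a single linear classifier (the actual system supports deeper networks and richer abstract domains): propagate an input box through the network, form a worst-case logit vector, and backpropagate through the abstraction.

```python
import torch
import torch.nn.functional as F

def interval_linear(l, u, W, b):
    # Interval propagation through x -> W @ x + b.
    c, r = (l + u) / 2, (u - l) / 2
    c, r = W @ c + b, W.abs() @ r
    return c - r, c + r

def robust_loss(W, b, x, y, eps):
    # Worst case over the box: lower-bound the true class's logit,
    # upper-bound all other logits, then take the usual cross-entropy.
    l, u = interval_linear(x - eps, x + eps, W, b)
    worst = u.clone()
    worst[y] = l[y]
    return F.cross_entropy(worst.unsqueeze(0), torch.tensor([y]))

W = torch.randn(10, 4, requires_grad=True)
b = torch.zeros(10, requires_grad=True)
robust_loss(W, b, torch.randn(4), 3, 0.1).backward()  # differentiable end to end
```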
no code implementations • ICLR 2018 • Matthew Mirman, Dimitar Dimitrov, Pavle Djordjevich, Timon Gehr, Martin Vechev
We present a novel approach for training neural abstract architectures which incorporates (partial) supervision over the machine's interpretable components.
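The idea can be sketched as an auxiliary loss (a hypothetical signature for illustration; the papers study when and how much such supervision helps): alongside the end-task loss, supervise whatever intermediate, interpretable trace predictions are available.

```python
import torch.nn.functional as F

def trace_supervised_loss(output_logits, target, traces, target_traces, alpha=0.5):
    # End-task loss plus (partial) supervision on the machine's
    # interpretable trace predictions; `traces` may cover only some
    # components or time steps.
    loss = F.cross_entropy(output_logits, target)
    for pred, gold in zip(traces, target_traces):
        loss = loss + alpha * F.cross_entropy(pred, gold)
    return loss
```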