no code implementations • 11 Mar 2024 • Stefan Balauca, Mark Niklas Müller, Yuhao Mao, Maximilian Baader, Marc Fischer, Martin Vechev
Training neural networks with high certified accuracy against adversarial examples remains an open problem despite significant efforts.
no code implementations • 6 Mar 2024 • Dimitar I. Dimitrov, Maximilian Baader, Mark Niklas Müller, Martin Vechev
In this work, we propose the first algorithm that exactly reconstructs whole batches with batch size b > 1.
1 code implementation • 5 Feb 2024 • Jasper Dekoninck, Mark Niklas Müller, Maximilian Baader, Marc Fischer, Martin Vechev
Large language models are widespread, with their performance on benchmarks frequently guiding user preferences for one model over another.
no code implementations • 7 Nov 2023 • Maximilian Baader, Mark Niklas Müller, Yuhao Mao, Martin Vechev
We show that: (i) more advanced relaxations allow a larger class of univariate functions to be expressed as precisely analyzable ReLU networks, (ii) more precise relaxations can allow exponentially larger solution spaces of ReLU networks encoding the same functions, and (iii) even using the most precise single-neuron relaxations, it is impossible to construct precisely analyzable ReLU networks that express multivariate, convex, monotone CPWL functions.
no code implementations • 9 Dec 2021 • Matthew Mirman, Maximilian Baader, Martin Vechev
Interval analysis (or interval bound propagation, IBP) is a popular technique for verifying and training provably robust deep neural networks, a fundamental challenge in the area of reliable machine learning.
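To illustrate the idea behind interval bound propagation, here is a minimal sketch (not the paper's implementation): an elementwise input interval is pushed through an affine layer by splitting weights into positive and negative parts, and through ReLU by exploiting its monotonicity. The network weights below are hypothetical toy values.

```python
import numpy as np

def ibp_affine(lo, hi, W, b):
    """Propagate the box [lo, hi] through y = W x + b.
    Positive weights map lower bounds to lower bounds;
    negative weights swap the roles of lo and hi."""
    W_pos = np.maximum(W, 0.0)
    W_neg = np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def ibp_relu(lo, hi):
    """ReLU is monotone, so it maps interval endpoints to endpoints."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Toy 2-layer network: bound the output for all inputs within eps of x.
x = np.array([1.0, -0.5])
eps = 0.1
lo, hi = x - eps, x + eps
W1, b1 = np.array([[1.0, -2.0], [0.5, 1.0]]), np.array([0.1, -0.2])
W2, b2 = np.array([[1.0, 1.0]]), np.array([0.0])
lo, hi = ibp_relu(*ibp_affine(lo, hi, W1, b1))
lo, hi = ibp_affine(lo, hi, W2, b2)
# [lo, hi] now provably contains the network output for the whole input box.
```

Training with such bounds (e.g., penalizing the worst-case logit) is what makes IBP a certified-training method rather than only a verifier.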
1 code implementation • 26 Nov 2021 • Momchil Peychev, Anian Ruoss, Mislav Balunović, Maximilian Baader, Martin Vechev
This enables us to learn individually fair representations that map similar individuals close together by using adversarial training to minimize the distance between their representations.
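A hypothetical sketch of the kind of objective this describes: a task loss plus a penalty on the distance between the representations of a pair of similar individuals. All names and values below are illustrative, and in adversarial training the second representation would come from an adversarially chosen similar individual that maximizes the distance.

```python
import numpy as np

def fair_representation_loss(z_x, z_xp, logits, labels, lam=1.0):
    """Cross-entropy task loss plus a penalty pulling the representations
    z_x and z_xp of similar individuals close together (illustrative only)."""
    # softmax cross-entropy on the task logits
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    ce = -np.log(probs[np.arange(len(labels)), labels]).mean()
    # mean Euclidean distance between paired representations
    sim_penalty = np.linalg.norm(z_x - z_xp, axis=1).mean()
    return ce + lam * sim_penalty
```

Minimizing the penalty term over such pairs is what maps similar individuals close together in representation space.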
1 code implementation • 1 Jul 2021 • Marc Fischer, Maximilian Baader, Martin Vechev
We present a new certification method for image and point cloud segmentation based on randomized smoothing.
no code implementations • 12 Feb 2021 • Nikola Jovanović, Mislav Balunović, Maximilian Baader, Martin Vechev
Certified defenses based on convex relaxations are an established technique for training provably robust models.
1 code implementation • 19 Sep 2020 • Anian Ruoss, Maximilian Baader, Mislav Balunović, Martin Vechev
Recent work has exposed the vulnerability of computer vision models to vector field attacks.
1 code implementation • NeurIPS 2020 • Marc Fischer, Maximilian Baader, Martin Vechev
We extend randomized smoothing to cover parameterized transformations (e.g., rotations, translations) and certify robustness in the parameter space (e.g., rotation angle).
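The core idea can be sketched with a Monte-Carlo estimate of a classifier smoothed over the transformation parameter rather than over pixels. This is a minimal illustration, not the paper's certification procedure; the 1-D "image", the circular-shift transform, and the peak-position classifier are all toy assumptions.

```python
import numpy as np

def smoothed_prediction(classify, transform, x, sigma, n=1000, seed=0):
    """Estimate g(x) = argmax_c P_{t ~ N(0, sigma^2)}[classify(transform(x, t)) = c],
    i.e. the majority vote of the classifier under random transformation parameters."""
    rng = np.random.default_rng(seed)
    votes = {}
    for t in rng.normal(0.0, sigma, size=n):
        c = classify(transform(x, t))
        votes[c] = votes.get(c, 0) + 1
    return max(votes, key=votes.get)

# Toy example: 1-D "image", transform = circular shift by round(t),
# classifier = position of the peak.
x = np.array([0.0, 1.0, 0.0, 0.0])
transform = lambda x, t: np.roll(x, int(round(t)))
classify = lambda x: int(np.argmax(x))
print(smoothed_prediction(classify, transform, x, sigma=0.3))  # majority vote over sampled shifts
```

A certificate then lower-bounds the probability of the majority class, which translates into a guaranteed range of transformation parameters (e.g., rotation angles) under which the prediction cannot change.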
1 code implementation • NeurIPS 2019 • Mislav Balunović, Maximilian Baader, Gagandeep Singh, Timon Gehr, Martin Vechev
The use of neural networks in safety-critical computer vision systems calls for their robustness certification against natural geometric transformations (e.g., rotation, scaling).
1 code implementation • ICLR 2020 • Maximilian Baader, Matthew Mirman, Martin Vechev
To the best of our knowledge, this is the first work to prove the existence of accurate, interval-certified networks.
no code implementations • 25 Sep 2019 • Marc Fischer, Maximilian Baader, Martin Vechev
We present a novel statistical certification method that generalizes prior work based on smoothing to handle richer perturbations.