no code implementations • 13 Oct 2021 • Tobias Wegel, Felix Assion, David Mickisch, Florens Greßner
The robustness of classifiers with respect to the Wasserstein metric can be assessed either by proving the absence of adversarial examples (certification) or by constructing them (attack).
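To make the attack side of this dichotomy concrete, here is a minimal sketch of a random search for an adversarial example inside a Wasserstein ball. The toy histogram classifier, the budget `eps`, and the search procedure are illustrative assumptions, not the paper's construction.

```python
# Hypothetical sketch: random-search "attack" under a Wasserstein budget.
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

def classify(x):
    # Toy linear classifier on a 1-D histogram (assumed for illustration).
    w = np.linspace(-1.0, 1.0, x.size)
    return int(w @ x > 0.0)

def wasserstein_attack(x, eps=0.05, trials=2000):
    """Search for x' with classify(x') != classify(x) and W1(x, x') <= eps.
    Success proves the presence of an adversarial example (attack);
    failure of this search is only evidence, not a certificate."""
    label = classify(x)
    bins = np.arange(x.size)
    for _ in range(trials):
        cand = np.clip(x + rng.normal(scale=0.01, size=x.size), 0.0, None)
        cand /= cand.sum()  # keep the candidate a probability histogram
        if (wasserstein_distance(bins, bins, x, cand) <= eps
                and classify(cand) != label):
            return cand
    return None

x = rng.dirichlet(np.ones(16))  # random 16-bin histogram
print("adversarial example found:", wasserstein_attack(x) is not None)
```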
no code implementations • 9 Nov 2020 • Paul Schwerdtner, Florens Greßner, Nikhil Kapoor, Felix Assion, René Sass, Wiebke Günther, Fabian Hüger, Peter Schlicht
In this paper we propose a framework for assessing the risk associated with deploying a machine learning model in a specified environment.
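A possible reading of such a framework, sketched below: aggregate probability-weighted failure severities across the operating conditions of the specified environment. All names, rates, and weights here are illustrative assumptions, not the paper's actual risk model.

```python
# Hypothetical sketch of a deployment-risk calculation.
from dataclasses import dataclass

@dataclass
class Hazard:
    name: str
    failure_rate: float  # estimated P(model fails | condition), in [0, 1]
    severity: float      # cost of that failure, in domain-specific units

def environment_risk(hazards, condition_weights):
    """Expected cost: sum over conditions of
    P(condition) * P(failure | condition) * severity."""
    return sum(w * h.failure_rate * h.severity
               for h, w in zip(hazards, condition_weights))

hazards = [
    Hazard("fog", failure_rate=0.08, severity=10.0),
    Hazard("night", failure_rate=0.03, severity=10.0),
    Hazard("clear day", failure_rate=0.005, severity=10.0),
]
weights = [0.1, 0.3, 0.6]  # assumed frequency of each condition
print(f"expected deployment risk: {environment_risk(hazards, weights):.3f}")
```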
no code implementations • 5 Feb 2020 • David Mickisch, Felix Assion, Florens Greßner, Wiebke Günther, Mariele Motta
We study the minimum distance of data points to the decision boundary and how this margin evolves over the training of a deep neural network.
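As a minimal sketch of tracking this margin over training: for a linear model the distance of a point to the decision boundary is exactly |w·x + b| / ||w||, so it can be logged per epoch. This toy setup is an assumption for illustration; for the deep networks studied in the paper, the margin must instead be estimated, e.g. by gradient-based search for the nearest boundary point.

```python
# Sketch: minimum margin of the training set, tracked over SGD epochs.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # toy linearly separable labels

w, b, lr = np.zeros(2), 0.0, 0.1
for epoch in range(20):
    for i in rng.permutation(len(X)):  # SGD on the logistic loss
        p = 1.0 / (1.0 + np.exp(-(X[i] @ w + b)))
        w -= lr * (p - y[i]) * X[i]
        b -= lr * (p - y[i])
    # Exact distance to the linear decision boundary for every point.
    margins = np.abs(X @ w + b) / np.linalg.norm(w)
    print(f"epoch {epoch:2d}: min margin = {margins.min():.4f}")
```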
no code implementations • 17 Jun 2019 • Felix Assion, Peter Schlicht, Florens Greßner, Wiebke Günther, Fabian Hüger, Nico Schmidt, Umair Rasheed
We call this systematic approach to constructing adversarial attacks the "attack generator".