no code implementations • NeurIPS 2023 • Jan Schuchardt, Yan Scholten, Stephan Günnemann
For the first time, we propose a sound notion of adversarial robustness that accounts for task equivariance.
no code implementations • NeurIPS 2023 • Yan Scholten, Jan Schuchardt, Aleksandar Bojchevski, Stephan Günnemann
Randomized smoothing is a powerful framework for making models provably robust against small changes to their inputs: it guarantees robustness of the majority vote obtained by randomly adding noise to the input before classification.
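The majority-vote mechanism described above can be sketched in a few lines. This is a minimal illustration, not the certification procedure from the paper: the base classifier, noise level `sigma`, and sample count are all illustrative choices.

```python
import numpy as np

def smoothed_predict(classify, x, sigma=0.25, n_samples=1000, seed=0):
    """Majority vote of a base classifier under Gaussian input noise.

    `classify`, `sigma`, and `n_samples` are illustrative assumptions,
    not the exact setup used in the papers above.
    """
    rng = np.random.default_rng(seed)
    votes = {}
    for _ in range(n_samples):
        noisy = x + rng.normal(0.0, sigma, size=x.shape)
        label = classify(noisy)
        votes[label] = votes.get(label, 0) + 1
    # the smoothed classifier returns the most frequent label
    return max(votes, key=votes.get)

# Toy base classifier: sign of the mean of the input vector.
base = lambda v: int(v.mean() > 0)
x = np.full(16, 0.5)  # clearly positive input, far from the decision boundary
prediction = smoothed_predict(base, x)
```

Because the input sits well inside one decision region, almost every noisy sample agrees, which is exactly the situation in which smoothing-based certificates can guarantee robustness of the vote.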
no code implementations • 6 Oct 2023 • Marcel Kollovieh, Lukas Gosch, Yan Scholten, Marten Lienen, Stephan Günnemann
In this work, we introduce Score-Based Adversarial Generation (ScoreAG), a novel framework that leverages advances in score-based generative models to generate adversarial examples beyond $\ell_p$-norm constraints (so-called unrestricted adversarial examples), overcoming the limitations of norm-bounded attacks.
1 code implementation • 16 Aug 2023 • Francesco Campi, Lukas Gosch, Tom Wollschläger, Yan Scholten, Stephan Günnemann
We perform the first adversarial robustness study of Graph Neural Networks (GNNs) that are provably more powerful than traditional Message Passing Neural Networks (MPNNs).
1 code implementation • 5 Jan 2023 • Yan Scholten, Jan Schuchardt, Simon Geisler, Aleksandar Bojchevski, Stephan Günnemann
To remedy this, we propose novel gray-box certificates that exploit the message-passing principle of GNNs: We randomly intercept messages and carefully analyze the probability that messages from adversarially controlled nodes reach their target nodes.
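The interception idea above can be illustrated with a toy Monte Carlo check: if each edge on a message path is independently intercepted with probability `p_delete`, a message from an adversarial node survives a path of length $k$ with probability $(1-p)^k$. The function name and parameters below are hypothetical, for illustration only; the paper's certificates involve a more careful analysis than this sketch.

```python
import numpy as np

def survival_prob_mc(path_len, p_delete, n_trials=20000, seed=0):
    """Monte Carlo estimate of the probability that a message survives
    a path of `path_len` edges when each edge independently intercepts
    (drops) the message with probability `p_delete`."""
    rng = np.random.default_rng(seed)
    # drops[i, j] is True if trial i intercepts the message on edge j
    drops = rng.random((n_trials, path_len)) < p_delete
    # the message reaches its target iff no edge on the path drops it
    return float((~drops.any(axis=1)).mean())

p, k = 0.3, 2
estimate = survival_prob_mc(k, p)
exact = (1 - p) ** k  # closed form: 0.49
```

The closer this survival probability is to zero for adversarially controlled nodes, the less influence they can exert on the target's prediction, which is what the gray-box certificates exploit.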