no code implementations • 14 Feb 2024 • Simon Geisler, Tom Wollschläger, M. H. I. Abdalla, Johannes Gasteiger, Stephan Günnemann
Current LLM alignment methods are readily broken through specifically crafted adversarial prompts.
no code implementations • 9 Dec 2023 • Ege Erdogan, Simon Geisler, Stephan Günnemann
It is well-known that deep learning models are vulnerable to small input perturbations.
no code implementations • 29 Nov 2023 • Filippo Guerranti, Zinuo Yi, Anna Starovoit, Rafiq Kamel, Simon Geisler, Stephan Günnemann
Contrastive learning (CL) has emerged as a powerful framework for learning representations of images and text in a self-supervised manner while enhancing model robustness against adversarial attacks.
no code implementations • 11 Nov 2023 • Jianxiang Feng, JongSeok Lee, Simon Geisler, Stephan Günnemann, Rudolph Triebel
To facilitate reliable deployments of autonomous robots in the real world, Out-of-Distribution (OOD) detection capabilities are often required.
no code implementations • NeurIPS 2023 • Lukas Gosch, Simon Geisler, Daniel Sturm, Bertrand Charpentier, Daniel Zügner, Stephan Günnemann
Including these contributions, we demonstrate that adversarial training is a state-of-the-art defense against adversarial structure perturbations.
no code implementations • 1 May 2023 • Lukas Gosch, Daniel Sturm, Simon Geisler, Stephan Günnemann
Many works show that node-level predictions of Graph Neural Networks (GNNs) are not robust to small, often termed adversarial, changes to the graph structure.
no code implementations • 31 Jan 2023 • Felix Mujkanovic, Simon Geisler, Stephan Günnemann, Aleksandar Bojchevski
A cursory reading of the literature suggests that we have made a lot of progress in designing effective adversarial defenses for Graph Neural Networks (GNNs).
1 code implementation • 31 Jan 2023 • Simon Geisler, Yujia Li, Daniel Mankowitz, Ali Taylan Cemgil, Stephan Günnemann, Cosmin Paduraru
Transformers were originally proposed as a sequence-to-sequence model for text but have become vital for a wide range of modalities, including images, audio, video, and undirected graphs.
Ranked #1 on Graph Property Prediction on ogbg-code2
1 code implementation • 5 Jan 2023 • Yan Scholten, Jan Schuchardt, Simon Geisler, Aleksandar Bojchevski, Stephan Günnemann
To remedy this, we propose novel gray-box certificates that exploit the message-passing principle of GNNs: We randomly intercept messages and carefully analyze the probability that messages from adversarially controlled nodes reach their target nodes.
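The interception idea described above can be illustrated with a small Monte Carlo sketch (all names and the toy graph are illustrative assumptions, not the paper's implementation): drop each edge's message independently with some probability and estimate how often a message from an adversarially controlled node still reaches its target.

```python
import random

def reaches(edges, source, target, keep_prob, rng):
    """One trial: keep each directed edge with probability keep_prob
    (i.e., intercept it with probability 1 - keep_prob), then check
    reachability from source to target via BFS over the kept edges."""
    kept = [(u, v) for (u, v) in edges if rng.random() < keep_prob]
    adj = {}
    for u, v in kept:
        adj.setdefault(u, []).append(v)
    frontier, seen = [source], {source}
    while frontier:
        node = frontier.pop()
        if node == target:
            return True
        for nxt in adj.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

def estimate_reach_probability(edges, source, target, keep_prob,
                               trials=10_000, seed=0):
    """Monte Carlo estimate of the probability that a message from
    `source` (e.g., an adversarially controlled node) reaches `target`
    under independent random interception of each message."""
    rng = random.Random(seed)
    hits = sum(reaches(edges, source, target, keep_prob, rng)
               for _ in range(trials))
    return hits / trials

# Toy directed graph: adversarial node 0, target node 3,
# connected by two independent two-edge paths.
edges = [(0, 1), (1, 3), (0, 2), (2, 3)]
# Analytically, each path survives with 0.5 * 0.5 = 0.25,
# so the reach probability is 1 - (1 - 0.25)^2 = 0.4375.
print(estimate_reach_probability(edges, 0, 3, keep_prob=0.5))
```

This only simulates reachability; the paper's certificates additionally bound how such interception probabilities translate into robustness guarantees for the GNN's predictions.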
no code implementations • 9 Jul 2022 • Morgane Ayle, Bertrand Charpentier, John Rachwan, Daniel Zügner, Simon Geisler, Stephan Günnemann
The robustness and anomaly detection capability of neural networks are crucial topics for their safe adoption in the real world.
1 code implementation • 21 Jun 2022 • John Rachwan, Daniel Zügner, Bertrand Charpentier, Simon Geisler, Morgane Ayle, Stephan Günnemann
Pruning, the task of sparsifying deep neural networks, has recently received increasing attention.
2 code implementations • NeurIPS 2021 • Maximilian Stadler, Bertrand Charpentier, Simon Geisler, Daniel Zügner, Stephan Günnemann
GPN outperforms existing approaches for uncertainty estimation in the experiments.
2 code implementations • NeurIPS 2021 • Simon Geisler, Tobias Schmidt, Hakan Şirin, Daniel Zügner, Aleksandar Bojchevski, Stephan Günnemann
Graph Neural Networks (GNNs) are increasingly important given their popularity and the diversity of applications.
no code implementations • ICLR 2022 • Simon Geisler, Johanna Sommer, Jan Schuchardt, Aleksandar Bojchevski, Stephan Günnemann
Specifically, most datasets only capture a simpler subproblem and likely suffer from spurious features.
1 code implementation • ICLR 2022 • Bertrand Charpentier, Oliver Borchert, Daniel Zügner, Simon Geisler, Stephan Günnemann
Uncertainty awareness is crucial to develop reliable machine learning models.
1 code implementation • NeurIPS 2020 • Simon Geisler, Daniel Zügner, Stephan Günnemann
Perturbations targeting the graph structure have proven to be extremely effective in reducing the performance of Graph Neural Networks (GNNs), and traditional defenses such as adversarial training do not seem to be able to improve robustness.