Search Results for author: Simon Geisler

Found 23 papers, 11 papers with code

LLM-Safety Evaluations Lack Robustness

no code implementations · 4 Mar 2025 · Tim Beyer, Sophie Xhonneux, Simon Geisler, Gauthier Gidel, Leo Schwinn, Stephan Günnemann

In this paper, we argue that current safety alignment research efforts for large language models are hindered by many intertwined sources of noise, such as small datasets, methodological inconsistencies, and unreliable evaluation setups.

Red Teaming · Response Generation +1

REINFORCE Adversarial Attacks on Large Language Models: An Adaptive, Distributional, and Semantic Objective

1 code implementation · 24 Feb 2025 · Simon Geisler, Tom Wollschläger, M. H. I. Abdalla, Vincent Cohen-Addad, Johannes Gasteiger, Stephan Günnemann

While it is often easy to craft prompts that yield a substantial likelihood for the affirmative response, the attacked model frequently does not complete the response in a harmful manner.
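The title's REINFORCE objective refers to the score-function (REINFORCE) gradient estimator, which lets one optimize an expected reward over discrete samples. As a minimal, hedged illustration of that estimator only (not the paper's adaptive, distributional, or semantic objective), here is a sketch for a single Bernoulli variable; all names are illustrative:

```python
import numpy as np

def reinforce_gradient(theta, reward_fn, num_samples=10_000, rng=None):
    """Score-function (REINFORCE) estimate of d/dtheta E_{x~Bernoulli(p)}[reward(x)],
    where p = sigmoid(theta): grad = E[reward(x) * d/dtheta log p(x)]."""
    rng = np.random.default_rng(rng)
    p = 1.0 / (1.0 + np.exp(-theta))       # Bernoulli success probability
    x = rng.random(num_samples) < p        # sampled binary outcomes
    # For the sigmoid parametrization, d/dtheta log p(x) simplifies to (x - p).
    score = x.astype(float) - p
    rewards = np.array([reward_fn(xi) for xi in x])
    return float(np.mean(rewards * score))

# Example: reward(x) = x, so E[reward] = p and the true gradient is p * (1 - p).
est = reinforce_gradient(0.0, lambda x: float(x), num_samples=200_000, rng=0)
```

With theta = 0 the true gradient is 0.25; a sample size of 200,000 keeps the Monte Carlo error well below that.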

Graph Neural Networks for Edge Signals: Orientation Equivariance and Invariance

no code implementations · 22 Oct 2024 · Dominik Fuchsgruber, Tim Poštuvan, Stephan Günnemann, Simon Geisler

Such signals can be categorized as inherently directed (for example, the water flow in a pipe network) or undirected (like the diameter of a pipe).

Electrical Engineering

Explainable Graph Neural Networks Under Fire

1 code implementation · 10 Jun 2024 · Zhong Li, Simon Geisler, Yuhang Wang, Stephan Günnemann, Matthijs van Leeuwen

In this paper we demonstrate that these explanations can unfortunately not be trusted, as common GNN explanation methods turn out to be highly susceptible to adversarial perturbations.

Adversarial Attack

Spatio-Spectral Graph Neural Networks

1 code implementation · 29 May 2024 · Simon Geisler, Arthur Kosmala, Daniel Herbst, Stephan Günnemann

Motivated by these limitations, we propose Spatio-Spectral Graph Neural Networks (S$^2$GNNs) -- a new modeling paradigm for Graph Neural Networks (GNNs) that synergistically combines spatially and spectrally parametrized graph filters.

GPU · Graph Classification +4
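The core idea of combining spatially and spectrally parametrized filters can be illustrated with a toy layer that mixes a one-hop spatial aggregation with a low-pass spectral projection. This is a minimal sketch under simplifying assumptions (dense matrices, full eigendecomposition), not the S$^2$GNN architecture itself:

```python
import numpy as np

def normalized_laplacian(A):
    """Symmetric normalized graph Laplacian L = I - D^{-1/2} A D^{-1/2}."""
    d = A.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, d ** -0.5, 0.0)
    return np.eye(len(A)) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

def spatio_spectral_layer(A, X, w_spatial=0.5, w_spectral=0.5, k=2):
    """Toy mix of a one-hop spatial filter (A @ X) with a low-pass spectral
    filter that projects features onto the k lowest-frequency eigenvectors."""
    L = normalized_laplacian(A)
    _, eigvecs = np.linalg.eigh(L)        # eigenvalues in ascending order
    U_low = eigvecs[:, :k]                # low-frequency subspace
    spectral = U_low @ (U_low.T @ X)      # project features onto it
    spatial = A @ X
    return w_spatial * spatial + w_spectral * spectral
```

On a 4-cycle with constant features, the spatial term doubles the signal (degree 2) while the low-pass projection leaves it unchanged, so the output is their weighted mix.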

Attacking Large Language Models with Projected Gradient Descent

1 code implementation · 14 Feb 2024 · Simon Geisler, Tom Wollschläger, M. H. I. Abdalla, Johannes Gasteiger, Stephan Günnemann

Current LLM alignment methods are readily broken through specifically crafted adversarial prompts.
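Applying projected gradient descent to discrete prompts typically means relaxing each token to a distribution over the vocabulary, taking a gradient step, and projecting back onto the probability simplex. The sketch below shows only that generic projection step (Duchi et al.'s sorting-based simplex projection); it is an assumption-laden illustration, not the paper's attack:

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of a vector onto the probability simplex
    (sorting-based algorithm of Duchi et al.)."""
    n = len(v)
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    # Largest index rho with u_rho + (1 - css_rho) / rho > 0 (1-indexed).
    rho = np.nonzero(u + (1.0 - css) / np.arange(1, n + 1) > 0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def pgd_step(token_dist, grad, lr=0.1):
    """One projected-gradient step on a relaxed token distribution: gradient
    ascent, then projection back onto the simplex at every prompt position."""
    updated = token_dist + lr * grad
    return np.apply_along_axis(project_simplex, 1, updated)
```

After each step every row of `token_dist` remains a valid probability distribution, which is what makes the continuous relaxation usable for optimization.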

On the Adversarial Robustness of Graph Contrastive Learning Methods

no code implementations · 29 Nov 2023 · Filippo Guerranti, Zinuo Yi, Anna Starovoit, Rafiq Kamel, Simon Geisler, Stephan Günnemann

Contrastive learning (CL) has emerged as a powerful framework for learning representations of images and text in a self-supervised manner while enhancing model robustness against adversarial attacks.

Adversarial Robustness · Contrastive Learning +2

Topology-Matching Normalizing Flows for Out-of-Distribution Detection in Robot Learning

no code implementations · 11 Nov 2023 · Jianxiang Feng, JongSeok Lee, Simon Geisler, Stephan Günnemann, Rudolph Triebel

To facilitate reliable deployments of autonomous robots in the real world, Out-of-Distribution (OOD) detection capabilities are often required.

Density Estimation · object-detection +3

Adversarial Training for Graph Neural Networks: Pitfalls, Solutions, and New Directions

no code implementations · NeurIPS 2023 · Lukas Gosch, Simon Geisler, Daniel Sturm, Bertrand Charpentier, Daniel Zügner, Stephan Günnemann

Including these contributions, we demonstrate that adversarial training is a state-of-the-art defense against adversarial structure perturbations.

Graph Learning

Revisiting Robustness in Graph Machine Learning

no code implementations · 1 May 2023 · Lukas Gosch, Daniel Sturm, Simon Geisler, Stephan Günnemann

Many works show that node-level predictions of Graph Neural Networks (GNNs) are not robust to small, often termed adversarial, changes to the graph structure.

Adversarial Robustness

Transformers Meet Directed Graphs

1 code implementation · 31 Jan 2023 · Simon Geisler, Yujia Li, Daniel Mankowitz, Ali Taylan Cemgil, Stephan Günnemann, Cosmin Paduraru

Transformers were originally proposed as a sequence-to-sequence model for text but have become vital for a wide range of modalities, including images, audio, video, and undirected graphs.

graph construction · Graph Property Prediction

Are Defenses for Graph Neural Networks Robust?

no code implementations · 31 Jan 2023 · Felix Mujkanovic, Simon Geisler, Stephan Günnemann, Aleksandar Bojchevski

A cursory reading of the literature suggests that we have made a lot of progress in designing effective adversarial defenses for Graph Neural Networks (GNNs).

Randomized Message-Interception Smoothing: Gray-box Certificates for Graph Neural Networks

1 code implementation · 5 Jan 2023 · Yan Scholten, Jan Schuchardt, Simon Geisler, Aleksandar Bojchevski, Stephan Günnemann

To remedy this, we propose novel gray-box certificates that exploit the message-passing principle of GNNs: We randomly intercept messages and carefully analyze the probability that messages from adversarially controlled nodes reach their target nodes.

Adversarial Robustness
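The interception principle can be illustrated with a back-of-the-envelope calculation: if each hop of a message is intercepted independently with some probability, a message only arrives when every hop survives, and an adversary reaching the target through several edge-disjoint paths needs at least one path to survive. This toy arithmetic is an assumption (independent per-hop interception, disjoint paths), far simpler than the paper's actual smoothing distributions and certificates:

```python
def survival_prob(p_intercept, hops):
    """A message forwarded over `hops` edges survives only if no hop
    intercepts it (independent interception per hop)."""
    return (1.0 - p_intercept) ** hops

def reach_prob(p_intercept, path_lengths):
    """Probability that a message reaches the target through at least one of
    several edge-disjoint paths, given the per-hop interception probability."""
    miss_all = 1.0
    for hops in path_lengths:
        miss_all *= 1.0 - survival_prob(p_intercept, hops)
    return 1.0 - miss_all
```

For example, with interception probability 0.5, a 2-hop message survives with probability 0.25, which is the kind of quantity a gray-box certificate can exploit: messages from distant adversarial nodes are intercepted with high probability.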

On the Robustness and Anomaly Detection of Sparse Neural Networks

no code implementations · 9 Jul 2022 · Morgane Ayle, Bertrand Charpentier, John Rachwan, Daniel Zügner, Simon Geisler, Stephan Günnemann

The robustness and anomaly detection capability of neural networks are crucial topics for their safe adoption in the real world.

Anomaly Detection

Robustness of Graph Neural Networks at Scale

2 code implementations · NeurIPS 2021 · Simon Geisler, Tobias Schmidt, Hakan Şirin, Daniel Zügner, Aleksandar Bojchevski, Stephan Günnemann

Graph Neural Networks (GNNs) are increasingly important given their popularity and the diversity of applications.

Diversity

Reliable Graph Neural Networks via Robust Aggregation

1 code implementation · NeurIPS 2020 · Simon Geisler, Daniel Zügner, Stephan Günnemann

Perturbations targeting the graph structure have proven to be extremely effective in reducing the performance of Graph Neural Networks (GNNs), and traditional defenses such as adversarial training do not seem to be able to improve robustness.
