Search Results for author: Simon Geisler

Found 16 papers, 7 papers with code

Attacking Large Language Models with Projected Gradient Descent

no code implementations 14 Feb 2024 Simon Geisler, Tom Wollschläger, M. H. I. Abdalla, Johannes Gasteiger, Stephan Günnemann

Current LLM alignment methods are readily broken through specifically crafted adversarial prompts.
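
For intuition, here is a rough sketch of a PGD-style attack over a continuous relaxation of prompt tokens; the toy embedding/linear "model", the step size, the number of steps, and the target tokens are assumptions for illustration, not the paper's setup.

```python
# Minimal sketch of a PGD-style attack on a (toy) language model, assuming a
# continuous relaxation of the adversarial tokens; all components are stand-ins.
import torch

def project_simplex(x):
    """Euclidean projection of each row of x onto the probability simplex."""
    sorted_x, _ = torch.sort(x, dim=-1, descending=True)
    cumsum = sorted_x.cumsum(dim=-1) - 1.0
    ks = torch.arange(1, x.size(-1) + 1, device=x.device)
    cond = sorted_x - cumsum / ks > 0
    rho = cond.sum(dim=-1, keepdim=True)
    tau = cumsum.gather(-1, rho - 1) / rho
    return torch.clamp(x - tau, min=0.0)

vocab, dim, suffix_len = 50, 16, 8
embedding = torch.nn.Embedding(vocab, dim)          # toy "LLM" pieces
head = torch.nn.Linear(dim, vocab)
target = torch.randint(0, vocab, (suffix_len,))     # tokens the attack should elicit

# Relaxed one-hot variables for the adversarial suffix.
z = torch.full((suffix_len, vocab), 1.0 / vocab, requires_grad=True)

for step in range(100):
    emb = z @ embedding.weight                      # soft token embeddings
    logits = head(emb)
    loss = torch.nn.functional.cross_entropy(logits, target)
    loss.backward()
    with torch.no_grad():
        z -= 0.5 * z.grad                           # gradient step
        z.copy_(project_simplex(z))                 # project back onto the simplex
        z.grad.zero_()

adv_tokens = z.argmax(dim=-1)                       # discretize at the end
```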

On the Adversarial Robustness of Graph Contrastive Learning Methods

no code implementations 29 Nov 2023 Filippo Guerranti, Zinuo Yi, Anna Starovoit, Rafiq Kamel, Simon Geisler, Stephan Günnemann

Contrastive learning (CL) has emerged as a powerful framework for learning representations of images and text in a self-supervised manner while enhancing model robustness against adversarial attacks.

Adversarial Robustness · Contrastive Learning · +2
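
For reference, a minimal InfoNCE/NT-Xent-style loss, the generic objective behind such contrastive methods; the temperature and random embeddings below are placeholders, and this is not any specific graph CL method evaluated in the paper.

```python
# Generic InfoNCE/NT-Xent contrastive loss between two augmented "views".
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.5):
    """z1, z2: (batch, dim) embeddings of two views of the same samples."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature            # pairwise similarities
    labels = torch.arange(z1.size(0))             # positives sit on the diagonal
    return F.cross_entropy(logits, labels)

loss = info_nce(torch.randn(32, 64), torch.randn(32, 64))
```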

Topology-Matching Normalizing Flows for Out-of-Distribution Detection in Robot Learning

no code implementations 11 Nov 2023 Jianxiang Feng, JongSeok Lee, Simon Geisler, Stephan Günnemann, Rudolph Triebel

To facilitate reliable deployments of autonomous robots in the real world, Out-of-Distribution (OOD) detection capabilities are often required.

Density Estimation · Object Detection · +3
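
As a generic illustration of density-based OOD detection (not the paper's topology-matching flow), samples can be scored by their log-likelihood under a density model fit on in-distribution data and flagged below a low quantile; a Gaussian stands in for the normalizing flow here, and the 5% threshold is an assumption.

```python
# Sketch of the generic likelihood-based OOD recipe with a Gaussian stand-in
# for the trained density model.
import torch
from torch.distributions import MultivariateNormal

train_feats = torch.randn(1000, 8)                        # in-distribution features
density = MultivariateNormal(train_feats.mean(0), torch.diag(train_feats.var(0)))

threshold = torch.quantile(density.log_prob(train_feats), 0.05)

def is_ood(x):
    return density.log_prob(x) < threshold                # low likelihood => OOD

print(is_ood(torch.randn(5, 8) + 10.0))                   # far-away samples are flagged
```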

Adversarial Training for Graph Neural Networks: Pitfalls, Solutions, and New Directions

no code implementations NeurIPS 2023 Lukas Gosch, Simon Geisler, Daniel Sturm, Bertrand Charpentier, Daniel Zügner, Stephan Günnemann

Including these contributions, we demonstrate that adversarial training is a state-of-the-art defense against adversarial structure perturbations.

Graph Learning
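
A minimal sketch of adversarial training against structure perturbations, assuming a continuous relaxation of the adjacency matrix and a toy one-layer GCN; the budgets, step sizes, and inner/outer loop counts are arbitrary choices, not the paper's procedure.

```python
# Adversarial training sketch: inner gradient ascent on relaxed edge weights,
# outer gradient descent on the model parameters.
import torch

n, d, c = 20, 8, 3
X = torch.randn(n, d)
A = (torch.rand(n, n) < 0.1).float(); A = ((A + A.t()) > 0).float()
y = torch.randint(0, c, (n,))
W = torch.nn.Parameter(torch.randn(d, c) * 0.1)
opt = torch.optim.Adam([W], lr=0.01)

def gcn(adj, x):
    adj = adj + torch.eye(n)                       # self-loops
    deg = adj.sum(1, keepdim=True).clamp(min=1.0)
    return (adj / deg) @ x @ W                     # mean aggregation + linear layer

for epoch in range(50):
    delta = torch.zeros_like(A, requires_grad=True)
    for _ in range(5):                             # inner maximization over edge weights
        loss = torch.nn.functional.cross_entropy(gcn((A + delta).clamp(0, 1), X), y)
        g, = torch.autograd.grad(loss, delta)
        delta = (delta + 0.1 * g.sign()).clamp(-0.2, 0.2).detach().requires_grad_(True)
    opt.zero_grad()                                # outer minimization on perturbed graph
    torch.nn.functional.cross_entropy(gcn((A + delta).clamp(0, 1), X), y).backward()
    opt.step()
```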

Revisiting Robustness in Graph Machine Learning

no code implementations 1 May 2023 Lukas Gosch, Daniel Sturm, Simon Geisler, Stephan Günnemann

Many works show that node-level predictions of Graph Neural Networks (GNNs) are not robust to small, often termed adversarial, changes to the graph structure.

Adversarial Robustness

Are Defenses for Graph Neural Networks Robust?

no code implementations 31 Jan 2023 Felix Mujkanovic, Simon Geisler, Stephan Günnemann, Aleksandar Bojchevski

A cursory reading of the literature suggests that we have made a lot of progress in designing effective adversarial defenses for Graph Neural Networks (GNNs).

Transformers Meet Directed Graphs

1 code implementation 31 Jan 2023 Simon Geisler, Yujia Li, Daniel Mankowitz, Ali Taylan Cemgil, Stephan Günnemann, Cosmin Paduraru

Transformers were originally proposed as a sequence-to-sequence model for text but have become vital for a wide range of modalities, including images, audio, video, and undirected graphs.

Graph Construction · Graph Property Prediction
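
One common way to make attention direction-aware is to derive positional encodings from the magnetic Laplacian of the directed graph; the sketch below (toy graph, potential q chosen arbitrarily) illustrates that idea and may differ from the paper's exact construction.

```python
# Direction-aware positional encodings from the magnetic Laplacian of a
# small directed graph; real and imaginary parts of the eigenvectors serve
# as per-node features.
import torch

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]            # a small directed cycle
n, q = 4, 0.25
A = torch.zeros(n, n)
for u, v in edges:
    A[u, v] = 1.0

A_sym = (A + A.t()) / 2                              # symmetrized edge weights
theta = 2 * torch.pi * q * (A - A.t())               # phase encodes edge direction
H = A_sym * torch.exp(1j * theta)                    # Hermitian "magnetic" adjacency
L = torch.diag(A_sym.sum(1)).to(torch.cfloat) - H    # magnetic Laplacian

eigvals, eigvecs = torch.linalg.eigh(L)              # Hermitian eigendecomposition
pos_enc = torch.cat([eigvecs.real, eigvecs.imag], dim=-1)
print(pos_enc.shape)                                 # (num_nodes, 2 * num_nodes)
```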

Randomized Message-Interception Smoothing: Gray-box Certificates for Graph Neural Networks

1 code implementation 5 Jan 2023 Yan Scholten, Jan Schuchardt, Simon Geisler, Aleksandar Bojchevski, Stephan Günnemann

To remedy this, we propose novel gray-box certificates that exploit the message-passing principle of GNNs: We randomly intercept messages and carefully analyze the probability that messages from adversarially controlled nodes reach their target nodes.

Adversarial Robustness
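
A minimal sketch of the interception idea: drop each message independently with some probability, run the model many times, and take a majority vote over the predictions. The toy GNN, drop probability, and sample count are assumptions; the paper's certificates, which analyze the probability that adversarial messages get through, are not shown here.

```python
# Randomized message interception with a majority vote over many samples.
import torch

n, d, c, p = 10, 8, 3, 0.5
X = torch.randn(n, d)
A = (torch.rand(n, n) < 0.3).float()
W = torch.randn(d, c)

def smoothed_predict(num_samples=200):
    votes = torch.zeros(n, c)
    for _ in range(num_samples):
        keep = (torch.rand_like(A) > p).float()     # intercept messages on dropped edges
        adj = A * keep + torch.eye(n)               # each node always keeps its own message
        h = (adj / adj.sum(1, keepdim=True)) @ X @ W
        votes += torch.nn.functional.one_hot(h.argmax(1), c).float()
    return votes.argmax(1)                          # majority vote = smoothed prediction

print(smoothed_predict())
```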

On the Robustness and Anomaly Detection of Sparse Neural Networks

no code implementations 9 Jul 2022 Morgane Ayle, Bertrand Charpentier, John Rachwan, Daniel Zügner, Simon Geisler, Stephan Günnemann

The robustness and anomaly detection capabilities of neural networks are crucial for their safe adoption in the real world.

Anomaly Detection

Robustness of Graph Neural Networks at Scale

2 code implementations NeurIPS 2021 Simon Geisler, Tobias Schmidt, Hakan Şirin, Daniel Zügner, Aleksandar Bojchevski, Stephan Günnemann

Graph Neural Networks (GNNs) are increasingly important given their popularity and the diversity of applications.

Reliable Graph Neural Networks via Robust Aggregation

1 code implementation NeurIPS 2020 Simon Geisler, Daniel Zügner, Stephan Günnemann

Perturbations targeting the graph structure have proven to be extremely effective in reducing the performance of Graph Neural Networks (GNNs), and traditional defenses such as adversarial training do not appear to improve robustness.
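
A minimal sketch of swapping the mean aggregation in message passing for a robust aggregate; a dimension-wise median over neighbor features is used as a simple stand-in rather than the paper's differentiable, weighted aggregation, and the toy graph below is an assumption.

```python
# Robust aggregation sketch: replace the mean over neighbor messages with a
# dimension-wise median, which tolerates a few outlier (adversarial) messages.
import torch

n, d = 8, 4
X = torch.randn(n, d)
A = (torch.rand(n, n) < 0.4).float() + torch.eye(n)    # adjacency with self-loops

def median_aggregate(adj, x):
    out = torch.empty_like(x)
    for v in range(adj.size(0)):
        neighbors = adj[v].nonzero(as_tuple=True)[0]
        out[v] = x[neighbors].median(dim=0).values      # robust neighborhood summary
    return out

h_mean = (A / A.sum(1, keepdim=True)) @ X               # standard mean aggregation
h_robust = median_aggregate(A, X)                        # robust alternative
```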
