no code implementations • 6 Dec 2023 • Claudio Zeni, Robert Pinsler, Daniel Zügner, Andrew Fowler, Matthew Horton, Xiang Fu, Sasha Shysheya, Jonathan Crabbé, Lixin Sun, Jake Smith, Bichlien Nguyen, Hannes Schulz, Sarah Lewis, Chin-wei Huang, Ziheng Lu, Yichi Zhou, Han Yang, Hongxia Hao, Jielan Li, Ryota Tomioka, Tian Xie
We further introduce adapter modules to enable fine-tuning towards any given property constraints with a labeled dataset.
no code implementations • NeurIPS 2023 • Lukas Gosch, Simon Geisler, Daniel Sturm, Bertrand Charpentier, Daniel Zügner, Stephan Günnemann
Including these contributions, we demonstrate that adversarial training is a state-of-the-art defense against adversarial structure perturbations.
no code implementations • 2 Jan 2023 • Morgane Ayle, Jan Schuchardt, Lukas Gosch, Daniel Zügner, Stephan Günnemann
We propose to solve this issue by training graph neural networks on disjoint subgraphs of a given training graph.
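The excerpt above only names the idea of training on disjoint subgraphs. A minimal sketch of what that could look like, assuming a NumPy toy GCN layer (the partitioning scheme, layer, and all names here are illustrative, not the paper's implementation):

```python
import numpy as np

def gcn_layer(adj, feats, weight):
    """One GCN propagation step: symmetric-normalized adjacency times features."""
    adj_hat = adj + np.eye(adj.shape[0])           # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(adj_hat.sum(axis=1)))
    return d_inv_sqrt @ adj_hat @ d_inv_sqrt @ feats @ weight

def induced_subgraphs(adj, feats, parts):
    """Split a graph into disjoint induced subgraphs given node partitions."""
    for nodes in parts:
        idx = np.array(nodes)
        yield adj[np.ix_(idx, idx)], feats[idx]

# Toy graph: 6 nodes, two disconnected halves used as the disjoint partition.
adj = np.zeros((6, 6))
for u, v in [(0, 1), (1, 2), (3, 4), (4, 5)]:
    adj[u, v] = adj[v, u] = 1.0
feats = np.eye(6)                                  # one-hot node features
weight = np.full((6, 2), 0.1)                      # shared layer weights

for sub_adj, sub_feats in induced_subgraphs(adj, feats, [[0, 1, 2], [3, 4, 5]]):
    out = gcn_layer(sub_adj, sub_feats, weight)    # forward pass per subgraph
    print(out.shape)
```

Each forward/backward pass then only ever sees one subgraph, which is the property the excerpt's approach relies on.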
no code implementations • 9 Jul 2022 • Morgane Ayle, Bertrand Charpentier, John Rachwan, Daniel Zügner, Simon Geisler, Stephan Günnemann
The robustness and anomaly detection capability of neural networks are crucial topics for their safe adoption in the real world.
1 code implementation • 21 Jun 2022 • John Rachwan, Daniel Zügner, Bertrand Charpentier, Simon Geisler, Morgane Ayle, Stephan Günnemann
Pruning, the task of sparsifying deep neural networks, has received increasing attention recently.
1 code implementation • 29 Dec 2021 • François-Xavier Aubet, Daniel Zügner, Jan Gasthaus
Identifying the anomalous points can be a goal on its own (anomaly detection), or a means of improving the performance of other time series tasks (e.g., forecasting).
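As a simple baseline for the anomaly detection task described above, a rolling z-score detector flags points that deviate strongly from recent history (this sketch is generic and not the paper's method; window size and threshold are illustrative choices):

```python
import numpy as np

def rolling_zscore_anomalies(series, window=10, threshold=3.0):
    """Flag points whose deviation from the rolling mean exceeds
    `threshold` rolling standard deviations of the preceding window."""
    flags = np.zeros(len(series), dtype=bool)
    for t in range(window, len(series)):
        hist = series[t - window:t]
        mu, sigma = hist.mean(), hist.std()
        if sigma > 0 and abs(series[t] - mu) > threshold * sigma:
            flags[t] = True
    return flags

rng = np.random.default_rng(0)
signal = rng.normal(0.0, 1.0, 200)
signal[120] += 12.0                       # inject one obvious outlier
flags = rolling_zscore_anomalies(signal, window=20)
print(np.flatnonzero(flags))              # index 120 should be among the flags
```

Removing or imputing the flagged points before fitting a forecaster is one way such detection feeds into the downstream tasks the excerpt mentions.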
2 code implementations • NeurIPS 2021 • Simon Geisler, Tobias Schmidt, Hakan Şirin, Daniel Zügner, Aleksandar Bojchevski, Stephan Günnemann
Graph Neural Networks (GNNs) are increasingly important given their popularity and the diversity of applications.
2 code implementations • NeurIPS 2021 • Maximilian Stadler, Bertrand Charpentier, Simon Geisler, Daniel Zügner, Stephan Günnemann
In our experiments, GPN outperforms existing approaches to uncertainty estimation.
no code implementations • ICLR 2022 • Daniel Zügner, Bertrand Charpentier, Morgane Ayle, Sascha Geringer, Stephan Günnemann
We propose a novel probabilistic model over hierarchies on graphs obtained by continuous relaxation of tree-based hierarchies.
no code implementations • 10 Sep 2021 • Daniel Zügner, François-Xavier Aubet, Victor Garcia Satorras, Tim Januschowski, Stephan Günnemann, Jan Gasthaus
We study a recent class of models which uses graph neural networks (GNNs) to improve forecasting in multivariate time series.
1 code implementation • 3 Jul 2021 • Sven Elflein, Bertrand Charpentier, Daniel Zügner, Stephan Günnemann
Several density estimation methods have been shown to fail to detect out-of-distribution (OOD) samples by assigning higher likelihoods to anomalous data.
1 code implementation • ICLR 2022 • Bertrand Charpentier, Oliver Borchert, Daniel Zügner, Simon Geisler, Stephan Günnemann
Uncertainty awareness is crucial to develop reliable machine learning models.
1 code implementation • ICLR 2021 • Daniel Zügner, Tobias Kirschstein, Michele Catasta, Jure Leskovec, Stephan Günnemann
Source code (Context) and its parsed abstract syntax tree (AST; Structure) are two complementary representations of the same computer program.
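The Context/Structure duality above can be seen directly with Python's standard `ast` module: the same program exists both as source text and as a parsed tree (this is just an illustration of the two representations, not the paper's model):

```python
import ast

source = "def add(a, b):\n    return a + b\n"

tree = ast.parse(source)                     # Structure: the parsed AST
func = tree.body[0]
print(type(func).__name__)                   # FunctionDef
print([arg.arg for arg in func.args.args])   # ['a', 'b']

# Round-trip: the AST can be rendered back to (normalized) source text.
print(ast.unparse(tree))
```

A model can consume either view; the paper's premise is that the two are complementary signals about the same program.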
1 code implementation • NeurIPS 2020 • Simon Geisler, Daniel Zügner, Stephan Günnemann
Perturbations targeting the graph structure have proven to be extremely effective in reducing the performance of Graph Neural Networks (GNNs), and traditional defenses such as adversarial training do not seem to be able to improve robustness.
1 code implementation • 28 Oct 2020 • Anna-Kathrin Kopetzki, Bertrand Charpentier, Daniel Zügner, Sandhya Giri, Stephan Günnemann
Dirichlet-based uncertainty (DBU) models are a recent and promising class of uncertainty-aware models.
1 code implementation • NeurIPS 2020 • Bertrand Charpentier, Daniel Zügner, Stephan Günnemann
The posterior distributions learned by PostNet accurately reflect uncertainty for in- and out-of-distribution data -- without requiring access to OOD data at training time.
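The mechanism behind Dirichlet-based uncertainty can be sketched with pseudo-counts: high per-class evidence yields a concentrated posterior, while near-zero evidence (as for OOD inputs) collapses to the prior. This is a minimal illustration of the general idea, not PostNet's actual flow-based evidence model:

```python
import numpy as np

def dirichlet_uncertainty(evidence, prior=1.0):
    """Turn per-class evidence (pseudo-counts) into a Dirichlet posterior.

    Returns the posterior-mean class probabilities and a simple
    epistemic-uncertainty score that shrinks as evidence grows.
    """
    alpha = prior + np.asarray(evidence, dtype=float)
    probs = alpha / alpha.sum()            # posterior mean prediction
    epistemic = len(alpha) / alpha.sum()   # K / total concentration
    return probs, epistemic

# In-distribution: lots of evidence for class 0.
p_in, u_in = dirichlet_uncertainty([50.0, 1.0, 1.0])
# OOD-like input: almost no evidence for any class.
p_out, u_out = dirichlet_uncertainty([0.1, 0.1, 0.1])
print(u_in < u_out)    # True: low evidence means high epistemic uncertainty
```

This is why such models can flag OOD data without ever seeing OOD samples at training time, as the excerpt notes.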
1 code implementation • 22 Nov 2019 • Alexander Ziller, Julius Hansjakob, Vitalii Rusinov, Daniel Zügner, Peter Vogel, Stephan Günnemann
We release a realistic, diverse, and challenging dataset for object detection on images.
1 code implementation • 28 Jun 2019 • Daniel Zügner, Stephan Günnemann
Recent works show that Graph Neural Networks (GNNs) are highly non-robust with respect to adversarial attacks on both the graph structure and the node attributes, making their outcomes unreliable.
Ranked #21 on Node Classification on Pubmed
1 code implementation • ICLR 2019 • Daniel Zügner, Stephan Günnemann
Deep learning models for graphs have advanced the state of the art on many tasks.
1 code implementation • 21 May 2018 • Daniel Zügner, Amir Akbarnejad, Stephan Günnemann
Moreover, our attacks are transferable: the learned attacks generalize to other state-of-the-art node classification models and unsupervised approaches, and remain successful even when only limited knowledge about the graph is available.
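The flavor of such structure attacks can be sketched as a greedy edge flip against a linearized GCN surrogate (this toy brute-force search is illustrative only and far simpler than the paper's attack):

```python
import numpy as np

def gcn_logits(adj, feats, weight):
    """Linearized two-layer GCN surrogate: A_hat @ A_hat @ X @ W."""
    a_hat = adj + np.eye(adj.shape[0])
    d = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    prop = d @ a_hat @ d
    return prop @ prop @ feats @ weight

def best_edge_flip(adj, feats, weight, target, label):
    """Greedy structure attack: flip the single edge that most reduces
    the target node's score for its true label."""
    best_score, best_flip = None, None
    n = adj.shape[0]
    for u in range(n):
        for v in range(u + 1, n):
            pert = adj.copy()
            pert[u, v] = pert[v, u] = 1.0 - pert[u, v]   # toggle the edge
            score = gcn_logits(pert, feats, weight)[target, label]
            if best_score is None or score < best_score:
                best_score, best_flip = score, (u, v)
    return best_flip

rng = np.random.default_rng(1)
adj = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
feats = rng.normal(size=(4, 3))
weight = rng.normal(size=(3, 2))
flip = best_edge_flip(adj, feats, weight, target=0, label=0)
print(flip)
```

Transferability, as the excerpt notes, means a flip found against such a surrogate often also degrades other, unseen models.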
2 code implementations • ICML 2018 • Aleksandar Bojchevski, Oleksandr Shchur, Daniel Zügner, Stephan Günnemann
NetGAN is able to produce graphs that exhibit well-known network patterns without explicitly specifying them in the model definition.
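NetGAN learns to generate random walks; a graph is then assembled from walk statistics. The assembly step can be sketched as counting consecutive node pairs and keeping the highest-scoring ones as edges (a simplified illustration with hand-written walks, not the paper's exact procedure):

```python
import numpy as np

def graph_from_walks(walks, n_nodes, n_edges):
    """Score node pairs by how often they appear consecutively in walks,
    then keep the top-scoring pairs as edges of an undirected graph."""
    counts = np.zeros((n_nodes, n_nodes))
    for walk in walks:
        for u, v in zip(walk[:-1], walk[1:]):
            counts[u, v] += 1
            counts[v, u] += 1
    adj = np.zeros_like(counts)
    iu = np.triu_indices(n_nodes, k=1)               # upper triangle only
    order = np.argsort(counts[iu])[::-1][:n_edges]   # highest counts first
    for k in order:
        u, v = iu[0][k], iu[1][k]
        if counts[u, v] > 0:                         # never add unseen pairs
            adj[u, v] = adj[v, u] = 1.0
    return adj

walks = [[0, 1, 2, 1], [2, 1, 0], [3, 0, 1]]
adj = graph_from_walks(walks, n_nodes=4, n_edges=3)
print(int(adj.sum()) // 2)   # 3 edges kept: (0,1), (1,2), (0,3)
```

Because edges are induced by walk statistics rather than listed explicitly, patterns of the training network can emerge without being hard-coded, which is the property the excerpt highlights.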
no code implementations • ICLR 2018 • Aleksandar Bojchevski, Oleksandr Shchur, Daniel Zügner, Stephan Günnemann
Moreover, GraphGAN learns a semantic mapping from the latent input space to the generated graph's properties.