no code implementations • 4 Nov 2024 • Atakan Seyitoğlu, Aleksei Kuvshinov, Leo Schwinn, Stephan Günnemann

We propose activation steering as a method for exact information retrieval from unlearned LLMs.

no code implementations • 29 Oct 2024 • David Lüdke, Enric Rabasseda Raventós, Marcel Kollovieh, Stephan Günnemann

Point processes model the distribution of random point sets in mathematical spaces, such as spatial and temporal domains, with applications in fields like seismology, neuroscience, and economics.

no code implementations • 22 Oct 2024 • Dominik Fuchsgruber, Tim Poštuvan, Stephan Günnemann, Simon Geisler

Such signals can be categorized as inherently directed, for example, the water flow in a pipe network, and undirected, like the diameter of a pipe.

no code implementations • 13 Oct 2024 • Yan Scholten, Stephan Günnemann

To ensure reliability under calibration poisoning, we construct multiple prediction sets, each calibrated on distinct subsets of the calibration data.
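The idea of calibrating several conformal prediction sets on disjoint data splits and aggregating them by majority vote can be sketched as follows. This is a generic illustration, not the paper's exact construction: the function names and the 1 − softmax nonconformity score are assumptions.

```python
import math
import random

def conformal_quantile(scores, alpha):
    """Finite-sample (1 - alpha) quantile of calibration nonconformity scores."""
    n = len(scores)
    k = min(n - 1, math.ceil((n + 1) * (1 - alpha)) - 1)
    return sorted(scores)[k]

def prediction_set(softmax, threshold):
    """All labels whose nonconformity (1 - predicted probability) is within the threshold."""
    return {y for y, p in enumerate(softmax) if 1.0 - p <= threshold}

def majority_vote_set(softmax, calib_scores, n_splits, alpha):
    """Calibrate one prediction set per disjoint calibration split,
    then keep only labels contained in a majority of the sets, so a
    poisoned split cannot single-handedly corrupt the output."""
    scores = list(calib_scores)
    random.shuffle(scores)
    splits = [scores[i::n_splits] for i in range(n_splits)]
    sets = [prediction_set(softmax, conformal_quantile(s, alpha)) for s in splits]
    candidates = set().union(*sets)
    return {y for y in candidates if sum(y in s for s in sets) > n_splits / 2}
```

A label survives only if more than half of the independently calibrated sets contain it, which limits the influence of any single poisoned calibration subset.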

no code implementations • 10 Oct 2024 • Nicholas Gao, Eike Eberhard, Stephan Günnemann

The accuracy of density functional theory hinges on the approximation of non-local contributions to the exchange-correlation (XC) functional.

no code implementations • 4 Oct 2024 • Yan Scholten, Stephan Günnemann, Leo Schwinn

However, we find that deterministic evaluations fail to capture the whole output distribution of a model, yielding inaccurate estimations of model capabilities.

no code implementations • 3 Oct 2024 • Marcel Kollovieh, Marten Lienen, David Lüdke, Leo Schwinn, Stephan Günnemann

Recent advancements in generative modeling, particularly diffusion models, have opened new directions for time series modeling, achieving state-of-the-art performance in forecasting and synthesis.

no code implementations • 2 Aug 2024 • Aman Saxena, Tom Wollschläger, Nicola Franco, Jeanette Miriam Lorenz, Stephan Günnemann

We show that for such schemes, the addition of suitable noise channels is equivalent to evaluating the mean value of the noiseless classifier at the smoothed data, akin to Randomized Smoothing from classical machine learning.

no code implementations • 1 Aug 2024 • Tom Wollschläger, Aman Saxena, Nicola Franco, Jeanette Miriam Lorenz, Stephan Günnemann

Breakthroughs in machine learning (ML) and advances in quantum computing (QC) drive the interdisciplinary field of quantum machine learning to new levels.

no code implementations • 16 Jul 2024 • Philipp Foth, Lukas Gosch, Simon Geisler, Leo Schwinn, Stephan Günnemann

Existing studies have shown that Graph Neural Networks (GNNs) are vulnerable to adversarial attacks.

1 code implementation • 15 Jul 2024 • Lukas Gosch, Mahalakshmi Sabanayagam, Debarghya Ghoshdastidar, Stephan Günnemann

Generalization of machine learning models can be severely compromised by data poisoning, where adversarial changes are applied to the training data.

no code implementations • 17 Jun 2024 • Abdullah Saydemir, Marten Lienen, Stephan Günnemann

A recent study in turbulent flow simulation demonstrated the potential of generative diffusion models for fast 3D surrogate modeling.

no code implementations • 15 Jun 2024 • Mohamed Amine Ketata, Nicholas Gao, Johanna Sommer, Tom Wollschläger, Stephan Günnemann

We introduce a new framework for molecular graph generation with 3D molecular generative models.

no code implementations • 12 Jun 2024 • Tom Wollschläger, Niklas Kemper, Leon Hetzel, Johanna Sommer, Stephan Günnemann

Although recent advances in higher-order Graph Neural Networks (GNNs) improve the theoretical expressiveness and molecular property predictive performance, they often fall short of the empirical performance of models that explicitly use fragment information as inductive bias.

1 code implementation • 10 Jun 2024 • Zhong Li, Simon Geisler, Yuhang Wang, Stephan Günnemann, Matthijs van Leeuwen

In this paper we demonstrate that these explanations can unfortunately not be trusted, as common GNN explanation methods turn out to be highly susceptible to adversarial perturbations.

no code implementations • 6 Jun 2024 • Dominik Fuchsgruber, Tom Wollschläger, Stephan Günnemann

We propose GEBM, an energy-based model (EBM) that provides high-quality uncertainty estimates by aggregating energy at different structural levels that naturally arise from graph diffusion.

no code implementations • 29 May 2024 • Simon Geisler, Arthur Kosmala, Daniel Herbst, Stephan Günnemann

Motivated by these limitations, we propose Spatio-Spectral Graph Neural Networks (S$^2$GNNs) -- a new modeling paradigm for Graph Neural Networks (GNNs) that synergistically combines spatially and spectrally parametrized graph filters.

Ranked #1 on Graph Classification on Peptides-func

no code implementations • 28 May 2024 • Leon Götz, Marcel Kollovieh, Stephan Günnemann, Leo Schwinn

Transformer architectures have shown promising results in time series processing.

1 code implementation • 24 May 2024 • Sophie Xhonneux, Alessandro Sordoni, Stephan Günnemann, Gauthier Gidel, Leo Schwinn

We propose a fast adversarial training algorithm (C-AdvUL) composed of two losses: the first makes the model robust on continuous embedding attacks computed on an adversarial behaviour dataset; the second ensures the usefulness of the final model by fine-tuning on utility data.

1 code implementation • 23 May 2024 • Nicholas Gao, Stephan Günnemann

Neural wave functions have achieved unprecedented accuracy in approximating the ground state of many-electron systems, though at a high computational cost.

no code implementations • 18 May 2024 • Sebastian Schmidt, Leonard Schenk, Leo Schwinn, Stephan Günnemann

When applying deep learning models in open-world scenarios, active learning (AL) strategies are crucial for identifying label candidates from a nearly infinite amount of unlabeled data.

no code implementations • 2 May 2024 • Dominik Fuchsgruber, Tom Wollschläger, Bertrand Charpentier, Antonio Oroz, Stephan Günnemann

Uncertainty Sampling is an Active Learning strategy that aims to improve the data efficiency of machine learning models by iteratively acquiring labels of data points with the highest uncertainty.
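The acquisition step described above can be sketched with a standard entropy-based criterion. This is a minimal illustration of uncertainty sampling in general, not the paper's specific estimator; the function names are assumptions.

```python
import math

def entropy(probs):
    """Shannon entropy of a predictive distribution (in nats)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def uncertainty_sampling(pool_probs, batch_size):
    """Return the indices of the unlabeled pool points whose model
    predictions have the highest entropy, i.e. the most uncertain ones."""
    ranked = sorted(range(len(pool_probs)),
                    key=lambda i: entropy(pool_probs[i]), reverse=True)
    return ranked[:batch_size]
```

A near-uniform prediction (high entropy) is queried before a confident one, which is exactly the iterative label-acquisition loop the entry describes.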

no code implementations • 8 Mar 2024 • Nicholas Gao, Stephan Günnemann

Recent neural networks demonstrated impressively accurate approximations of electronic ground-state wave functions.

no code implementations • 7 Mar 2024 • Jan Schuchardt, Mihail Stoian, Arthur Kosmala, Stephan Günnemann

Amplification by subsampling is one of the main primitives in machine learning with differential privacy (DP): Training a model on random batches instead of complete datasets results in stronger privacy.
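For intuition, the classical amplification bound for Poisson subsampling can be computed directly. This is the standard textbook bound, not the paper's tighter mechanism-specific analysis.

```python
import math

def amplified_epsilon(eps, q):
    """Classical amplification-by-subsampling bound: running an eps-DP
    mechanism on a Poisson-subsampled batch (each example included
    independently with rate q) satisfies log(1 + q * (e^eps - 1))-DP."""
    return math.log1p(q * math.expm1(eps))
```

At q = 1 (no subsampling) the bound reduces to eps; for small sampling rates the effective privacy cost shrinks roughly linearly in q.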

no code implementations • 3 Mar 2024 • Xun Wang, John Rachwan, Stephan Günnemann, Bertrand Charpentier

However, diverse parameter-coupling patterns such as residual connections and group convolutions, the variety of deep learning frameworks, and the different stages at which pruning can be performed make existing pruning methods less adaptable to different architectures, frameworks, and pruning criteria.

1 code implementation • 25 Feb 2024 • Rayen Dhahri, Alexander Immer, Bertrand Charpentier, Stephan Günnemann, Vincent Fortuin

Neural network sparsification is a promising avenue to save computational time and memory costs, especially in an age where many successful AI models are becoming too large to naïvely deploy on consumer hardware.

no code implementations • 14 Feb 2024 • Simon Geisler, Tom Wollschläger, M. H. I. Abdalla, Johannes Gasteiger, Stephan Günnemann

Current LLM alignment methods are readily broken through specifically crafted adversarial prompts.

no code implementations • 9 Dec 2023 • Ege Erdogan, Simon Geisler, Stephan Günnemann

It is well-known that deep learning models are vulnerable to small input perturbations.

1 code implementation • 8 Dec 2023 • Michael Plainer, Hannes Stärk, Charlotte Bunne, Stephan Günnemann

Sampling all possible transition paths between two 3D states of a molecular system has various applications ranging from catalyst design to drug discovery.

no code implementations • NeurIPS 2023 • Jan Schuchardt, Yan Scholten, Stephan Günnemann

For the first time, we propose a sound notion of adversarial robustness that accounts for task equivariance.

no code implementations • 29 Nov 2023 • Filippo Guerranti, Zinuo Yi, Anna Starovoit, Rafiq Kamel, Simon Geisler, Stephan Günnemann

Contrastive learning (CL) has emerged as a powerful framework for learning representations of images and text in a self-supervised manner while enhancing model robustness against adversarial attacks.

no code implementations • NeurIPS 2023 • David Lüdke, Marin Biloš, Oleksandr Shchur, Marten Lienen, Stephan Günnemann

Autoregressive neural networks within the temporal point process (TPP) framework have become the standard for modeling continuous-time event data.

1 code implementation • 30 Oct 2023 • Leo Schwinn, David Dobre, Stephan Günnemann, Gauthier Gidel

Here, one major impediment has been the overestimation of the robustness of new defense approaches due to faulty defense evaluations.

no code implementations • NeurIPS 2023 • Yan Scholten, Jan Schuchardt, Aleksandar Bojchevski, Stephan Günnemann

Randomized smoothing is a powerful framework for making models provably robust against small changes to their inputs by guaranteeing robustness of the majority vote when randomly adding noise before classification.
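The majority-vote construction at the heart of randomized smoothing can be sketched as follows. This shows only the smoothed prediction, not the certificate computation, and the base-classifier interface is an assumption.

```python
import random
from collections import Counter

def smoothed_classify(base_classify, x, sigma, n_samples):
    """Predict with the smoothed classifier: add isotropic Gaussian
    noise to the input n_samples times and return the majority vote
    of the base classifier over the noisy copies."""
    votes = Counter()
    for _ in range(n_samples):
        noisy = [xi + random.gauss(0.0, sigma) for xi in x]
        votes[base_classify(noisy)] += 1
    return votes.most_common(1)[0][0]
```

Robustness certificates then bound how far the input can move before the majority class can change, as a function of the vote margin and sigma.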

no code implementations • 6 Oct 2023 • Marcel Kollovieh, Lukas Gosch, Yan Scholten, Marten Lienen, Stephan Günnemann

In this work, we introduce Score-Based Adversarial Generation (ScoreAG), a novel framework that leverages the advancements in score-based generative models to generate adversarial examples beyond $\ell_p$-norm constraints, so-called unrestricted adversarial examples, overcoming their limitations.

no code implementations • 11 Sep 2023 • Sebastian Schmidt, Stephan Günnemann

We exploit the temporal properties of such image streams and propose the novel temporal predicted loss (TPL) method.

1 code implementation • 16 Aug 2023 • Francesco Campi, Lukas Gosch, Tom Wollschläger, Yan Scholten, Stephan Günnemann

We perform the first adversarial robustness study of Graph Neural Networks (GNNs) that are provably more powerful than traditional Message Passing Neural Networks (MPNNs).

no code implementations • 9 Aug 2023 • Armin Moin, Atta Badii, Stephan Günnemann, Moharram Challenger

However, the stakeholders with data science and Machine Learning (ML) related concerns, such as data scientists and data engineers, are yet to be included in existing architecture frameworks.

1 code implementation • 17 Jul 2023 • Xuan Zhang, Limei Wang, Jacob Helwig, Youzhi Luo, Cong Fu, Yaochen Xie, Meng Liu, Yuchao Lin, Zhao Xu, Keqiang Yan, Keir Adams, Maurice Weiler, Xiner Li, Tianfan Fu, Yucheng Wang, Haiyang Yu, Yuqing Xie, Xiang Fu, Alex Strasser, Shenglong Xu, Yi Liu, Yuanqi Du, Alexandra Saxton, Hongyi Ling, Hannah Lawrence, Hannes Stärk, Shurui Gui, Carl Edwards, Nicholas Gao, Adriana Ladera, Tailin Wu, Elyssa F. Hofgard, Aria Mansouri Tehrani, Rui Wang, Ameya Daigavane, Montgomery Bohde, Jerry Kurtin, Qian Huang, Tuong Phung, Minkai Xu, Chaitanya K. Joshi, Simon V. Mathis, Kamyar Azizzadenesheli, Ada Fang, Alán Aspuru-Guzik, Erik Bekkers, Michael Bronstein, Marinka Zitnik, Anima Anandkumar, Stefano Ermon, Pietro Liò, Rose Yu, Stephan Günnemann, Jure Leskovec, Heng Ji, Jimeng Sun, Regina Barzilay, Tommi Jaakkola, Connor W. Coley, Xiaoning Qian, Xiaofeng Qian, Tess Smidt, Shuiwang Ji

Advances in artificial intelligence (AI) are fueling a new paradigm of discoveries in natural sciences.

1 code implementation • 10 Jul 2023 • Franziska Schwaiger, Andrea Matic, Karsten Roscher, Stephan Günnemann

The ability to detect learned objects regardless of their appearance is crucial for autonomous systems in real-world applications.

2 code implementations • 3 Jul 2023 • Jianxiang Feng, Matan Atad, Ismael Rodríguez, Maximilian Durner, Stephan Günnemann, Rudolph Triebel

Machine Learning (ML) models in Robotic Assembly Sequence Planning (RASP) need to be introspective on the predicted solutions, i.e., whether they are feasible or not, to circumvent potential efficiency degradation.

no code implementations • NeurIPS 2023 • Lukas Gosch, Simon Geisler, Daniel Sturm, Bertrand Charpentier, Daniel Zügner, Stephan Günnemann

Including these contributions, we demonstrate that adversarial training is a state-of-the-art defense against adversarial structure perturbations.

1 code implementation • 20 Jun 2023 • Tom Wollschläger, Nicholas Gao, Bertrand Charpentier, Mohamed Amine Ketata, Stephan Günnemann

Graph Neural Networks (GNNs) are promising surrogates for quantum mechanical calculations as they establish unprecedented low errors on collections of molecular dynamics (MD) trajectories.

1 code implementation • 30 May 2023 • Leon Hetzel, Johanna Sommer, Bastian Rieck, Fabian Theis, Stephan Günnemann

Recent advances in machine learning for molecules show great potential for facilitating drug discovery through in silico predictions.

1 code implementation • 29 May 2023 • Marten Lienen, David Lüdke, Jan Hansen-Palmus, Stephan Günnemann

On this dataset, we show that our generative model captures the distribution of turbulent flows caused by unseen objects and generates high-quality, realistic samples amenable for downstream applications without access to any initial state.

1 code implementation • 17 May 2023 • Emanuele Rossi, Bertrand Charpentier, Francesco Di Giovanni, Fabrizio Frasca, Stephan Günnemann, Michael Bronstein

Graph Neural Networks (GNNs) have become the de-facto standard tool for modeling relational data.

Ranked #1 on Node Classification on Non-Homophilic (Heterophilic) Graphs on Chameleon (48%/32%/20% fixed splits)


no code implementations • 1 May 2023 • Lukas Gosch, Daniel Sturm, Simon Geisler, Stephan Günnemann

Many works show that node-level predictions of Graph Neural Networks (GNNs) are not robust to small, often termed adversarial, changes to the graph structure.

1 code implementation • 30 Apr 2023 • Nicola Franco, Tom Wollschläger, Benedikt Poggel, Stephan Günnemann, Jeanette Miriam Lorenz

We conduct a detailed analysis for the decomposition of MILP with Benders and Dantzig-Wolfe methods.

no code implementations • 6 Apr 2023 • Jonas Gregor Wiese, Lisa Wimmer, Theodore Papamarkou, Bernd Bischl, Stephan Günnemann, David Rügamer

Bayesian inference in deep neural networks is challenging due to the high-dimensional, strongly multi-modal parameter posterior density landscape.

no code implementations • 4 Apr 2023 • Johanna Sommer, Leon Hetzel, David Lüdke, Fabian Theis, Stephan Günnemann

Machine learning for molecules holds great potential for efficiently exploring the vast chemical space and thus streamlining the drug discovery process by facilitating the design of new therapeutic molecules.

1 code implementation • 3 Apr 2023 • Johannes Getzner, Bertrand Charpentier, Stephan Günnemann

Modern machine learning models have started to consume incredible amounts of energy, thus incurring large carbon footprints (Strubell et al., 2019).

1 code implementation • 10 Mar 2023 • Bertrand Charpentier, Chenxiang Zhang, Stephan Günnemann

Accurate and efficient uncertainty estimation is crucial to building reliable Machine Learning (ML) models capable of providing calibrated uncertainty estimates, generalizing, and detecting Out-Of-Distribution (OOD) datasets.

1 code implementation • 8 Mar 2023 • Arthur Kosmala, Johannes Gasteiger, Nicholas Gao, Stephan Günnemann

Neural architectures that learn potential energy surfaces from molecular data have undergone fast improvement in recent years.

1 code implementation • 8 Feb 2023 • Nicholas Gao, Stephan Günnemann

To overcome this limitation, we present Graph-learned orbital embeddings (Globe), a neural network-based reparametrization method that can adapt neural wave functions to different molecules.

no code implementations • 6 Feb 2023 • Jan Schuchardt, Aleksandar Bojchevski, Johannes Gasteiger, Stephan Günnemann

In tasks like node classification, image segmentation, and named-entity recognition we have a classifier that simultaneously outputs multiple predictions (a vector of labels) based on a single input, i.e., a single graph, image, or document respectively.

no code implementations • 31 Jan 2023 • Felix Mujkanovic, Simon Geisler, Stephan Günnemann, Aleksandar Bojchevski

A cursory reading of the literature suggests that we have made a lot of progress in designing effective adversarial defenses for Graph Neural Networks (GNNs).

1 code implementation • 31 Jan 2023 • Simon Geisler, Yujia Li, Daniel Mankowitz, Ali Taylan Cemgil, Stephan Günnemann, Cosmin Paduraru

Transformers were originally proposed as a sequence-to-sequence model for text but have become vital for a wide range of modalities, including images, audio, video, and undirected graphs.

Ranked #1 on Graph Property Prediction on ogbg-code2

1 code implementation • 5 Jan 2023 • Yan Scholten, Jan Schuchardt, Simon Geisler, Aleksandar Bojchevski, Stephan Günnemann

To remedy this, we propose novel gray-box certificates that exploit the message-passing principle of GNNs: We randomly intercept messages and carefully analyze the probability that messages from adversarially controlled nodes reach their target nodes.

no code implementations • 2 Jan 2023 • Morgane Ayle, Jan Schuchardt, Lukas Gosch, Daniel Zügner, Stephan Günnemann

We propose to solve this issue by training graph neural networks on disjoint subgraphs of a given training graph.

no code implementations • 18 Dec 2022 • Johannes Gasteiger, Chendi Qian, Stephan Günnemann

Using graph neural networks for large graphs is challenging since there is no clear way of constructing mini-batches.

no code implementations • 25 Nov 2022 • Jan Schuchardt, Stephan Günnemann

Building models that comply with the invariances inherent to different domains, such as invariance under translation or rotation, is a key aspect of applying machine learning to real world problems like molecular property prediction, medical imaging, protein folding or LiDAR classification.

no code implementations • 4 Nov 2022 • Marin Biloš, Kashif Rasul, Anderson Schneider, Yuriy Nevmyvaka, Stephan Günnemann

Temporal data such as time series can be viewed as discretized measurements of the underlying function.

no code implementations • 28 Oct 2022 • Jan Schuchardt, Tom Wollschläger, Aleksandar Bojchevski, Stephan Günnemann

We further show that this approach is beneficial for the larger class of softly local models, where each output is dependent on the entire input but assigns different levels of importance to different input regions (e.g., based on their proximity in the image).

1 code implementation • 22 Oct 2022 • Marten Lienen, Stephan Günnemann

We introduce an ODE solver for the PyTorch ecosystem that can solve multiple ODEs in parallel independently from each other while achieving significant performance gains.
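Integrating many independent ODEs side by side can be illustrated with a fixed-step sketch. The actual solver supports adaptive step sizes per instance; `batch_euler` here is an illustrative name showing only the batched-integration idea.

```python
def batch_euler(f, y0s, t0, t1, steps):
    """Integrate several independent scalar ODEs dy/dt = f(t, y) side
    by side with a shared fixed-step Euler loop; each ODE keeps its own
    state, and none has to wait for the others between steps."""
    h = (t1 - t0) / steps
    ys = list(y0s)
    for k in range(steps):
        t = t0 + k * h
        ys = [y + h * f(t, y) for y in ys]
    return ys
```

The performance gains of a parallel solver come from running all instances through each step as one batched operation instead of looping over them in Python as above.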

no code implementations • 19 Oct 2022 • Marin Biloš, Emanuel Ramneantu, Stephan Günnemann

Observations made in continuous time are often irregular and contain missing values across different channels.

no code implementations • 15 Oct 2022 • Raffaele Paolino, Aleksandar Bojchevski, Stephan Günnemann, Gitta Kutyniok, Ron Levie

A powerful framework for studying graphs is to consider them as geometric graphs: nodes are randomly sampled from an underlying metric space, and any pair of nodes is connected if their distance is less than a specified neighborhood radius.
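The geometric-graph construction described above is easy to simulate directly; the function below is a generic sketch (uniform sampling in the unit cube, Euclidean metric), with assumed names.

```python
import math
import random

def geometric_graph(n, radius, dim=2, seed=None):
    """Sample n nodes uniformly at random in the unit cube and connect
    every pair of nodes whose Euclidean distance is below the specified
    neighborhood radius."""
    rng = random.Random(seed)
    points = [[rng.random() for _ in range(dim)] for _ in range(n)]
    edges = [(i, j) for i in range(n) for j in range(i + 1, n)
             if math.dist(points[i], points[j]) < radius]
    return points, edges
```

Any radius larger than the cube's diameter yields a complete graph, while small radii produce the sparse local connectivity this model is typically used for.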

1 code implementation • 16 Sep 2022 • Alexandru Mara, Jefrey Lijffijt, Stephan Günnemann, Tijl De Bie

We find that node classification results are impacted more than network reconstruction ones, that degree-based and label-based attacks are on average the most damaging and that label heterophily can strongly influence attack performance.

no code implementations • 15 Sep 2022 • Jörg Christian Kirchhof, Evgeny Kusmenko, Jonas Ritz, Bernhard Rumpe, Armin Moin, Atta Badii, Stephan Günnemann, Moharram Challenger

In this paper, we propose to adopt the MDE paradigm for the development of Machine Learning (ML)-enabled software systems with a focus on the Internet of Things (IoT) domain.

1 code implementation • 17 Jul 2022 • Jonathan Külz, Andreas Spitz, Ahmad Abu-Akel, Stephan Günnemann, Robert West

There is a widespread belief that the tone of US political language has become more negative recently, in particular when Donald Trump entered politics.

no code implementations • 9 Jul 2022 • Morgane Ayle, Bertrand Charpentier, John Rachwan, Daniel Zügner, Simon Geisler, Stephan Günnemann

The robustness and anomaly detection capability of neural networks are crucial topics for their safe adoption in the real-world.

1 code implementation • 21 Jun 2022 • John Rachwan, Daniel Zügner, Bertrand Charpentier, Simon Geisler, Morgane Ayle, Stephan Günnemann

Pruning, the task of sparsifying deep neural networks, received increasing attention recently.

1 code implementation • CVPR 2022 Workshop • Codruţ-Andrei Diaconu, Sudipan Saha, Stephan Günnemann, Xiao Xiang Zhu

Climate change is perhaps the biggest single threat to humankind and the environment, as it severely impacts our terrestrial surface, home to most of the living species.

no code implementations • 3 Jun 2022 • Bertrand Charpentier, Ransalu Senanayake, Mykel Kochenderfer, Stephan Günnemann

Characterizing aleatoric and epistemic uncertainty can be used to speed up learning in a training environment, improve generalization to similar testing environments, and flag unfamiliar behavior in anomalous testing environments.

1 code implementation • 30 May 2022 • Nicholas Gao, Stephan Günnemann

In this work, we address the inference shortcomings by proposing the Potential learning from ab-initio Networks (PlaNet) framework, in which we simultaneously train a surrogate model in addition to the neural wave function.

1 code implementation • 28 Apr 2022 • Leon Hetzel, Simon Böhm, Niki Kilbertus, Stephan Günnemann, Mohammad Lotfollahi, Fabian Theis

Single-cell transcriptomics enabled the study of cellular heterogeneity in response to perturbations at the resolution of individual cells.

no code implementations • 6 Apr 2022 • Johannes Gasteiger, Muhammed Shuaibi, Anuroop Sriram, Stephan Günnemann, Zachary Ulissi, C. Lawrence Zitnick, Abhishek Das

This work investigates this question by first developing the GemNet-OC model based on the large Open Catalyst 2020 (OC20) dataset.

Ranked #1 on Initial Structure to Relaxed Energy (IS2RE) on OC20

no code implementations • 16 Mar 2022 • Poulami Sinhamahapatra, Rajat Koner, Karsten Roscher, Stephan Günnemann

It is essential for safety-critical applications of deep neural networks to determine when new inputs are significantly different from the training distribution.

1 code implementation • ICLR 2022 • Bertrand Charpentier, Simon Kibler, Stephan Günnemann

To this end, DP-DAG samples a DAG by successively (1) sampling a linear ordering of the nodes and (2) sampling edges consistent with the sampled linear ordering.
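The two-step recipe guarantees acyclicity by construction, which a non-differentiable sketch makes plain. DP-DAG itself uses differentiable (Gumbel-based) relaxations of both steps; `sample_dag` and `edge_prob` are assumed names.

```python
import random

def sample_dag(n, edge_prob, seed=None):
    """Sample a DAG in two steps: (1) draw a uniformly random linear
    ordering of the nodes, then (2) keep each ordering-consistent edge
    independently with probability edge_prob. Every edge points from an
    earlier to a later node in the ordering, so no cycle can ever form."""
    rng = random.Random(seed)
    order = list(range(n))
    rng.shuffle(order)
    edges = [(order[i], order[j])
             for i in range(n) for j in range(i + 1, n)
             if rng.random() < edge_prob]
    return order, edges
```

Because acyclicity is enforced structurally rather than checked after the fact, no rejection sampling is needed.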

1 code implementation • ICLR 2022 • Marten Lienen, Stephan Günnemann

We propose a new method for spatio-temporal forecasting on arbitrarily distributed points.


no code implementations • 6 Mar 2022 • Armin Moin, Ukrit Wattanavaekin, Alexandra Lungu, Moharram Challenger, Atta Badii, Stephan Günnemann

Developing smart software services requires both Software Engineering and Artificial Intelligence (AI) skills.

no code implementations • 17 Feb 2022 • Oliver Borchert, David Salinas, Valentin Flunkert, Tim Januschowski, Stephan Günnemann

By learning a mapping from forecasting models to performance metrics, we show that our method PARETOSELECT is able to accurately select models from the Pareto front -- alleviating the need to train or evaluate many forecasting models for model selection.

1 code implementation • 17 Feb 2022 • Tong Zhao, Wei Jin, Yozen Liu, Yingheng Wang, Gang Liu, Stephan Günnemann, Neil Shah, Meng Jiang

Overall, our work aims to clarify the landscape of existing literature in graph data augmentation and motivates additional work in this area, providing a helpful resource for researchers and practitioners in the broader graph machine learning domain.

no code implementations • NeurIPS 2021 • Johannes Gasteiger, Chandan Yeshwanth, Stephan Günnemann

We furthermore set the state of the art on ZINC and coordinate-free QM9 by incorporating synthetic coordinates in the SMP and DimeNet++ models.

2 code implementations • NeurIPS 2021 • Simon Geisler, Tobias Schmidt, Hakan Şirin, Daniel Zügner, Aleksandar Bojchevski, Stephan Günnemann

Graph Neural Networks (GNNs) are increasingly important given their popularity and the diversity of applications.

2 code implementations • NeurIPS 2021 • Maximilian Stadler, Bertrand Charpentier, Simon Geisler, Daniel Zügner, Stephan Günnemann

GPN outperforms existing approaches for uncertainty estimation in the experiments.

1 code implementation • NeurIPS 2021 • Marin Biloš, Johanna Sommer, Syama Sundar Rangapuram, Tim Januschowski, Stephan Günnemann

Neural ordinary differential equations describe how values change in time.

Ranked #3 on Multivariate Time Series Forecasting on MIMIC-III

no code implementations • ICLR 2022 • Simon Geisler, Johanna Sommer, Jan Schuchardt, Aleksandar Bojchevski, Stephan Günnemann

Specifically, most datasets only capture a simpler subproblem and likely suffer from spurious features.

no code implementations • 11 Oct 2021 • Peter Súkeník, Aleksei Kuvshinov, Stephan Günnemann

We show that in general, the input-dependent smoothing suffers from the curse of dimensionality, forcing the variance function to have low semi-elasticity.

1 code implementation • ICLR 2022 • Nicholas Gao, Stephan Günnemann

Solving the Schrödinger equation is key to many quantum mechanical properties.

1 code implementation • NeurIPS Workshop AI4Scien 2021 • Hannes Stärk, Dominique Beaini, Gabriele Corso, Prudencio Tossou, Christian Dallago, Stephan Günnemann, Pietro Liò

Molecular property prediction is one of the fastest-growing applications of deep learning with critical real-world impacts.

no code implementations • ICLR 2022 • Daniel Zügner, Bertrand Charpentier, Morgane Ayle, Sascha Geringer, Stephan Günnemann

We propose a novel probabilistic model over hierarchies on graphs obtained by continuous relaxation of tree-based hierarchies.

no code implementations • 29 Sep 2021 • Hannes Stärk, Dominique Beaini, Gabriele Corso, Prudencio Tossou, Christian Dallago, Stephan Günnemann, Pietro Lio

Molecular property prediction is one of the fastest-growing applications of deep learning with critical real-world impacts.

no code implementations • 29 Sep 2021 • Johannes Klicpera, Chendi Qian, Stephan Günnemann

Training graph neural networks on large graphs is challenging since there is no clear way to extract mini-batches from connected data.

no code implementations • 29 Sep 2021 • Anna-Kathrin Kopetzki, Jana Obernosterer, Aleksandar Bojchevski, Stephan Günnemann

Our experiments show how adversarial training on the source domain affects robustness on source and target domain, and we propose the first provably robust transfer learning models.

no code implementations • 10 Sep 2021 • Daniel Zügner, François-Xavier Aubet, Victor Garcia Satorras, Tim Januschowski, Stephan Günnemann, Jan Gasthaus

We study a recent class of models which uses graph neural networks (GNNs) to improve forecasting in multivariate time series.

no code implementations • 6 Sep 2021 • Sebastian Bischoff, Stephan Günnemann, Martin Jaggi, Sebastian U. Stich

We consider federated learning (FL), where the training data is distributed across a large number of clients.

1 code implementation • 30 Aug 2021 • Johannes C. Paetzold, Julian McGinnis, Suprosanna Shit, Ivan Ezhov, Paul Büschl, Chinmay Prabhakar, Mihail I. Todorov, Anjany Sekuboyina, Georgios Kaissis, Ali Ertürk, Stephan Günnemann, Bjoern H. Menze

Moreover, we benchmark numerous state-of-the-art graph learning algorithms on the biologically relevant tasks of vessel prediction and vessel classification using the introduced vessel graph dataset.

1 code implementation • 19 Jul 2021 • Rajat Koner, Poulami Sinhamahapatra, Karsten Roscher, Stephan Günnemann, Volker Tresp

A serious problem in image classification is that a trained model might perform well for input data that originates from the same distribution as the data available for model training, but performs much worse for out-of-distribution (OOD) samples.

no code implementations • 14 Jul 2021 • Armin Moin, Moharram Challenger, Atta Badii, Stephan Günnemann

Over the past decade, Artificial Intelligence (AI) has provided enormous new possibilities and opportunities, but also new demands and requirements for software systems.

no code implementations • 14 Jul 2021 • Johannes Gasteiger, Marten Lienen, Stephan Günnemann

The current best practice for computing optimal transport (OT) is via entropy regularization and Sinkhorn iterations.
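The entropy-regularized Sinkhorn baseline mentioned here is a short, classical algorithm; a textbook sketch follows (real implementations stabilize it in the log domain for small regularization, which is omitted).

```python
import math

def sinkhorn(cost, a, b, reg, n_iters=200):
    """Entropy-regularized optimal transport via Sinkhorn iterations:
    alternately rescale the rows and columns of the Gibbs kernel
    K = exp(-cost / reg) until the transport plan's marginals match
    the source weights a and target weights b."""
    n, m = len(cost), len(cost[0])
    K = [[math.exp(-c / reg) for c in row] for row in cost]
    u, v = [1.0] * n, [1.0] * m
    for _ in range(n_iters):
        u = [a[i] / sum(K[i][j] * v[j] for j in range(m)) for i in range(n)]
        v = [b[j] / sum(K[i][j] * u[i] for i in range(n)) for j in range(m)]
    return [[u[i] * K[i][j] * v[j] for j in range(m)] for i in range(n)]
```

Each iteration only rescales rows and columns, which is what makes the method fast but also ties its accuracy to the regularization strength.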

1 code implementation • 13 Jul 2021 • Rajat Koner, Hang Li, Marcel Hildebrandt, Deepan Das, Volker Tresp, Stephan Günnemann

We conduct an experimental study on the challenging dataset GQA, based on both manually curated and automatically generated scene graphs.

no code implementations • 6 Jul 2021 • Armin Moin, Moharram Challenger, Atta Badii, Stephan Günnemann

We focus on a sub-discipline of AI, namely Machine Learning (ML) and propose the delegation of data analytics and ML to the IoT edge.

1 code implementation • 6 Jul 2021 • Armin Moin, Moharram Challenger, Atta Badii, Stephan Günnemann

In particular, we implement the proposed approach, called ML-Quadrat, based on ThingML, and validate it using a case study from the IoT domain, as well as through an empirical user evaluation.

1 code implementation • 6 Jul 2021 • Armin Moin, Andrei Mituca, Moharram Challenger, Atta Badii, Stephan Günnemann

In this paper, we present ML-Quadrat, an open-source research prototype that is based on the Eclipse Modeling Framework (EMF) and the state of the art in the literature of Model-Driven Software Engineering (MDSE) for smart Cyber-Physical Systems (CPS) and the Internet of Things (IoT).

1 code implementation • 3 Jul 2021 • Sven Elflein, Bertrand Charpentier, Daniel Zügner, Stephan Günnemann

Several density estimation methods have been shown to fail to detect out-of-distribution (OOD) samples by assigning higher likelihoods to anomalous data.

no code implementations • NeurIPS 2021 • Oleksandr Shchur, Ali Caner Türkmen, Tim Januschowski, Jan Gasthaus, Stephan Günnemann

Automatically detecting anomalies in event data can provide substantial value in domains such as healthcare, DevOps, and information security.

4 code implementations • NeurIPS 2021 • Johannes Gasteiger, Florian Becker, Stephan Günnemann

Effectively predicting molecular interactions has the potential to accelerate molecular dynamics by multiple orders of magnitude and thus revolutionize chemical simulations.

1 code implementation • ICLR 2022 • Bertrand Charpentier, Oliver Borchert, Daniel Zügner, Simon Geisler, Stephan Günnemann

Uncertainty awareness is crucial to develop reliable machine learning models.

no code implementations • 8 Apr 2021 • Oleksandr Shchur, Ali Caner Türkmen, Tim Januschowski, Stephan Günnemann

Temporal point processes (TPP) are probabilistic generative models for continuous-time event sequences.

1 code implementation • ICLR 2021 • Daniel Zügner, Tobias Kirschstein, Michele Catasta, Jure Leskovec, Stephan Günnemann

Source code (Context) and its parsed abstract syntax tree (AST; Structure) are two complementary representations of the same computer program.

no code implementations • ICLR 2021 • Jan Schuchardt, Aleksandar Bojchevski, Johannes Klicpera, Stephan Günnemann

In tasks like node classification, image segmentation, and named-entity recognition, we have a classifier that simultaneously outputs multiple predictions (a vector of labels) based on a single input, i.e., a single graph, image, or document, respectively.

no code implementations • 1 Jan 2021 • Johannes Klicpera, Marten Lienen, Stephan Günnemann

Optimal transport (OT) is a cornerstone of many machine learning tasks.

no code implementations • NeurIPS 2020 • Richard Kurle, Syama Sundar Rangapuram, Emmanuel de Bézenac, Stephan Günnemann, Jan Gasthaus

We propose a Monte Carlo objective that leverages the conditional linearity by computing the corresponding conditional expectations in closed-form and a suitable proposal distribution that is factorised similarly to the optimal proposal distribution.

6 code implementations • 28 Nov 2020 • Johannes Gasteiger, Shankari Giri, Johannes T. Margraf, Stephan Günnemann

Many important tasks in chemistry revolve around molecules during reactions.

Ranked #5 on Drug Discovery on QM9

1 code implementation • NeurIPS 2020 • Simon Geisler, Daniel Zügner, Stephan Günnemann

Perturbations targeting the graph structure have proven to be extremely effective in reducing the performance of Graph Neural Networks (GNNs), and traditional defenses such as adversarial training do not seem to be able to improve robustness.

1 code implementation • 28 Oct 2020 • Anna-Kathrin Kopetzki, Bertrand Charpentier, Daniel Zügner, Sandhya Giri, Stephan Günnemann

Dirichlet-based uncertainty (DBU) models are a recent and promising class of uncertainty-aware models.

no code implementations • 7 Oct 2020 • Marin Biloš, Stephan Günnemann

Modeling sets is an important problem in machine learning since this type of data can be found in many domains.

no code implementations • 28 Sep 2020 • Marin Biloš, Stephan Günnemann

To model this behavior, it is enough to transform the samples from the uniform process with a sufficiently complex equivariant function.

1 code implementation • 22 Sep 2020 • Armin Moin, Stephan Rössler, Marouane Sayih, Stephan Günnemann

In this paper, we illustrate how to enhance an existing state-of-the-art modeling language and tool for the Internet of Things (IoT), called ThingML, to support machine learning on the modeling level.

1 code implementation • 22 Sep 2020 • Armin Moin, Stephan Rössler, Stephan Günnemann

In this paper, we present the current position of the research project ML-Quadrat, which aims to extend the methodology, modeling language, and tool support of ThingML - an open-source modeling tool for IoT/CPS - to address machine learning needs of IoT applications.

1 code implementation • ICML 2020 • Aleksandar Bojchevski, Johannes Gasteiger, Stephan Günnemann

Existing techniques for certifying the robustness of models for discrete data either work only for a small class of models or are general at the expense of efficiency or tightness.

no code implementations • 28 Jul 2020 • Anna-Kathrin Kopetzki, Stephan Günnemann

This principle is highly versatile, as we show.

no code implementations • 15 Jul 2020 • Nick Harmening, Marin Biloš, Stephan Günnemann

Determining the traffic scenario space is a major challenge for the homologation and coverage assessment of automated driving functions.

2 code implementations • 3 Jul 2020 • Aleksandar Bojchevski, Johannes Gasteiger, Bryan Perozzi, Amol Kapoor, Martin Blais, Benedek Rózemberczki, Michal Lukasik, Stephan Günnemann

Graph neural networks (GNNs) have emerged as a powerful approach for solving many network mining tasks.

no code implementations • 2 Jul 2020 • Marcel Hildebrandt, Hang Li, Rajat Koner, Volker Tresp, Stephan Günnemann

We propose a novel method that approaches the task by performing context-driven, sequential reasoning based on the objects and their semantic and spatial relationships present in the scene.

1 code implementation • NeurIPS 2020 • Oleksandr Shchur, Nicholas Gao, Marin Biloš, Stephan Günnemann

Temporal point process (TPP) models combined with recurrent neural networks provide a powerful framework for modeling continuous-time event data.

1 code implementation • NeurIPS 2020 • Bertrand Charpentier, Daniel Zügner, Stephan Günnemann

The posterior distributions learned by PostNet accurately reflect uncertainty for in- and out-of-distribution data -- without requiring access to OOD data at training time.
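
As a generic, hedged illustration of Dirichlet-based uncertainty (not PostNet's normalizing-flow construction; all names are illustrative): a Dirichlet over the class simplex separates the predicted class (the mean) from confidence (the total concentration), so flat, low-mass parameters signal OOD-like inputs.

```python
import numpy as np

def dirichlet_summary(alpha):
    """Return the mean class probabilities and the total concentration
    alpha_0 of a Dirichlet; small alpha_0 means high epistemic uncertainty."""
    alpha = np.asarray(alpha, dtype=float)
    mean = alpha / alpha.sum()
    return mean, alpha.sum()

mean_id, conf_id = dirichlet_summary([50.0, 2.0, 2.0])   # in-distribution-like: peaked, high mass
mean_ood, conf_ood = dirichlet_summary([1.1, 1.0, 1.0])  # OOD-like: nearly flat, low mass
print(conf_id, conf_ood)
```

Both examples predict some class via the mean, but only the first does so with substantial evidence.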

no code implementations • ICLR 2020 • Richard Kurle, Botond Cseke, Alexej Klushyn, Patrick van der Smagt, Stephan Günnemann

We represent the posterior approximation of the network weights by a diagonal Gaussian distribution and a complementary memory of raw data.

1 code implementation • AKBC 2020 • Zhen Han, Yunpu Ma, Yuyi Wang, Stephan Günnemann, Volker Tresp

The Hawkes process has become a standard method for modeling self-exciting event sequences with different event types.
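
A univariate Hawkes intensity with an exponential kernel can be sketched in a few lines (a generic illustration of self-excitation, not the multi-type model studied in the paper; parameter values are illustrative):

```python
import math

def hawkes_intensity(t, history, mu=0.5, alpha=0.8, beta=1.0):
    """lambda(t) = mu + sum over past events of alpha * exp(-beta * (t - t_i)).

    Each past event temporarily raises the intensity (self-excitation),
    and its effect decays exponentially at rate beta.
    """
    return mu + sum(alpha * math.exp(-beta * (t - t_i))
                    for t_i in history if t_i < t)

history = [1.0, 2.0, 2.5]
print(hawkes_intensity(2.6, history))   # elevated right after a burst of events
print(hawkes_intensity(10.0, history))  # decayed back toward the baseline mu
```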

4 code implementations • ICLR 2020 • Johannes Gasteiger, Janek Groß, Stephan Günnemann

Each message is associated with a direction in coordinate space.

Ranked #7 on Drug Discovery on QM9

1 code implementation • 22 Nov 2019 • Alexander Ziller, Julius Hansjakob, Vitalii Rusinov, Daniel Zügner, Peter Vogel, Stephan Günnemann

We release a realistic, diverse, and challenging dataset for object detection on images.

1 code implementation • NeurIPS 2019 • Marin Biloš, Bertrand Charpentier, Stephan Günnemann

Asynchronous event sequences are the basis of many applications throughout different industries.

1 code implementation • NeurIPS 2019 • Aleksandar Bojchevski, Stephan Günnemann

Despite the exploding interest in graph neural networks, there has been little effort to verify and improve their robustness.

3 code implementations • NeurIPS 2019 • Johannes Gasteiger, Stefan Weißenberger, Stephan Günnemann

In this work, we remove the restriction of using only the direct neighbors by introducing a powerful, yet spatially localized graph convolution: Graph diffusion convolution (GDC).

Ranked #3 on Node Classification on AMZ Comp
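
The diffusion idea can be sketched with personalized PageRank (PPR) coefficients, one member of the diffusion family used by GDC (a minimal dense-matrix sketch; graph, teleport probability, and sparsification threshold are illustrative):

```python
import numpy as np

# Chain graph on 4 nodes.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
A_hat = A + np.eye(4)                           # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(1)))
T = D_inv_sqrt @ A_hat @ D_inv_sqrt             # symmetric transition matrix

alpha = 0.15
S = alpha * np.linalg.inv(np.eye(4) - (1 - alpha) * T)  # closed-form PPR diffusion

S[S < 0.05] = 0.0                               # sparsify to keep the convolution local
print(S.round(3))
```

The sparsified diffusion matrix S then replaces the one-hop adjacency in message passing, so each layer aggregates over a weighted multi-hop neighborhood instead of direct neighbors only.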

3 code implementations • ICLR 2020 • Oleksandr Shchur, Marin Biloš, Stephan Günnemann

The standard way of learning in such models is by estimating the conditional intensity function.
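
For context, the simplest instance of intensity-based learning (the standard approach the paper moves away from) is the homogeneous Poisson process, whose log-likelihood on [0, T] is available in closed form:

```python
import math

def poisson_tpp_loglik(event_times, T, lam):
    """TPP log-likelihood with constant intensity lam on [0, T]:
    sum_i log(lam) - integral of lam over [0, T]."""
    return len(event_times) * math.log(lam) - lam * T

events = [0.5, 1.2, 3.0]
# For a constant intensity, the MLE is the event count divided by T.
mle = len(events) / 4.0
print(poisson_tpp_loglik(events, 4.0, mle))
```

Richer TPPs make the intensity depend on history, which generally makes the integral term intractable; modeling the inter-event time distribution directly sidesteps this.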

1 code implementation • ICLR 2019 • Oleksandr Shchur, Stephan Günnemann

Community detection is a fundamental problem in machine learning.

1 code implementation • 28 Jun 2019 • Daniel Zügner, Stephan Günnemann

Recent works show that Graph Neural Networks (GNNs) are highly non-robust with respect to adversarial attacks on both the graph structure and the node attributes, making their outcomes unreliable.

Ranked #21 on Node Classification on Pubmed

no code implementations • ICLR 2019 • Aleksandar Bojchevski, Stephan Günnemann

The goal of network representation learning is to learn low-dimensional node embeddings that capture the graph structure and are useful for solving downstream tasks.

1 code implementation • ICLR 2019 • Daniel Zügner, Stephan Günnemann

Deep learning models for graphs have advanced the state of the art on many tasks.

2 code implementations • 14 Nov 2018 • Oleksandr Shchur, Maximilian Mumme, Aleksandar Bojchevski, Stephan Günnemann

We perform a thorough empirical evaluation of four prominent GNN models and show that considering different splits of the data leads to dramatically different rankings of models.

no code implementations • 11 Nov 2018 • Richard Kurle, Stephan Günnemann, Patrick van der Smagt

Learning from multiple sources of information is an important problem in machine-learning research.

2 code implementations • NeurIPS 2019 • Stephan Rabanser, Stephan Günnemann, Zachary C. Lipton

We might hope that when faced with unexpected inputs, well-designed software systems would fire off warnings.

5 code implementations • ICLR 2019 • Johannes Gasteiger, Aleksandar Bojchevski, Stephan Günnemann

We utilize this propagation procedure to construct a simple model, personalized propagation of neural predictions (PPNP), and its fast approximation, APPNP.

Ranked #1 on Node Classification on MS ACADEMIC

General Classification • Node Classification on Non-Homophilic (Heterophilic) Graphs
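
The APPNP propagation step can be sketched as a short power iteration (a generic sketch using row normalization for brevity; the paper uses a symmetrically normalized adjacency with self-loops):

```python
import numpy as np

def appnp_propagate(A_hat, H, alpha=0.1, K=10):
    """Iterate Z <- (1 - alpha) * A_hat @ Z + alpha * H, keeping the base
    model's predictions H as a personalized teleport target."""
    Z = H.copy()
    for _ in range(K):
        Z = (1 - alpha) * A_hat @ Z + alpha * H
    return Z

A = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
A_hat = (A + np.eye(3))
A_hat = A_hat / A_hat.sum(1, keepdims=True)     # row-stochastic for simplicity
H = np.array([[1.0, 0.0], [0.5, 0.5], [0.0, 1.0]])  # per-node class predictions
print(appnp_propagate(A_hat, H).round(3))
```

Because propagation is decoupled from prediction, the iteration adds no trainable parameters; with alpha = 1 it degenerates to the base predictions H.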

no code implementations • 3 Oct 2018 • Roberto Alonso, Stephan Günnemann

Mining dense quasi-cliques is a well-known clustering task with applications ranging from social networks over collaboration graphs to document analysis.

no code implementations • 3 Jun 2018 • Federico Monti, Oleksandr Shchur, Aleksandar Bojchevski, Or Litany, Stephan Günnemann, Michael M. Bronstein

In recent years, there has been a surge of interest in developing deep learning methods for non-Euclidean structured data such as graphs.

1 code implementation • 21 May 2018 • Daniel Zügner, Amir Akbarnejad, Stephan Günnemann

Even more, our attacks are transferable: the learned attacks generalize to other state-of-the-art node classification models and unsupervised approaches, and likewise are successful even when only limited knowledge about the graph is given.

2 code implementations • ICML 2018 • Aleksandar Bojchevski, Oleksandr Shchur, Daniel Zügner, Stephan Günnemann

NetGAN is able to produce graphs that exhibit well-known network patterns without explicitly specifying them in the model definition.

no code implementations • ICLR 2018 • Aleksandar Bojchevski, Oleksandr Shchur, Daniel Zügner, Stephan Günnemann

Moreover, GraphGAN learns a semantic mapping from the latent input space to the generated graph's properties.

1 code implementation • 29 Nov 2017 • Stephan Rabanser, Oleksandr Shchur, Stephan Günnemann

Tensors are multidimensional arrays of numerical values and therefore generalize matrices to multiple dimensions.
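
A basic operation behind tensor decompositions such as CP and Tucker is the mode-n unfolding, which flattens a tensor into a matrix (a minimal sketch; the unfolding convention varies across references):

```python
import numpy as np

def unfold(tensor, mode):
    """Mode-`mode` unfolding: move the chosen axis to the front and
    flatten the rest, so mode-`mode` fibers become matrix columns."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

X = np.arange(24).reshape(2, 3, 4)   # a 2 x 3 x 4 tensor
print(unfold(X, 0).shape)            # (2, 12)
print(unfold(X, 1).shape)            # (3, 8)
print(unfold(X, 2).shape)            # (4, 6)
```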

1 code implementation • ICLR 2018 • Aleksandar Bojchevski, Stephan Günnemann

We propose Graph2Gauss - an approach that can efficiently learn versatile node embeddings on large scale (attributed) graphs that show strong performance on tasks such as link prediction and node classification.

1 code implementation • 27 Jun 2014 • Wolfgang Gatterbauer, Stephan Günnemann, Danai Koutra, Christos Faloutsos

Often, we can answer such questions and label nodes in a network based on the labels of their neighbors and appropriate assumptions of homophily ("birds of a feather flock together") or heterophily ("opposites attract").
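
A toy sketch (illustrative, not the paper's exact linearized belief propagation): routing neighbor influence through an explicit class compatibility matrix lets the same propagation rule express both homophily and heterophily:

```python
import numpy as np

A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)            # star graph, node 0 in the center
H_homo = np.array([[0.9, 0.1], [0.1, 0.9]])       # like connects to like
H_hetero = np.array([[0.1, 0.9], [0.9, 0.1]])     # opposites attract
prior = np.array([[1.0, 0.0],                     # only node 0 is labeled (class 0)
                  [0.5, 0.5],
                  [0.5, 0.5]])

def propagate(A, H, prior, steps=5, eps=0.5):
    """Repeatedly mix each node's prior with its neighbors' beliefs,
    transformed by the class compatibility matrix H."""
    beliefs = prior
    for _ in range(steps):
        beliefs = prior + eps * A @ beliefs @ H
        beliefs = beliefs / beliefs.sum(1, keepdims=True)
    return beliefs

print(propagate(A, H_homo, prior).round(2))    # leaves follow node 0's class
print(propagate(A, H_hetero, prior).round(2))  # leaves take the opposite class
```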
