Search Results for author: Johannes Brandstetter

Found 30 papers, 21 papers with code

GNN-VPA: A Variance-Preserving Aggregation Strategy for Graph Neural Networks

1 code implementation • 7 Mar 2024 • Lisa Schneckenreiter, Richard Freinschlag, Florian Sestak, Johannes Brandstetter, Günter Klambauer, Andreas Mayr

Graph neural networks (GNNs), and especially message-passing neural networks, excel in various domains such as physics, drug discovery, and molecular modeling.

Drug Discovery
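The title's core idea, preserving signal variance during message aggregation, can be sketched in a few lines. This is an illustrative reading of the paper's premise, not its exact formulation; `vpa_aggregate` is a hypothetical helper name:

```python
import numpy as np

def vpa_aggregate(messages):
    """Aggregate n neighbor messages by sum / sqrt(n).

    Plain sum aggregation scales the variance of i.i.d. unit-variance
    messages by n, while mean aggregation scales it by 1/n; dividing the
    sum by sqrt(n) keeps the aggregated variance constant.
    (Sketch of variance-preserving aggregation, not the authors' code.)
    """
    messages = np.asarray(messages)
    return messages.sum(axis=0) / np.sqrt(messages.shape[0])
```

For 64 unit-variance messages, a plain sum yields variance near 64 and a mean near 1/64, while the scaled sum stays near 1.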

JAX-SPH: A Differentiable Smoothed Particle Hydrodynamics Framework

1 code implementation • 7 Mar 2024 • Artur P. Toshev, Harish Ramachandran, Jonas A. Erbesdobler, Gianluca Galletti, Johannes Brandstetter, Nikolaus A. Adams

Particle-based fluid simulations have emerged as a powerful tool for solving the Navier-Stokes equations, especially in cases that include intricate physics and free surfaces.

Clifford-Steerable Convolutional Neural Networks

1 code implementation • 22 Feb 2024 • Maksim Zhdanov, David Ruhe, Maurice Weiler, Ana Lucic, Johannes Brandstetter, Patrick Forré

We present Clifford-Steerable Convolutional Neural Networks (CS-CNNs), a novel class of E(p, q)-equivariant CNNs.

Geometry-Informed Neural Networks

no code implementations • 21 Feb 2024 • Arturs Berzins, Andreas Radler, Sebastian Sanokowski, Sepp Hochreiter, Johannes Brandstetter

We introduce the concept of geometry-informed neural networks (GINNs), which encompass (i) learning under geometric constraints, (ii) neural fields as a suitable representation, and (iii) generating diverse solutions to under-determined systems often encountered in geometric tasks.

Universal Physics Transformers

no code implementations • 19 Feb 2024 • Benedikt Alkin, Andreas Fürst, Simon Schmid, Lukas Gruber, Markus Holzleitner, Johannes Brandstetter

Deep neural network based surrogates for partial differential equations have recently gained increased interest.

MIM-Refiner: A Contrastive Learning Boost from Intermediate Pre-Trained Representations

1 code implementation • 15 Feb 2024 • Benedikt Alkin, Lukas Miklautz, Sepp Hochreiter, Johannes Brandstetter

The motivation behind MIM-Refiner is rooted in the insight that optimal representations within MIM models generally reside in intermediate layers.

Contrastive Learning • Image Clustering • +1
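The quoted insight, that a masked image model's strongest representations sit in intermediate blocks, can be probed by scoring frozen features from each layer with a trivial classifier. A toy sketch on synthetic features (`probe_accuracy` and `best_intermediate_layer` are illustrative names, not MIM-Refiner's API, which instead attaches contrastive objectives to those layers):

```python
import numpy as np

def probe_accuracy(feats, labels):
    """Accuracy of a nearest-class-centroid classifier on frozen features."""
    classes = np.unique(labels)
    centroids = np.stack([feats[labels == c].mean(axis=0) for c in classes])
    # Squared Euclidean distance of every sample to every class centroid.
    dists = ((feats[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    preds = classes[np.argmin(dists, axis=1)]
    return float((preds == labels).mean())

def best_intermediate_layer(layer_features, labels):
    """Index of the layer whose frozen features score highest under the probe."""
    return int(np.argmax([probe_accuracy(f, labels) for f in layer_features]))
```

On features where only the middle layer is class-separable, the probe picks that layer, mirroring the paper's observation at toy scale.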

Neural SPH: Improved Neural Modeling of Lagrangian Fluid Dynamics

2 code implementations • 9 Feb 2024 • Artur P. Toshev, Jonas A. Erbesdobler, Nikolaus A. Adams, Johannes Brandstetter

Smoothed particle hydrodynamics (SPH) is omnipresent in modern engineering and scientific disciplines.

Lie Point Symmetry and Physics Informed Networks

no code implementations • 7 Nov 2023 • Tara Akhound-Sadegh, Laurence Perreault-Levasseur, Johannes Brandstetter, Max Welling, Siamak Ravanbakhsh

Symmetries have been leveraged to improve the generalization of neural networks through different mechanisms from data augmentation to equivariant architectures.

Data Augmentation • Inductive Bias

Learning Lagrangian Fluid Mechanics with E(3)-Equivariant Graph Neural Networks

2 code implementations • 24 May 2023 • Artur P. Toshev, Gianluca Galletti, Johannes Brandstetter, Stefan Adami, Nikolaus A. Adams

We contribute to the rapidly growing field of machine learning for engineering systems by demonstrating that equivariant graph neural networks have the potential to learn more accurate dynamic-interaction models than their non-equivariant counterparts.

E(3) Equivariant Graph Neural Networks for Particle-Based Fluid Mechanics

no code implementations • 31 Mar 2023 • Artur P. Toshev, Gianluca Galletti, Johannes Brandstetter, Stefan Adami, Nikolaus A. Adams

We contribute to the rapidly growing field of machine learning for engineering systems by demonstrating that equivariant graph neural networks have the potential to learn more accurate dynamic-interaction models than their non-equivariant counterparts.

G-Signatures: Global Graph Propagation With Randomized Signatures

no code implementations • 17 Feb 2023 • Bernhard Schäfl, Lukas Gruber, Johannes Brandstetter, Sepp Hochreiter

Graph neural networks (GNNs) have evolved into one of the most popular deep learning architectures.

Graph Learning

Geometric Clifford Algebra Networks

1 code implementation • 13 Feb 2023 • David Ruhe, Jayesh K. Gupta, Steven de Keninck, Max Welling, Johannes Brandstetter

GCANs are based on symmetry group transformations using geometric (Clifford) algebras.

ClimaX: A foundation model for weather and climate

1 code implementation • 24 Jan 2023 • Tung Nguyen, Johannes Brandstetter, Ashish Kapoor, Jayesh K. Gupta, Aditya Grover

We develop and demonstrate ClimaX, a flexible and generalizable deep learning model for weather and climate science that can be trained using heterogeneous datasets spanning different variables, spatio-temporal coverage, and physical groundings.

Self-Supervised Learning • Weather Forecasting

Towards Multi-spatiotemporal-scale Generalized PDE Modeling

1 code implementation • 30 Sep 2022 • Jayesh K. Gupta, Johannes Brandstetter

Finally, we show promising results on generalization to different PDE parameters and time-scales with a single surrogate model.

PDE Surrogate Modeling

Clifford Neural Layers for PDE Modeling

1 code implementation • 8 Sep 2022 • Johannes Brandstetter, Rianne van den Berg, Max Welling, Jayesh K. Gupta

We empirically evaluate the benefit of Clifford neural layers by replacing convolution and Fourier operations in common neural PDE surrogates by their Clifford counterparts on 2D Navier-Stokes and weather modeling tasks, as well as 3D Maxwell equations.

Weather Forecasting
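The substitution the abstract describes, replacing scalar operations inside convolutions with operations on multivectors, rests on the geometric (Clifford) product. A minimal sketch for the 2D algebra Cl(2, 0) with basis (1, e1, e2, e12) and e1² = e2² = +1 (illustrative only; the paper's layers implement full Clifford convolution and Fourier operations):

```python
import numpy as np

def geometric_product_2d(a, b):
    """Geometric product of two multivectors in Cl(2, 0).

    a, b: length-4 coefficient arrays over the basis (scalar, e1, e2, e12),
    with e1*e1 = e2*e2 = +1 and e12 = e1*e2 (hence e12*e12 = -1).
    """
    a0, a1, a2, a3 = a
    b0, b1, b2, b3 = b
    return np.array([
        a0*b0 + a1*b1 + a2*b2 - a3*b3,   # scalar part
        a0*b1 + a1*b0 - a2*b3 + a3*b2,   # e1 part
        a0*b2 + a2*b0 + a1*b3 - a3*b1,   # e2 part
        a0*b3 + a3*b0 + a1*b2 - a2*b1,   # e12 (bivector) part
    ])
```

Multiplying two vectors yields their dot product in the scalar slot and their wedge product in the bivector slot, which is what lets such layers mix scalar and vector feature channels in one operation.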

Few-Shot Learning by Dimensionality Reduction in Gradient Space

1 code implementation • 7 Jun 2022 • Martin Gauch, Maximilian Beck, Thomas Adler, Dmytro Kotsur, Stefan Fiel, Hamid Eghbal-zadeh, Johannes Brandstetter, Johannes Kofler, Markus Holzleitner, Werner Zellinger, Daniel Klotz, Sepp Hochreiter, Sebastian Lehner

We introduce SubGD, a novel few-shot learning method which is based on the recent finding that stochastic gradient descent updates tend to live in a low-dimensional parameter subspace.

Dimensionality Reduction • Few-Shot Learning
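The low-dimensional-subspace finding can be sketched as: collect flattened parameter-update vectors from related tasks, take their principal directions, and restrict fine-tuning gradients to that span. A minimal sketch of the idea (function names are illustrative, not SubGD's API, and the paper's exact procedure may differ):

```python
import numpy as np

def update_subspace(updates, k):
    """Orthonormal basis of the top-k principal directions of past updates.

    updates: (n_updates, n_params) matrix of flattened SGD update vectors
    gathered during pre-training. Returns an (n_params, k) basis.
    """
    # Right singular vectors give the principal directions of the updates.
    _, _, vt = np.linalg.svd(updates, full_matrices=False)
    return vt[:k].T

def project_gradient(grad, basis):
    """Restrict a gradient to the learned subspace before the update step."""
    return basis @ (basis.T @ grad)
```

Fine-tuning then only moves parameters along directions that mattered during pre-training, which is what makes learning from very few samples tractable.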

Lie Point Symmetry Data Augmentation for Neural PDE Solvers

1 code implementation • 15 Feb 2022 • Johannes Brandstetter, Max Welling, Daniel E. Worrall

In this paper, we present a method that can partially alleviate this problem by improving neural PDE solver sample complexity: Lie point symmetry data augmentation (LPSDA).

Data Augmentation
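For PDEs with periodic boundary conditions, the simplest Lie point symmetry is space translation: if u(t, x) solves the equation, so does u(t, x − a). A minimal augmentation sketch in that spirit (illustrative only; LPSDA derives and applies the full symmetry group of each equation, including scalings and Galilean boosts):

```python
import numpy as np

def translate_augment(u, rng):
    """Randomly shift a discretized PDE solution along its periodic axis.

    u: array of shape (n_timesteps, n_gridpoints) on a uniform periodic
    grid. The shifted field is an equally valid solution of any
    translation-invariant PDE, so the augmentation is label-preserving.
    """
    shift = int(rng.integers(u.shape[1]))
    return np.roll(u, shift, axis=1)
```

Each augmented trajectory is a genuinely new training sample at zero solver cost, which is the sample-complexity gain the abstract refers to.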

Message Passing Neural PDE Solvers

1 code implementation • ICLR 2022 • Johannes Brandstetter, Daniel Worrall, Max Welling

The numerical solution of partial differential equations (PDEs) is difficult, having led to a century of research so far.

Domain Adaptation

Geometric and Physical Quantities Improve E(3) Equivariant Message Passing

2 code implementations • ICLR 2022 • Johannes Brandstetter, Rob Hesselink, Elise van der Pol, Erik J Bekkers, Max Welling

Including covariant information, such as position, force, velocity or spin is important in many tasks in computational physics and chemistry.

Boundary Graph Neural Networks for 3D Simulations

1 code implementation • 21 Jun 2021 • Andreas Mayr, Sebastian Lehner, Arno Mayrhofer, Christoph Kloss, Sepp Hochreiter, Johannes Brandstetter

However, it is notoriously difficult to integrate them into machine learning approaches due to their heterogeneity with respect to size and orientation.

Computational Efficiency

Learning 3D Granular Flow Simulations

1 code implementation • 4 May 2021 • Andreas Mayr, Sebastian Lehner, Arno Mayrhofer, Christoph Kloss, Sepp Hochreiter, Johannes Brandstetter

Recently, the application of machine learning models has gained momentum in natural sciences and engineering, which is a natural fit due to the abundance of data in these fields.

BIG-bench Machine Learning

Convergence Proof for Actor-Critic Methods Applied to PPO and RUDDER

no code implementations • 2 Dec 2020 • Markus Holzleitner, Lukas Gruber, José Arjona-Medina, Johannes Brandstetter, Sepp Hochreiter

We prove under commonly used assumptions the convergence of actor-critic reinforcement learning algorithms, which simultaneously learn a policy function, the actor, and a value function, the critic.

Reinforcement Learning (RL)

Cross-Domain Few-Shot Learning by Representation Fusion

2 code implementations • 13 Oct 2020 • Thomas Adler, Johannes Brandstetter, Michael Widrich, Andreas Mayr, David Kreil, Michael Kopp, Günter Klambauer, Sepp Hochreiter

On the few-shot datasets miniImagenet and tieredImagenet with small domain shifts, CHEF is competitive with state-of-the-art methods.

cross-domain few-shot learning • Drug Discovery

A GAN based solver of black-box inverse problems

no code implementations • NeurIPS Workshop Deep_Invers 2019 • Michael Gillhofer, Hubert Ramsauer, Johannes Brandstetter, Bernhard Schäfl, Sepp Hochreiter

We propose a GAN-based approach to solve inverse problems which have non-differentiable or non-continuous forward relations.
