1 code implementation • 23 May 2024 • Jonas Spinner, Victor Bresó, Pim de Haan, Tilman Plehn, Jesse Thaler, Johann Brehmer

We propose the Lorentz Geometric Algebra Transformer (L-GATr), a new multi-purpose architecture for high-energy physics.

no code implementations • 6 Dec 2023 • Ekdeep Singh Lubana, Johann Brehmer, Pim de Haan, Taco Cohen

We explore the viability of casting foundation models as generic reward functions for reinforcement learning.

1 code implementation • 8 Nov 2023 • Pim de Haan, Taco Cohen, Johann Brehmer

The Geometric Algebra Transformer (GATr) is a versatile architecture for geometric deep learning based on projective geometric algebra.
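As an illustration of the projective-geometric-algebra representation that GATr builds on, the sketch below embeds a 3D point into a 16-dimensional multivector and reads it back. The slot ordering, the helper names, and the homogeneous-coordinate convention are assumptions made for this example only; the paper defines the actual basis layout.

```python
import numpy as np

# G(3,0,1) has 16 basis blades: 1 scalar, 4 vectors, 6 bivectors,
# 4 trivectors, 1 pseudoscalar.  The slot assignment below is a
# hypothetical ordering chosen purely for illustration.
SCALAR = 0
TRIVECTOR = slice(11, 15)  # assumed slots for the 4 trivector components

def embed_point(p):
    """Embed a 3D point into a 16-dim multivector.

    In projective geometric algebra a point lives in the trivector
    grade with a homogeneous coordinate fixed to 1 (the exact basis
    ordering here is an assumption)."""
    mv = np.zeros(16)
    mv[TRIVECTOR] = np.array([p[0], p[1], p[2], 1.0])
    return mv

def extract_point(mv):
    """Invert embed_point, dividing out the homogeneous coordinate."""
    t = mv[TRIVECTOR]
    return t[:3] / t[3]
```

Because all 16 components transform consistently under Euclidean symmetries, a network operating channelwise on such multivectors can be made equivariant by construction.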

1 code implementation • NeurIPS 2023 • Johann Brehmer, Pim de Haan, Sönke Behrends, Taco Cohen

In this paper we introduce the Geometric Algebra Transformer (GATr), a general-purpose architecture for geometric data.

1 code implementation • 26 Jan 2023 • Jonas Köhler, Michele Invernizzi, Pim de Haan, Frank Noé

Normalizing flows (NFs) are a class of powerful generative models that have gained popularity in recent years thanks to their flexibility and expressiveness in modeling complex distributions.
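The mechanism underlying every normalizing flow is the change-of-variables formula: the log-density of a sample is the base log-density of its image under the flow plus the log-determinant of the flow's Jacobian. The sketch below illustrates this with a minimal diagonal affine flow; the function names and the affine transform are illustrative assumptions, not the method of the paper above.

```python
import numpy as np

def standard_normal_logpdf(z):
    """Log-density of a standard normal base distribution, summed over dims."""
    return -0.5 * np.sum(z**2 + np.log(2 * np.pi), axis=-1)

def affine_flow_logpdf(x, scale, shift):
    """Log-density of x under the flow z = f(x) = (x - shift) / scale,
    via the change-of-variables formula:
        log p_x(x) = log p_z(f(x)) + log |det df/dx|.
    """
    z = (x - shift) / scale
    log_det = -np.sum(np.log(np.abs(scale)))  # Jacobian of f is diag(1/scale)
    return standard_normal_logpdf(z) + log_det
```

With this particular flow the model density is exactly the Gaussian N(shift, scale^2), which makes the formula easy to verify by hand.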

1 code implementation • 9 Dec 2022 • Julian Suk, Pim de Haan, Phillip Lippe, Christoph Brune, Jelmer M. Wolterink

Computational fluid dynamics (CFD) is a valuable asset for patient-specific cardiovascular-disease diagnosis and prognosis, but its high computational demands hamper its adoption in practice.

no code implementations • 4 Nov 2022 • Risto Vuorio, Johann Brehmer, Hanno Ackermann, Daniel Dijkman, Taco Cohen, Pim de Haan

Standard imitation learning can fail when the expert demonstrators have different sensory inputs than the imitating agent.

1 code implementation • 1 Jul 2022 • Mathis Gerdes, Pim de Haan, Corrado Rainone, Roberto Bondesan, Miranda C. N. Cheng

We propose a novel machine learning method for sampling from the high-dimensional probability distributions of Lattice Field Theories, which is based on a single neural ODE layer and incorporates the full symmetries of the problem.
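A single neural ODE layer defines a continuous normalizing flow, whose log-density evolves by the instantaneous change of variables d(log p)/dt = -tr(dF/dz). The toy sketch below integrates linear dynamics, where the accumulated log-determinant is known in closed form; it is a minimal illustration of the mechanism, not the equivariant flow of the paper.

```python
import numpy as np

def integrate_cnf(z0, A, T=1.0, n_steps=100):
    """Euler-integrate the linear ODE dz/dt = A z together with the
    instantaneous change of variables d(log p)/dt = -tr(dF/dz),
    which for linear dynamics is the constant -tr(A)."""
    dt = T / n_steps
    z, delta_logp = z0.copy(), 0.0
    for _ in range(n_steps):
        z = z + dt * (A @ z)            # Euler step of the dynamics
        delta_logp -= dt * np.trace(A)  # trace of the Jacobian is tr(A)
    return z, delta_logp
```

Because the trace term is constant here, the accumulated log-density change equals -T * tr(A) exactly, which gives a simple correctness check for the integrator.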

no code implementations • 30 Mar 2022 • Johann Brehmer, Pim de Haan, Phillip Lippe, Taco Cohen

Learning high-level causal representations together with a causal model from unstructured low-level data such as pixels is impossible from observational data alone.

no code implementations • 6 Oct 2021 • Pim de Haan, Corrado Rainone, Miranda C. N. Cheng, Roberto Bondesan

We propose a continuous normalizing flow for sampling from the high-dimensional probability distributions of Quantum Field Theories in Physics.

1 code implementation • 10 Sep 2021 • Julian Suk, Pim de Haan, Phillip Lippe, Christoph Brune, Jelmer M. Wolterink

In this work, we propose to instead use mesh convolutional neural networks that directly operate on the same finite-element surface mesh as used in CFD.
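To make "operating directly on the surface mesh" concrete, the sketch below builds a vertex adjacency matrix from a triangle list and applies one isotropic convolution step (self term plus degree-normalized neighbour average). It is an illustrative stand-in under assumed scalar weights, not the anisotropic mesh convolution of the paper.

```python
import numpy as np

def mesh_adjacency(n_verts, faces):
    """Vertex adjacency matrix built from a triangle list, a simple
    stand-in for the connectivity a mesh CNN consumes."""
    A = np.zeros((n_verts, n_verts))
    for i, j, k in faces:
        for a, b in [(i, j), (j, k), (k, i)]:
            A[a, b] = A[b, a] = 1.0
    return A

def mesh_conv(A, feats, w_self, w_neigh):
    """One isotropic convolution step on mesh vertices:
    self term plus degree-normalized neighbour average."""
    deg = A.sum(axis=1, keepdims=True)
    return w_self * feats + w_neigh * (A @ feats) / np.maximum(deg, 1.0)
```

Because the operation is defined per vertex and per edge, it transfers across meshes with different vertex counts, which is what makes such networks usable as CFD surrogates.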

no code implementations • NeurIPS 2020 • Pim de Haan, Taco Cohen, Max Welling

A key requirement for graph neural networks is that they must process a graph in a way that does not depend on how the graph is described.
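The independence requirement can be demonstrated with permutation invariance, the simplest case of "not depending on how the graph is described": a sum-aggregation message-passing step followed by a sum-pooled readout gives the same output under any relabeling of the nodes. This is a generic sketch of that property, not the construction studied in the paper.

```python
import numpy as np

def gnn_readout(adj, feats, W):
    """One round of sum-aggregation message passing followed by a
    sum-pooled graph readout.  Both steps commute with node
    permutations, so the result is independent of node ordering."""
    messages = adj @ feats @ W   # aggregate neighbour features
    hidden = np.tanh(messages)   # elementwise nonlinearity
    return hidden.sum(axis=0)    # order-independent pooling
```

Relabeling the nodes with a permutation matrix P maps adj to P adj P^T and feats to P feats, and the readout is unchanged.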

1 code implementation • ICLR 2021 • Pim de Haan, Maurice Weiler, Taco Cohen, Max Welling

A common approach to define convolutions on meshes is to interpret them as a graph and apply graph convolutional networks (GCNs).

no code implementations • 25 Sep 2019 • Berkay Kicanaoglu, Pim de Haan, Taco Cohen

Spherical CNNs are convolutional neural networks that can process signals on the sphere, such as global climate and weather patterns or omnidirectional images.

no code implementations • 6 Jun 2019 • Miranda C. N. Cheng, Vassilis Anagiannis, Maurice Weiler, Pim de Haan, Taco S. Cohen, Max Welling

In these proceedings we give an overview of the idea of covariance (or equivariance) featured in the recent development of convolutional neural networks (CNNs).

2 code implementations • NeurIPS 2019 • Pim de Haan, Dinesh Jayaraman, Sergey Levine

Such discriminative models are non-causal: the training procedure is unaware of the causal structure of the interaction between the expert and the environment.

1 code implementation • 7 Mar 2019 • Luca Falorsi, Pim de Haan, Tim R. Davidson, Patrick Forré

Unfortunately, this research has primarily focused on distributions defined in Euclidean space, ruling out the usage of one of the most influential classes of spaces with non-trivial topologies: Lie groups.

no code implementations • 27 Dec 2018 • Pim de Haan, Luca Falorsi

When doing representation learning on data that lives on a known non-trivial manifold embedded in high dimensional space, it is natural to desire the encoder to be homeomorphic when restricted to the manifold, so that it is bijective and continuous with a continuous inverse.
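The obstruction is easy to exhibit on the circle S^1, a non-trivial manifold embedded in R^2: encoding a point by its angle in (-pi, pi] tears the circle open, so two nearby points can receive far-apart codes. The sketch below demonstrates this failure of continuity; it is an illustration of the general phenomenon, not an experiment from the paper.

```python
import numpy as np

def angle_encoder(xy):
    """Encode a point on the unit circle S^1 by its angle in (-pi, pi].
    This map is NOT continuous on all of S^1: it tears the circle
    open at (-1, 0)."""
    return np.arctan2(xy[1], xy[0])

# Two points straddling the tear at angle pi
p_above = np.array([np.cos(np.pi - 1e-4), np.sin(np.pi - 1e-4)])
p_below = np.array([np.cos(np.pi + 1e-4), np.sin(np.pi + 1e-4)])

chord = np.linalg.norm(p_above - p_below)  # tiny distance on the circle
code_gap = abs(angle_encoder(p_above) - angle_encoder(p_below))  # near 2*pi
```

A homeomorphic encoder restricted to S^1 cannot have codes in R at all; it must target a latent space with matching topology.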

1 code implementation • 12 Jul 2018 • Luca Falorsi, Pim de Haan, Tim R. Davidson, Nicola De Cao, Maurice Weiler, Patrick Forré, Taco S. Cohen

Our experiments show that choosing manifold-valued latent variables that match the topology of the latent data manifold is crucial to preserve the topological structure and learn a well-behaved latent space.
