Search Results for author: Pim de Haan

Found 18 papers, 10 papers with code

FoMo Rewards: Can we cast foundation models as reward functions?

no code implementations · 6 Dec 2023 · Ekdeep Singh Lubana, Johann Brehmer, Pim de Haan, Taco Cohen

We explore the viability of casting foundation models as generic reward functions for reinforcement learning.

Language Modelling · Large Language Model

Euclidean, Projective, Conformal: Choosing a Geometric Algebra for Equivariant Transformers

1 code implementation · 8 Nov 2023 · Pim de Haan, Taco Cohen, Johann Brehmer

The Geometric Algebra Transformer (GATr) is a versatile architecture for geometric deep learning based on projective geometric algebra.

Geometric Algebra Transformer

1 code implementation · NeurIPS 2023 · Johann Brehmer, Pim de Haan, Sönke Behrends, Taco Cohen

In this paper we introduce the Geometric Algebra Transformer (GATr), a general-purpose architecture for geometric data.

Motion Planning

Rigid Body Flows for Sampling Molecular Crystal Structures

1 code implementation · 26 Jan 2023 · Jonas Köhler, Michele Invernizzi, Pim de Haan, Frank Noé

Normalizing flows (NF) are a class of powerful generative models that have gained popularity in recent years due to their ability to model complex distributions with high flexibility and expressiveness.

Variational Inference
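The change-of-variables principle behind normalizing flows can be illustrated with a single invertible affine layer. This is a toy sketch of the general NF idea only, not the rigid-body flow from the paper; the parameters `a` and `b` are arbitrary stand-ins for what a real flow would learn.

```python
import numpy as np

# Toy normalizing flow: one invertible affine layer z -> x = a*z + b.
# The model density follows from the change-of-variables formula:
#   log p_x(x) = log p_z(f^{-1}(x)) - log|det df/dz|
a, b = 2.0, 1.0  # illustrative parameters; a trained flow learns these

def forward(z):
    return a * z + b

def log_prob(x):
    z = (x - b) / a                              # invert the flow
    log_pz = -0.5 * (z**2 + np.log(2 * np.pi))   # standard normal base density
    log_det = np.log(np.abs(a))                  # log-Jacobian of the affine map
    return log_pz - log_det

samples = forward(np.random.randn(10_000))       # sampling: push base noise forward
```

Stacking many such invertible layers (with learned, nonlinear parameterizations) is what gives flows their flexibility while keeping exact likelihoods.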

Mesh Neural Networks for SE(3)-Equivariant Hemodynamics Estimation on the Artery Wall

1 code implementation · 9 Dec 2022 · Julian Suk, Pim de Haan, Phillip Lippe, Christoph Brune, Jelmer M. Wolterink

Computational fluid dynamics (CFD) is a valuable asset for patient-specific cardiovascular-disease diagnosis and prognosis, but its high computational demands hamper its adoption in practice.

Deconfounded Imitation Learning

no code implementations · 4 Nov 2022 · Risto Vuorio, Johann Brehmer, Hanno Ackermann, Daniel Dijkman, Taco Cohen, Pim de Haan

Standard imitation learning can fail when the expert demonstrators have different sensory inputs than the imitating agent.

Imitation Learning

Learning Lattice Quantum Field Theories with Equivariant Continuous Flows

1 code implementation · 1 Jul 2022 · Mathis Gerdes, Pim de Haan, Corrado Rainone, Roberto Bondesan, Miranda C. N. Cheng

We propose a novel machine learning method for sampling from the high-dimensional probability distributions of Lattice Field Theories, which is based on a single neural ODE layer and incorporates the full symmetries of the problem.

BIG-bench Machine Learning
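The continuous-flow idea behind this line of work can be sketched in one dimension: integrate an ODE dz/dt = f(z) forward while accumulating the divergence of f, which gives the change in log-density (the instantaneous change-of-variables formula). This is a generic neural-ODE sketch with a hand-picked linear vector field, not the equivariant field or lattice setup of the paper.

```python
import numpy as np

# 1-D continuous normalizing flow: Euler-integrate dz/dt = f(z) and track
# d(log p)/dt = -div f along the trajectory.
THETA = 0.5  # illustrative coefficient of the toy linear vector field

def f(z):
    return THETA * z          # df/dz = THETA, so div f = THETA everywhere

def flow(z0, steps=1000, t1=1.0):
    dt = t1 / steps
    z, delta_logp = z0, 0.0
    for _ in range(steps):
        z = z + dt * f(z)     # Euler step for the state
        delta_logp -= dt * THETA  # accumulate -div f (constant here)
    return z, delta_logp

z1, dlogp = flow(1.0)
# For this linear ODE the exact answers are z1 = exp(THETA*t1)*z0
# and dlogp = -THETA*t1, which the Euler scheme approaches as steps grow.
```

In the papers, `f` is a neural network constrained to respect the symmetries of the field theory, and the integration is done with a differentiable ODE solver.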

Weakly supervised causal representation learning

no code implementations · 30 Mar 2022 · Johann Brehmer, Pim de Haan, Phillip Lippe, Taco Cohen

Learning high-level causal representations together with a causal model from unstructured low-level data such as pixels is impossible from observational data alone.

Representation Learning

Scaling Up Machine Learning For Quantum Field Theory with Equivariant Continuous Flows

no code implementations · 6 Oct 2021 · Pim de Haan, Corrado Rainone, Miranda C. N. Cheng, Roberto Bondesan

We propose a continuous normalizing flow for sampling from the high-dimensional probability distributions of Quantum Field Theories in Physics.

BIG-bench Machine Learning

Mesh convolutional neural networks for wall shear stress estimation in 3D artery models

1 code implementation · 10 Sep 2021 · Julian Suk, Pim de Haan, Phillip Lippe, Christoph Brune, Jelmer M. Wolterink

In this work, we propose to instead use mesh convolutional neural networks that directly operate on the same finite-element surface mesh as used in CFD.

Natural Graph Networks

no code implementations · NeurIPS 2020 · Pim de Haan, Taco Cohen, Max Welling

A key requirement for graph neural networks is that they must process a graph in a way that does not depend on how the graph is described.

Gauge Equivariant Mesh CNNs: Anisotropic convolutions on geometric graphs

1 code implementation · ICLR 2021 · Pim de Haan, Maurice Weiler, Taco Cohen, Max Welling

A common approach to define convolutions on meshes is to interpret them as a graph and apply graph convolutional networks (GCNs).
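The graph-convolution baseline that this paper improves on treats every neighbour of a mesh vertex identically (isotropically). A minimal sketch of such an isotropic GCN layer, on a hypothetical three-vertex triangle mesh; the anisotropic, gauge-equivariant kernels of the paper itself are more involved.

```python
import numpy as np

# Isotropic graph convolution on a mesh interpreted as a plain graph:
# every vertex mixes its own feature with the mean of its neighbours,
# using the same weight for each neighbour regardless of direction.
def gcn_layer(features, adjacency, w_self, w_neigh):
    deg = adjacency.sum(axis=1, keepdims=True)
    neigh_mean = adjacency @ features / np.maximum(deg, 1.0)
    return features @ w_self + neigh_mean @ w_neigh

# Tiny triangle mesh: 3 vertices, all pairwise connected.
adj = np.array([[0, 1, 1],
                [1, 0, 1],
                [1, 1, 0]], dtype=float)
feats = np.eye(3)  # one-hot feature per vertex
out = gcn_layer(feats, adj, w_self=np.eye(3), w_neigh=0.5 * np.eye(3))
```

Because `w_neigh` is shared across all neighbours, this layer cannot distinguish directions on the surface, which is exactly the limitation anisotropic gauge-equivariant kernels address.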

Gauge Equivariant Spherical CNNs

no code implementations · 25 Sep 2019 · Berkay Kicanaoglu, Pim de Haan, Taco Cohen

Spherical CNNs are convolutional neural networks that can process signals on the sphere, such as global climate and weather patterns or omnidirectional images.

Semantic Segmentation

Covariance in Physics and Convolutional Neural Networks

no code implementations · 6 Jun 2019 · Miranda C. N. Cheng, Vassilis Anagiannis, Maurice Weiler, Pim de Haan, Taco S. Cohen, Max Welling

In these proceedings we give an overview of the idea of covariance (or equivariance) featured in the recent development of convolutional neural networks (CNNs).

Causal Confusion in Imitation Learning

2 code implementations · NeurIPS 2019 · Pim de Haan, Dinesh Jayaraman, Sergey Levine

Such discriminative models are non-causal: the training procedure is unaware of the causal structure of the interaction between the expert and the environment.

Imitation Learning

Reparameterizing Distributions on Lie Groups

1 code implementation · 7 Mar 2019 · Luca Falorsi, Pim de Haan, Tim R. Davidson, Patrick Forré

Unfortunately, this research has primarily focused on distributions defined in Euclidean space, ruling out the usage of one of the most influential classes of spaces with non-trivial topologies: Lie groups.

Pose Estimation
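The core reparameterization trick on a Lie group can be sketched for SO(3): draw a Gaussian tangent vector in the Lie algebra, then push it onto the group through the exponential map (Rodrigues' formula), so sampling stays a deterministic, differentiable function of the noise. This is an illustrative sketch of that idea, not the paper's full construction (which also handles the resulting density).

```python
import numpy as np

def hat(v):
    # Map R^3 -> so(3): the skew-symmetric matrix with hat(v) @ w = v x w.
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def expm_so3(v):
    # Exponential map so(3) -> SO(3) via Rodrigues' formula.
    theta = np.linalg.norm(v)
    if theta < 1e-8:
        return np.eye(3)
    K = hat(v) / theta
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def sample_rotation(mu_v, sigma, rng):
    eps = rng.standard_normal(3)          # base noise (reparameterization trick)
    return expm_so3(mu_v + sigma * eps)   # deterministic push onto SO(3)

R = sample_rotation(np.zeros(3), 0.1, np.random.default_rng(0))
```

Because the randomness enters only through `eps`, gradients with respect to `mu_v` and `sigma` flow through the exponential map, which is what makes variational inference with group-valued latents tractable.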

Topological Constraints on Homeomorphic Auto-Encoding

no code implementations · 27 Dec 2018 · Pim de Haan, Luca Falorsi

When doing representation learning on data that lives on a known non-trivial manifold embedded in high dimensional space, it is natural to desire the encoder to be homeomorphic when restricted to the manifold, so that it is bijective and continuous with a continuous inverse.

Representation Learning

Explorations in Homeomorphic Variational Auto-Encoding

1 code implementation · 12 Jul 2018 · Luca Falorsi, Pim de Haan, Tim R. Davidson, Nicola De Cao, Maurice Weiler, Patrick Forré, Taco S. Cohen

Our experiments show that choosing manifold-valued latent variables that match the topology of the latent data manifold, is crucial to preserve the topological structure and learn a well-behaved latent space.
