Search Results for author: Maurice Weiler

Found 17 papers, 11 papers with code

Clifford-Steerable Convolutional Neural Networks

1 code implementation · 22 Feb 2024 · Maksim Zhdanov, David Ruhe, Maurice Weiler, Ana Lucic, Johannes Brandstetter, Patrick Forré

We present Clifford-Steerable Convolutional Neural Networks (CS-CNNs), a novel class of $\mathrm{E}(p, q)$-equivariant CNNs.

Hyperbolic Convolutional Neural Networks

no code implementations · 29 Aug 2023 · Andrii Skliar, Maurice Weiler

However, no prior work has proposed a general approach to applying Hyperbolic Convolutional Neural Networks to structured data, even though structured data is among the most common kinds encountered in practice.

Tasks: Explainable Models, Image Classification, +1
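For context, hyperbolic networks typically embed features in the Poincaré ball model of hyperbolic space. A minimal sketch of its geodesic distance (the textbook formula, not code from this paper):

```python
import numpy as np

def poincare_dist(u, v):
    """Geodesic distance in the Poincare ball model of hyperbolic space
    (standard formula; illustrative, not this paper's method)."""
    uu, vv = np.dot(u, u), np.dot(v, v)   # squared norms, must be < 1
    duv = np.dot(u - v, u - v)
    return np.arccosh(1.0 + 2.0 * duv / ((1.0 - uu) * (1.0 - vv)))

u = np.array([0.1, 0.2])
v = np.array([-0.3, 0.4])
assert np.isclose(poincare_dist(u, v), poincare_dist(v, u))  # symmetric
assert np.isclose(poincare_dist(u, u), 0.0)                  # zero self-distance
```

Distances blow up near the boundary of the ball, which is what lets hyperbolic embeddings represent tree-like, hierarchical data with low distortion.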

A Program to Build E(N)-Equivariant Steerable CNNs

no code implementations · ICLR 2022 · Gabriele Cesa, Leon Lang, Maurice Weiler

This enables us not only to parameterize filters directly in terms of a band-limited basis on the base space, but also to easily implement steerable CNNs equivariant to a large number of groups.
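The role of such a basis can be illustrated with circular harmonics e^{imθ}: rotating a frequency-m harmonic only multiplies it by a phase, so rotated filters stay in the span of the basis, which is the essence of steerability. A generic sketch (an illustration, not the paper's implementation):

```python
import numpy as np

m, alpha = 2, 0.7  # angular frequency and rotation angle
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)

psi = np.exp(1j * m * theta)                 # circular harmonic of frequency m
rotated = np.exp(1j * m * (theta - alpha))   # the same filter, rotated by alpha

# Steerability: rotation acts by a complex phase, so the rotated filter
# is still a scalar multiple of the original basis function.
assert np.allclose(rotated, np.exp(-1j * m * alpha) * psi)
```

Band-limiting means keeping only frequencies |m| up to some cutoff, which makes the filter basis finite-dimensional.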

Steerable Partial Differential Operators for Equivariant Neural Networks

4 code implementations · ICLR 2022 · Erik Jenner, Maurice Weiler

In deep learning, however, these maps are usually defined by convolutions with a kernel, whereas they are partial differential operators (PDOs) in physics.

Coordinate Independent Convolutional Networks -- Isometry and Gauge Equivariant Convolutions on Riemannian Manifolds

1 code implementation · 10 Jun 2021 · Maurice Weiler, Patrick Forré, Erik Verlinde, Max Welling

We argue that the particular choice of coordinatization should not affect a network's inference -- it should be coordinate independent.

A Wigner-Eckart Theorem for Group Equivariant Convolution Kernels

no code implementations · ICLR 2021 · Leon Lang, Maurice Weiler

Group equivariant convolutional networks (GCNNs) endow classical convolutional networks with additional symmetry priors, which can lead to a considerably improved performance.
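A GCNN's symmetry prior can be made concrete with the smallest rotation group, C4 (rotations by multiples of 90°). The sketch below (a generic illustration on a periodic grid, not code from the paper) builds a lifting correlation with one output channel per kernel rotation and checks its equivariance: rotating the input rotates each feature map and cyclically permutes the channels.

```python
import numpy as np

def rot(a):
    """Rotate a periodic N x N grid signal by 90 degrees about the origin:
    (rho a)[i, j] = a[j, -i mod N]."""
    n = a.shape[0]
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    return a[j, (-i) % n]

def corr(a, b):
    """Circular cross-correlation: out[p] = sum_q a[p + q] * b[q]."""
    n = a.shape[0]
    out = np.zeros((n, n))
    for u in range(n):
        for v in range(n):
            out += b[u, v] * np.roll(a, (-u, -v), axis=(0, 1))
    return out

def lift(x, k):
    """Lifting correlation of a GCNN: one channel per rotation of the kernel."""
    ks = [k]
    for _ in range(3):
        ks.append(rot(ks[-1]))
    return [corr(x, kr) for kr in ks]

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
k = rng.standard_normal((8, 8))

f = lift(x, k)
f_rot = lift(rot(x), k)

# Equivariance: rotating the input rotates each feature map and
# cyclically permutes the four rotation channels.
for r in range(4):
    assert np.allclose(f_rot[r], rot(f[(r - 1) % 4]))
```

This permutation action on channels is exactly the "regular representation" structure that a subsequent group-convolution layer must respect.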

Gauge Equivariant Mesh CNNs: Anisotropic convolutions on geometric graphs

1 code implementation · ICLR 2021 · Pim de Haan, Maurice Weiler, Taco Cohen, Max Welling

A common approach to define convolutions on meshes is to interpret them as a graph and apply graph convolutional networks (GCNs).
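For reference, that isotropic graph-convolution baseline can be sketched in a few lines (a standard Kipf-and-Welling-style layer on an illustrative toy graph; the names are my own, not the paper's):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One isotropic graph-convolution layer:
    H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W).
    Every neighbor is aggregated with the same weight, which is the
    limitation that anisotropic mesh convolutions aim to remove."""
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))    # symmetric normalization
    return np.maximum(0.0, D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W)

# Toy vertex graph of a small mesh: 4 vertices, symmetric adjacency.
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
H = np.eye(4)            # one-hot input features per vertex
W = np.ones((4, 2))      # toy weight matrix
out = gcn_layer(A, H, W)
assert out.shape == (4, 2)
```

Because the aggregation is a plain (normalized) sum over neighbors, the layer cannot distinguish the directions of incident edges; gauge-equivariant mesh CNNs instead apply direction-dependent kernels expressed in local tangent frames.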

General $E(2)$-Equivariant Steerable CNNs

7 code implementations · NeurIPS 2019 · Maurice Weiler, Gabriele Cesa

Here we give a general description of $E(2)$-equivariant convolutions in the framework of Steerable CNNs.

Tasks: Image Classification, Rotated MNIST

Covariance in Physics and Convolutional Neural Networks

no code implementations · 6 Jun 2019 · Miranda C. N. Cheng, Vassilis Anagiannis, Maurice Weiler, Pim de Haan, Taco S. Cohen, Max Welling

In this proceeding we give an overview of the idea of covariance (or equivariance) featured in the recent development of convolutional neural networks (CNNs).
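The prototypical instance of this idea is the translation equivariance of ordinary convolution, which can be verified directly (a generic sketch, not code from the paper):

```python
import numpy as np

def circ_conv(x, k):
    """Circular 1-D convolution: y[n] = sum_m x[m] * k[(n - m) mod N]."""
    n = len(x)
    return np.array([sum(x[m] * k[(i - m) % n] for m in range(n))
                     for i in range(n)])

x = np.array([1.0, 2.0, 3.0, 0.0, 0.0, 0.0])
k = np.array([1.0, -1.0, 0.5, 0.0, 0.0, 0.0])

# Equivariance: shifting the input then convolving gives the same result
# as convolving then shifting the output.
assert np.allclose(circ_conv(np.roll(x, 2), k), np.roll(circ_conv(x, k), 2))
```

Equivariant CNNs generalize this commutation property from translations to larger symmetry groups such as rotations, reflections, and gauge transformations.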

Gauge Equivariant Convolutional Networks and the Icosahedral CNN

2 code implementations · 11 Feb 2019 · Taco S. Cohen, Maurice Weiler, Berkay Kicanaoglu, Max Welling

The principle of equivariance to symmetry transformations enables a theoretically grounded approach to neural network architecture design.

Task: Semantic Segmentation

A General Theory of Equivariant CNNs on Homogeneous Spaces

no code implementations · NeurIPS 2019 · Taco Cohen, Mario Geiger, Maurice Weiler

Feature maps in these networks represent fields on a homogeneous base space, and layers are equivariant maps between spaces of fields.

Task: General Classification

Explorations in Homeomorphic Variational Auto-Encoding

1 code implementation · 12 Jul 2018 · Luca Falorsi, Pim de Haan, Tim R. Davidson, Nicola De Cao, Maurice Weiler, Patrick Forré, Taco S. Cohen

Our experiments show that choosing manifold-valued latent variables that match the topology of the latent data manifold is crucial to preserving the topological structure and learning a well-behaved latent space.

Intertwiners between Induced Representations (with Applications to the Theory of Equivariant Neural Networks)

1 code implementation · 28 Mar 2018 · Taco S. Cohen, Mario Geiger, Maurice Weiler

In algebraic terms, the feature spaces in regular G-CNNs transform according to a regular representation of the group G, whereas the feature spaces in Steerable G-CNNs transform according to the more general induced representations of G. In order to make the network equivariant, each layer in a G-CNN is required to intertwine between the induced representations associated with its input and output space.
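The intertwining condition can be made concrete for the regular representation of the cyclic group C4, under which the four feature channels shift cyclically: a linear map commutes with this representation exactly when its weight matrix is circulant. A small check (an illustrative sketch, not the paper's construction):

```python
import numpy as np

# Regular representation of the generator of C4: it cyclically
# shifts the four feature channels.
P = np.roll(np.eye(4), 1, axis=0)

# A circulant weight matrix: each column is a cyclic shift of the first.
w = np.array([1.0, 2.0, 3.0, 4.0])
W = np.column_stack([np.roll(w, s) for s in range(4)])

# Intertwining condition rho(g) W = W rho(g) for the generator;
# since the generator generates C4, it then holds for the whole group.
assert np.allclose(P @ W, W @ P)
```

Constraining layers to such intertwiners is what makes the network equivariant; for cyclic groups the constraint reduces linear layers to (channel-wise) circular convolutions.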
