Search Results for author: David W. Romero

Found 14 papers, 10 papers with code

Self-Supervised Detection of Perfect and Partial Input-Dependent Symmetries

1 code implementation19 Dec 2023 Alonso Urbano, David W. Romero

Group equivariance ensures consistent responses to group transformations of the input, leading to more robust models and enhanced generalization capabilities.
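
As a hedged illustration of the equivariance property described in this abstract (not the paper's own method), the sketch below checks that circular convolution with a 90°-rotation-symmetric kernel responds consistently to rotations of the input: filtering a rotated image equals rotating the filtered image. The kernel, image size, and use of scipy are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import convolve

# A kernel that is symmetric under 90-degree rotations about its center.
kernel = np.array([[0., 1., 0.],
                   [1., 4., 1.],
                   [0., 1., 0.]])

def f(x):
    # Circular (periodic) convolution, so the 90-degree rotation is an exact
    # symmetry of the grid and equivariance holds exactly.
    return convolve(x, kernel, mode='wrap')

x = np.random.rand(8, 8)
lhs = f(np.rot90(x))          # rotate the input, then filter
rhs = np.rot90(f(x))          # filter, then rotate the output
print(np.allclose(lhs, rhs))  # True: consistent response to the transformation
```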

Learned Gridification for Efficient Point Cloud Processing

no code implementations22 Jul 2023 Putri A. van der Linden, David W. Romero, Erik J. Bekkers

As a result, operations that rely on neighborhood information scale much worse for point clouds than for grid data, especially for large inputs and large neighborhoods.
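
A toy illustration of this scaling gap (not the paper's gridification method): a naive radius search on an unstructured point cloud touches every point per query, roughly O(N²) overall, while a grid neighborhood is plain index arithmetic. The sizes and radius below are arbitrary assumptions.

```python
import numpy as np

N = 1024
points = np.random.rand(N, 3)   # unstructured point cloud
radius = 0.1

# Naive radius search: every query scans all N points -> O(N^2) overall.
dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
cloud_neighbors = [np.flatnonzero(row < radius) for row in dists]

# On a regular grid the same neighborhood is constant-time index arithmetic.
grid = np.random.rand(16, 16, 16)
i, j, k = 8, 8, 8
grid_neighborhood = grid[i-1:i+2, j-1:j+2, k-1:k+2]
```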

DNArch: Learning Convolutional Neural Architectures by Backpropagation

no code implementations10 Feb 2023 David W. Romero, Neil Zeghidour

We present Differentiable Neural Architectures (DNArch), a method that jointly learns the weights and the architecture of Convolutional Neural Networks (CNNs) by backpropagation.

Modelling Long Range Dependencies in $N$D: From Task-Specific to a General Purpose CNN

1 code implementation25 Jan 2023 David M. Knigge, David W. Romero, Albert Gu, Efstratios Gavves, Erik J. Bekkers, Jakub M. Tomczak, Mark Hoogendoorn, Jan-Jakob Sonke

Performant Convolutional Neural Network (CNN) architectures must be tailored to specific tasks in order to consider the length, resolution, and dimensionality of the input data.

Towards a General Purpose CNN for Long Range Dependencies in $N$D

1 code implementation7 Jun 2022 David W. Romero, David M. Knigge, Albert Gu, Erik J. Bekkers, Efstratios Gavves, Jakub M. Tomczak, Mark Hoogendoorn

The use of Convolutional Neural Networks (CNNs) is widespread in Deep Learning due to a range of desirable model properties which result in an efficient and effective machine learning framework.

Relaxing Equivariance Constraints with Non-stationary Continuous Filters

no code implementations14 Apr 2022 Tycho F. A. van der Ouderaa, David W. Romero, Mark van der Wilk

Equivariances provide useful inductive biases in neural network modeling, with the translation equivariance of convolutional neural networks being a canonical example.

Image Classification

Exploiting Redundancy: Separable Group Convolutional Networks on Lie Groups

1 code implementation25 Oct 2021 David M. Knigge, David W. Romero, Erik J. Bekkers

In addition, thanks to the increase in computational efficiency, we are able to implement G-CNNs equivariant to the $\mathrm{Sim(2)}$ group; the group of dilations, rotations and translations.

Computational Efficiency Rotated MNIST

Learning Partial Equivariances from Data

1 code implementation19 Oct 2021 David W. Romero, Suhas Lohit

Frequently, transformations occurring in data can be better represented by a subset of a group than by a group as a whole, e.g., rotations in $[-90^{\circ}, 90^{\circ}]$ (a sketch of this idea follows this entry).

Image Classification Rotated MNIST
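
A minimal sketch of the "subset of a group" idea from the entry above: it averages a toy model's response over rotations restricted to $[-90^{\circ}, 90^{\circ}]$ rather than over the full rotation group. This only illustrates partial symmetry; it is not the paper's mechanism for learning partial equivariances, and the sampling scheme and toy model are assumptions.

```python
import numpy as np
from scipy.ndimage import rotate

def restricted_rotation_average(f, x, max_angle=90.0, n_samples=8):
    # Average the response of f over rotations drawn only from
    # [-max_angle, max_angle], i.e. a subset of the rotation group,
    # instead of symmetrizing over the whole group.
    angles = np.linspace(-max_angle, max_angle, n_samples)
    outs = [f(rotate(x, a, reshape=False, mode='nearest')) for a in angles]
    return np.mean(outs, axis=0)

# Toy "model": global average intensity of the image.
f = lambda img: img.mean()
x = np.random.rand(16, 16)
print(restricted_rotation_average(f, x))
```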

Group Equivariant Stand-Alone Self-Attention For Vision

1 code implementation ICLR 2021 David W. Romero, Jean-Baptiste Cordonnier

We provide a general self-attention formulation to impose group equivariance to arbitrary symmetry groups.

Wavelet Networks: Scale-Translation Equivariant Learning From Raw Time-Series

1 code implementation9 Jun 2020 David W. Romero, Erik J. Bekkers, Jakub M. Tomczak, Mark Hoogendoorn

In this work, we fill this gap by leveraging the symmetries inherent to time-series for the construction of equivariant neural networks.

Descriptive Time Series +2

Attentive Group Equivariant Convolutional Networks

1 code implementation ICML 2020 David W. Romero, Erik J. Bekkers, Jakub M. Tomczak, Mark Hoogendoorn

Although group convolutional networks are able to learn powerful representations based on symmetry patterns, they lack explicit means to learn meaningful relationships among them (e.g., relative positions and poses).

Co-Attentive Equivariant Neural Networks: Focusing Equivariance On Transformations Co-Occurring In Data

no code implementations ICLR 2020 David W. Romero, Mark Hoogendoorn

Equivariance is a desirable property, as it produces much more parameter-efficient neural architectures and preserves the structure of the input through the feature mapping.

Object Recognition Rotated MNIST
