In deep learning, such equivariant maps between fields are usually defined by convolutions with a kernel, whereas in physics they are partial differential operators (PDOs).
We argue that the particular choice of coordinatization should not affect a network's inference -- it should be coordinate independent.
Group equivariant convolutional networks (GCNNs) endow classical convolutional networks with additional symmetry priors, which can lead to considerably improved performance.
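To make this symmetry prior concrete, here is a minimal PyTorch sketch (names and sizes are my own, not taken from any of the cited works) of a lifting convolution that builds in equivariance to 90-degree rotations by convolving the input with four rotated copies of a single learned kernel; rotating the input then, up to boundary effects, rotates the output feature maps spatially and cyclically permutes the new group axis.

```python
import torch
import torch.nn.functional as F

class C4LiftingConv(torch.nn.Module):
    """Toy lifting convolution for the C4 rotation group (illustrative only)."""

    def __init__(self, in_channels, out_channels, kernel_size):
        super().__init__()
        # One learned kernel; its rotated copies are generated on the fly.
        self.weight = torch.nn.Parameter(
            torch.randn(out_channels, in_channels, kernel_size, kernel_size) * 0.1
        )

    def forward(self, x):  # x: (B, C_in, H, W)
        outs = []
        for k in range(4):
            # Rotate the kernel by k * 90 degrees in the spatial plane.
            w = torch.rot90(self.weight, k, dims=(-2, -1))
            outs.append(F.conv2d(x, w, padding=self.weight.shape[-1] // 2))
        # Stack along a new group axis: (B, C_out, 4, H, W).
        return torch.stack(outs, dim=2)
```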
In these proceedings we give an overview of the idea of covariance (or equivariance) as featured in recent developments of convolutional neural networks (CNNs).
The principle of equivariance to symmetry transformations enables a theoretically grounded approach to neural network architecture design.
Feature maps in these networks represent fields on a homogeneous base space, and layers are equivariant maps between spaces of fields.
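For the common special case of fields on $\mathbb{R}^n$ (a sketch of the general homogeneous-space picture, in my own notation), a feature field is a map $f: \mathbb{R}^n \to \mathbb{R}^c$ together with a representation $\rho$ of a subgroup $H$ of $\mathrm{GL}(n)$ that specifies how its $c$ channels mix under transformations; an element $g = (t, h)$ of $\mathbb{R}^n \rtimes H$ then acts on the field by

$$[\pi(g) f](x) \;=\; \rho(h)\, f\!\left(h^{-1}(x - t)\right).$$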
Our experiments show that choosing manifold-valued latent variables that match the topology of the latent data manifold is crucial to preserve the topological structure and learn a well-behaved latent space.
We prove that equivariant convolutions are the most general equivariant linear maps between fields over R^3.
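In equation form (my notation, paraphrasing the standard steerable-kernel characterization): a linear map between fields over $\mathbb{R}^3$ that is equivariant to translations and rotations must be a convolution $[\kappa \star f](x) = \int_{\mathbb{R}^3} \kappa(x - y)\, f(y)\, \mathrm{d}y$ whose matrix-valued kernel satisfies the steerability constraint

$$\kappa(r x) \;=\; \rho_{\mathrm{out}}(r)\,\kappa(x)\,\rho_{\mathrm{in}}(r)^{-1} \qquad \text{for all rotations } r \text{ in the chosen group (e.g. } \mathrm{SO}(3)\text{)},\ x \in \mathbb{R}^3.$$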
In algebraic terms, the feature spaces in regular G-CNNs transform according to a regular representation of the group G, whereas the feature spaces in Steerable G-CNNs transform according to the more general induced representations of G. In order to make the network equivariant, each layer in a G-CNN is required to intertwine the induced representations associated with its input and output space.
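Spelled out, with $\pi_{\mathrm{in}}$ and $\pi_{\mathrm{out}}$ denoting the (regular or induced) representations acting on a layer's input and output feature spaces, the intertwining requirement on a layer $\Phi$ reads

$$\Phi \circ \pi_{\mathrm{in}}(g) \;=\; \pi_{\mathrm{out}}(g) \circ \Phi \qquad \text{for all } g \in G.$$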
In many machine learning tasks it is desirable that a model's prediction transforms in an equivariant way under transformations of its input.
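As a minimal, self-contained illustration of this property (translation equivariance of an ordinary convolution, checked numerically; all names here are illustrative, not from any of the cited works):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
x = torch.randn(1, 1, 8, 8)   # input "image"
w = torch.randn(1, 1, 3, 3)   # convolution kernel

def conv(z):
    # 3x3 cross-correlation with circular padding, so translations wrap around.
    return F.conv2d(F.pad(z, (1, 1, 1, 1), mode="circular"), w)

def shift(z):
    # Translate the image two pixels to the right (cyclically).
    return torch.roll(z, shifts=2, dims=-1)

# Equivariance: transforming the input and then applying the layer
# gives the same result as applying the layer and then transforming.
print(torch.allclose(conv(shift(x)), shift(conv(x)), atol=1e-5))  # True
```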