Rotated MNIST
18 papers with code • 1 benchmark • 1 dataset
Latest papers
Artificial Neuronal Ensembles with Learned Context Dependent Gating
Finally, there is a regularization term responsible for ensuring that new tasks are encoded in gates that are as orthogonal as possible to previously used ones.
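One way such a regularizer could look (a hedged sketch, not the paper's exact formulation): penalize the squared dot products between the new task's unit-normalized gate vector and the gates of earlier tasks, which is zero exactly when the new gate is orthogonal to all of them.

```python
import numpy as np

def gate_orthogonality_penalty(new_gate, prev_gates):
    """Penalty encouraging the new task's gate vector to be orthogonal
    to the gates of earlier tasks: sum of squared dot products between
    unit-normalized gates (0 when fully orthogonal)."""
    g = np.asarray(new_gate, dtype=float)
    g = g / np.linalg.norm(g)
    penalty = 0.0
    for p in prev_gates:
        p = np.asarray(p, dtype=float)
        p = p / np.linalg.norm(p)
        penalty += float(np.dot(g, p)) ** 2
    return penalty
```

Adding this term (scaled by a coefficient) to the task loss pushes new gates toward unused, orthogonal directions.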
Learning unfolded networks with a cyclic group structure
Deep neural networks lack straightforward ways to incorporate domain knowledge and are notoriously regarded as black boxes.
Learning Invariant Representations for Equivariant Neural Networks Using Orthogonal Moments
The final classification layer in equivariant neural networks is invariant to affine geometric transformations such as rotation, reflection, and translation. The scalar output is obtained either by eliminating the spatial dimensions of the filter responses through convolution and down-sampling throughout the network, or by averaging over the filter responses.
Exploiting Redundancy: Separable Group Convolutional Networks on Lie Groups
In addition, thanks to the increase in computational efficiency, we are able to implement G-CNNs equivariant to the $\mathrm{Sim(2)}$ group; the group of dilations, rotations and translations.
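The full $\mathrm{Sim(2)}$ group is continuous, but the core lifting step of a G-CNN can be illustrated with its simplest discrete rotation subgroup, $C_4$ (multiples of 90°). A minimal sketch, assuming plain NumPy and a naive loop-based correlation (not the paper's separable implementation): correlate the image with every rotated copy of the filter, yielding one response map per orientation.

```python
import numpy as np

def corr2d_valid(img, w):
    """Plain 'valid' 2-D cross-correlation (no padding)."""
    H, W = img.shape
    k = w.shape[0]
    out = np.zeros((H - k + 1, W - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + k, j:j + k] * w)
    return out

def lift_c4(img, w):
    """Lifting convolution over the C4 rotation group: correlate the
    image with the filter rotated by 0, 90, 180, and 270 degrees,
    producing one response map per orientation."""
    return np.stack([corr2d_valid(img, np.rot90(w, k)) for k in range(4)])
```

Rotating the input by 90° permutes the orientation channels and rotates each response map, so pooled statistics such as the global maximum over positions and orientations are rotation invariant.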
Learning Partial Equivariances from Data
Frequently, the transformations occurring in data are better represented by a subset of a group than by the group as a whole, e.g., rotations in $[-90^{\circ}, 90^{\circ}]$.
Equivariance-bridged SO(2)-Invariant Representation Learning using Graph Convolutional Network
Training a Convolutional Neural Network (CNN) to be robust against rotation has mostly been done with data augmentation.
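The baseline the snippet refers to can be sketched in a few lines. Restricting to exact right-angle rotations keeps the example free of interpolation artifacts; arbitrary angles would additionally require resampling (e.g., torchvision's `RandomRotation`).

```python
import numpy as np

def augment_with_rotations(images, rng):
    """Right-angle rotation augmentation: each image is rotated by a
    random multiple of 90 degrees before being fed to the network."""
    ks = rng.integers(0, 4, size=len(images))
    return [np.rot90(img, k) for img, k in zip(images, ks)]
```

Training on such augmented batches encourages, but does not guarantee, rotation-robust predictions, which is the gap the equivariance-based approaches above target.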
CyCNN: A Rotation Invariant CNN using Polar Mapping and Cylindrical Convolution Layers
Deep Convolutional Neural Networks (CNNs) are empirically known to be invariant to moderate translation but not to rotation in image classification.
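The polar-mapping idea can be demonstrated directly: after resampling a square image onto a (radius, angle) grid, a rotation about the image center becomes a cyclic shift along the angle axis, which a convolution that wraps around that axis ("cylindrical convolution") can absorb. A minimal nearest-neighbour sketch, not the paper's implementation:

```python
import numpy as np

def to_polar(img, n_theta=16):
    """Nearest-neighbour polar resampling of a square, odd-sized image
    about its centre pixel: output[r, t] samples radius r+1 at angle
    2*pi*t/n_theta. Rotating the input about the centre then shows up
    as a cyclic shift along the angle (second) axis."""
    n = img.shape[0]
    c = (n - 1) // 2
    radii = np.arange(1, c + 1)
    thetas = np.arange(n_theta) * (2 * np.pi / n_theta)
    rows = np.rint(c + radii[:, None] * np.sin(thetas[None, :])).astype(int)
    cols = np.rint(c + radii[:, None] * np.cos(thetas[None, :])).astype(int)
    return img[rows, cols]
```

A 90° rotation of the input corresponds to a shift of `n_theta // 4` angle bins, so any operation that is cyclic in that axis sees the same signal.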
PDO-eConvs: Partial Differential Operator Based Equivariant Convolutions
In implementation, we discretize the system using the numerical schemes of PDOs, deriving approximately equivariant convolutions (PDO-eConvs).
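The basic ingredient of such a discretization can be illustrated with the simplest case (this is an illustration of finite-difference stencils generally, not the authors' specific numerical scheme): a partial derivative operator becomes a small convolution stencil on the pixel grid.

```python
import numpy as np

def d_dx_stencil(f, h=1.0):
    """Central-difference approximation of the partial derivative along
    the x (column) axis: (f[:, j+1] - f[:, j-1]) / (2h). Equivalent to a
    1x3 convolution with the stencil [-1/(2h), 0, 1/(2h)]."""
    return (f[:, 2:] - f[:, :-2]) / (2 * h)
```

Stacking such stencils for higher-order derivatives yields filters whose equivariance properties are inherited, approximately, from the underlying differential operators.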
Domain Generalization using Causal Matching
In the domain generalization literature, a common objective is to learn representations independent of the domain after conditioning on the class label.
Efficient Domain Generalization via Common-Specific Low-Rank Decomposition
The domain-specific components are discarded after training and only the common component is retained.
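A toy sketch of the common-specific idea, under simplifying assumptions (the common part estimated as the mean of per-domain weights, residuals truncated by SVD; this is not the paper's training procedure):

```python
import numpy as np

def split_common_specific(domain_weights, rank=1):
    """Toy common-specific decomposition: the common component is the
    mean of the per-domain weight matrices; each domain-specific
    residual is truncated to a low-rank approximation via SVD. At test
    time only the common component would be kept."""
    W = np.stack(domain_weights)
    common = W.mean(axis=0)
    specifics = []
    for Wd in W:
        U, s, Vt = np.linalg.svd(Wd - common, full_matrices=False)
        specifics.append((U[:, :rank] * s[:rank]) @ Vt[:rank])
    return common, specifics
```

Discarding the low-rank residuals removes the domain-specific variation while keeping the shared predictor.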