Rotated MNIST

13 papers with code • 1 benchmark • 2 datasets

Rotated MNIST is a variant of MNIST in which the handwritten digits are rotated. It is commonly used to evaluate rotation-invariant or rotation-equivariant models, and, with rotation angles treated as domains, to benchmark domain generalization methods.

Most implemented papers

DIVA: Domain Invariant Variational Autoencoders

AMLab-Amsterdam/DIVA 24 May 2019

We consider the problem of domain generalization, namely, how to learn representations given data from a set of domains that generalize to data from a previously unseen domain.

PDO-eConvs: Partial Differential Operator Based Equivariant Convolutions

shenzy08/PDO-eConvs ICML 2020

In implementation, we discretize the system using the numerical schemes of PDOs, deriving approximately equivariant convolutions (PDO-eConvs).

Deep Rotation Equivariant Network

microljy/DREN_Tensorflow 24 May 2017

Recently, learning equivariant representations has attracted considerable research attention.

Equivariance-bridged SO(2)-Invariant Representation Learning using Graph Convolutional Network

deepshwang/swn_gcn 18 Jun 2021

Training a Convolutional Neural Network (CNN) to be robust against rotation has mostly been done with data augmentation.
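The augmentation baseline mentioned above can be sketched in a few lines. This is a minimal, framework-free illustration using plain 2D lists and exact 90-degree rotations; a real pipeline would instead apply random continuous-angle rotations to tensors (e.g. via torchvision's rotation transforms).

```python
# Minimal sketch of rotation augmentation for a Rotated-MNIST-style setup.
# Images are plain square 2D lists here; only exact 90-degree rotations are
# used so no interpolation is needed.

def rot90(img):
    """Rotate a square 2D list 90 degrees counter-clockwise."""
    n = len(img)
    return [[img[c][n - 1 - r] for c in range(n)] for r in range(n)]

def augment_with_rotations(dataset):
    """Expand (image, label) pairs with all four 90-degree rotations."""
    out = []
    for img, label in dataset:
        for _ in range(4):
            out.append((img, label))  # 0, 90, 180, 270 degrees in turn
            img = rot90(img)
    return out
```

The point of the papers on this page is that such augmentation only encourages robustness statistically, whereas equivariant architectures guarantee it by construction.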

Group Equivariant Convolutional Networks

adambielski/pytorch-gconv-experiments 24 Feb 2016

We introduce Group equivariant Convolutional Neural Networks (G-CNNs), a natural generalization of convolutional neural networks that reduces sample complexity by exploiting symmetries.
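The weight sharing behind G-CNNs can be demonstrated with a toy "lifting" layer for the group p4 (translations plus 90-degree rotations): one filter is applied in all four orientations, producing a stack of orientation channels. This is a pure-Python sketch of the idea, not the authors' implementation (which the repo above provides for GPU tensors); rotating the input rotates each output map and cyclically permutes the orientation channels, which is exactly the equivariance property.

```python
# Toy p4 lifting correlation: one filter, shared across four 90-degree
# orientations. This weight sharing is what lets G-CNNs cut sample complexity.

def rot90(a):
    """Rotate a square 2D list 90 degrees counter-clockwise."""
    n = len(a)
    return [[a[c][n - 1 - r] for c in range(n)] for r in range(n)]

def corr2d(x, f):
    """Valid 2D cross-correlation of square lists x (n x n) and f (k x k)."""
    n, k = len(x), len(f)
    m = n - k + 1
    return [[sum(x[r + i][c + j] * f[i][j] for i in range(k) for j in range(k))
             for c in range(m)] for r in range(m)]

def p4_lift(x, f):
    """Correlate x with the four rotations of filter f (a Z2 -> p4 lifting layer)."""
    out = []
    for _ in range(4):
        out.append(corr2d(x, f))
        f = rot90(f)
    return out
```

Rotating the input gives `p4_lift(rot90(x), f)[i] == rot90(p4_lift(x, f)[(i - 1) % 4])`: the feature maps rotate and the orientation axis shifts by one, so no information about the rotated digit is lost.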

Polar Transformer Networks

daniilidis-group/polar-transformer-networks ICLR 2018

The result is a network invariant to translation and equivariant to both rotation and scale.
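The key observation exploited here is that resampling an image on a polar grid turns a global rotation about the grid center into a cyclic shift along the angle axis, so an ordinary translation-equivariant CNN applied to the polar image becomes rotation-equivariant. The following is a deliberately simplified demonstration (not the paper's differentiable sampler): it uses only 90-degree angles and integer radii so the property holds exactly without interpolation.

```python
# Toy polar resampling: a rotation of the input becomes a cyclic row shift
# in (angle, radius) coordinates. Restricted to 90-degree angles for exactness.

COS = [1, 0, -1, 0]   # cos of 0, 90, 180, 270 degrees
SIN = [0, 1, 0, -1]   # sin of the same angles

def rot90(img):
    """Rotate a square 2D list 90 degrees counter-clockwise."""
    n = len(img)
    return [[img[c][n - 1 - r] for c in range(n)] for r in range(n)]

def to_polar(img):
    """Sample an odd-sized square image on an (angle, radius) grid about its center."""
    m = len(img) // 2  # center index
    return [[img[m - r * SIN[a]][m + r * COS[a]] for r in range(m + 1)]
            for a in range(4)]
```

After the transform, `to_polar(rot90(img))[a] == to_polar(img)[(a - 1) % 4]`: the rotation has become a translation, which standard convolutions already handle.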

CapsGAN: Using Dynamic Routing for Generative Adversarial Networks

raeidsaqur/CapsGAN 7 Jun 2018

We show that CapsGAN performs at least as well as traditional CNN-based GANs at generating images under strong geometric transformations, using Rotated MNIST.

General E(2)-Equivariant Steerable CNNs

QUVA-Lab/e2cnn NeurIPS 2019

Here we give a general description of E(2)-equivariant convolutions in the framework of Steerable CNNs.

Efficient Domain Generalization via Common-Specific Low-Rank Decomposition

vihari/csd ICML 2020

The domain specific components are discarded after training and only the common component is retained.