Rotated MNIST
18 papers with code • 1 benchmark • 1 dataset
Latest papers with no code
Progressive Conservative Adaptation for Evolving Target Domains
Moreover, since adapting to the most recent target domain can interfere with the features learned from previous target domains, we develop a conservative sparse attention mechanism.
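The excerpt gives no details of the mechanism, so the following is only a minimal sketch of one plausible reading: attention restricted to the top-k most relevant stored domains, so that most past-domain features are untouched by an update. The helper name `topk_sparse_attention` and the PyTorch setting are assumptions, not the paper's API.

```python
import torch
import torch.nn.functional as F

def topk_sparse_attention(query, keys, values, k=4):
    """Attend to at most k past-domain feature vectors (schematic).

    query:  (d,)    features from the current target domain
    keys:   (n, d)  stored features from n previous target domains
    values: (n, d)  the corresponding representations
    Keeping only the top-k scores is one way to make adaptation
    'conservative': features of unrelated past domains receive
    exactly zero weight.
    """
    scores = keys @ query / keys.shape[-1] ** 0.5   # (n,) scaled dot products
    k = min(k, scores.shape[0])
    topv, topi = scores.topk(k)                     # keep the k largest scores
    weights = F.softmax(topv, dim=0)                # sparse attention weights
    return weights @ values[topi]                   # (d,) blended representation

# Usage: blend the current query with a few relevant past domains.
out = topk_sparse_attention(torch.randn(64), torch.randn(10, 64), torch.randn(10, 64))
```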
Scale-Rotation-Equivariant Lie Group Convolution Neural Networks (Lie Group-CNNs)
In addition, the rotation-equivariant generalization ability of the Lie group-CNN on SIM(2) is verified on rotated-MNIST and rotated-CIFAR10, and the robustness of the network is verified on SO(2) and SE(2).
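One generic way to verify claims like this empirically is to rotate the test inputs and measure how often the prediction changes. The sketch below assumes a PyTorch classifier and torchvision; it is a plain consistency check, not the paper's evaluation protocol.

```python
import torch
import torchvision.transforms.functional as TF

def rotation_consistency(model, images, angles=(0, 90, 180, 270)):
    """Fraction of images whose predicted class survives rotation.

    images: (N, C, H, W). A perfectly rotation-invariant classifier
    scores 1.0; the gap to 1.0 measures how far the network is from
    the invariance the architecture is meant to enforce.
    """
    model.eval()
    with torch.no_grad():
        base = model(images).argmax(dim=1)
        agree = []
        for a in angles[1:]:
            pred = model(TF.rotate(images, a)).argmax(dim=1)
            agree.append((pred == base).float().mean())
    return torch.stack(agree).mean().item()
```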
Group Invariant Global Pooling
Much work has been devoted to devising architectures that build group-equivariant representations, while invariance is often induced using simple global pooling mechanisms.
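The "simple global pooling" the excerpt refers to is typically a group average: pooling an equivariant representation over the group orbit yields exact invariance. A minimal sketch for the cyclic rotation group C4, assuming a PyTorch feature extractor over (N, C, H, W) batches; the paper's contribution is a richer alternative to this baseline.

```python
import torch

def c4_invariant_pool(feature_fn, x):
    """Group-average pooling over the cyclic rotation group C4.

    feature_fn maps an image batch (N, C, H, W) to features (N, D).
    Averaging features over the orbit {x, rot90(x), rot180(x),
    rot270(x)} gives a representation that is exactly invariant to
    90-degree rotations -- the simplest invariant global pooling.
    """
    feats = [feature_fn(torch.rot90(x, k, dims=(2, 3))) for k in range(4)]
    return torch.stack(feats, dim=0).mean(dim=0)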
Diversity Boosted Learning for Domain Generalization with Large Number of Domains
Machine learning algorithms that minimize the average training loss usually suffer from poor generalization because they greedily exploit correlations in the training data that are not stable under distributional shifts.
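To make the brittleness of the average loss concrete, here is a minimal contrast between the ERM objective and a worst-domain objective; this illustrates the failure mode only and is not the paper's diversity-boosting algorithm.

```python
import torch

def average_loss(losses_per_domain):
    """ERM: minimize the mean loss over domains. A correlation that
    helps on most domains can dominate even if it hurts a minority."""
    return torch.stack(losses_per_domain).mean()

def worst_domain_loss(losses_per_domain):
    """A robust alternative: optimizing the worst-performing domain
    discourages exploiting domain-specific, unstable correlations."""
    return torch.stack(losses_per_domain).max()
```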
Generalizing to Unseen Domains with Wasserstein Distributional Robustness under Limited Source Knowledge
Domain generalization aims to learn a universal model that performs well on unseen target domains by incorporating knowledge from multiple source domains.
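For reference, the textbook form of a Wasserstein distributionally robust objective is given below; the paper's exact formulation under limited source knowledge may differ. Here \(\widehat{P}\) is the empirical source distribution, \(W\) the Wasserstein distance, and \(\rho\) the radius of the uncertainty set.

```latex
% Generic Wasserstein distributionally robust objective (schematic):
\min_{\theta} \; \sup_{Q \,:\, W(Q, \widehat{P}) \le \rho}
  \; \mathbb{E}_{(x, y) \sim Q}\!\left[ \ell\big(f_\theta(x), y\big) \right]
```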
Transform-Invariant Convolutional Neural Networks for Image Classification and Search
In particular, the conventional objective (cost) function used when training a VAE both quantifies the agreement between the input and output data records and ensures that the latent-space representation of the input follows a distribution with an appropriate mean and standard deviation.
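The excerpt paraphrases the standard VAE objective; for reference, the evidence lower bound (ELBO) it describes, with a standard-normal prior, is:

```latex
% Standard VAE training objective (maximized): a reconstruction term plus
% a KL regularizer keeping q(z|x) close to the prior N(0, I).
\mathcal{L}(\theta, \phi; x) =
  \mathbb{E}_{q_\phi(z \mid x)}\!\left[ \log p_\theta(x \mid z) \right]
  - D_{\mathrm{KL}}\!\big( q_\phi(z \mid x) \,\|\, \mathcal{N}(0, I) \big)
```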
Improving the Sample-Complexity of Deep Classification Networks with Invariant Integration
We demonstrate the improved sample complexity on the Rotated-MNIST, SVHN and CIFAR-10 datasets, where rotation-invariant-integration-based Wide-ResNet architectures using monomials and weighted sums outperform the respective baselines in the limited-sample regime.
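Invariant integration with monomials amounts to group-averaging a monomial of the features over transformed copies of the input. The sketch below follows that general recipe in PyTorch and is not the paper's Wide-ResNet layer; shapes and names are assumptions.

```python
import torch

def invariant_integration_monomial(feats_per_rotation, exponents):
    """Group-average of a feature monomial over rotated copies (schematic).

    feats_per_rotation: (|G|, N, D) features from |G| rotated versions
    of each input.  exponents: (D,) small non-negative integer powers
    b_i, as in the usual monomial formulation.
    Computes (1/|G|) * sum_g prod_i f_i(g.x)^b_i, which is invariant
    to the rotations in G by construction.
    """
    monomial = (feats_per_rotation ** exponents).prod(dim=-1)  # (|G|, N)
    return monomial.mean(dim=0)                                # (N,)

# Usage: 4 rotations, batch of 8, 16-dim features, degree-2 monomial.
feats = torch.rand(4, 8, 16)
powers = torch.zeros(16); powers[0] = 1; powers[3] = 1
inv = invariant_integration_monomial(feats, powers)
```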
Learning Augmentation Distributions using Transformed Risk Minimization
We propose a new Transformed Risk Minimization (TRM) framework as an extension of classical risk minimization.
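Schematically, the transformed risk replaces the loss on a sample with its expectation under a learned distribution over transformations; the paper's precise definition may differ. Here \(p\) is the augmentation distribution being learned and \(g \cdot x\) the transformed input.

```latex
% Transformed risk: expected loss after applying a random transformation
% g drawn from a learned augmentation distribution p (schematic form):
R_{\mathrm{TRM}}(\theta, p) =
  \mathbb{E}_{(x, y) \sim \mathcal{D}} \;
  \mathbb{E}_{g \sim p}\!\left[ \ell\big(f_\theta(g \cdot x), y\big) \right]
```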
Learning Rotation Invariant Features for Cryogenic Electron Microscopy Image Reconstruction
A fundamental step in recovering the 3D single-particle structure is to align its 2D projections; thus, constructing a canonical representation with a fixed rotation angle is required.
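A classical (pre-deep-learning) way to build such a canonical representation is moment-based alignment: estimate the orientation of the intensity mass from second-order central moments and rotate it out. The sketch below uses NumPy and SciPy; real cryo-EM pipelines use far more sophisticated alignment, and the rotation sign convention depends on the image coordinate system.

```python
import numpy as np
from scipy import ndimage

def canonicalize_rotation(img):
    """Rotate a 2D image so its principal axis lies at a fixed angle."""
    ys, xs = np.mgrid[:img.shape[0], :img.shape[1]]
    m = img.sum()
    cy, cx = (ys * img).sum() / m, (xs * img).sum() / m   # centroid
    mu20 = ((xs - cx) ** 2 * img).sum()
    mu02 = ((ys - cy) ** 2 * img).sum()
    mu11 = ((xs - cx) * (ys - cy) * img).sum()
    theta = 0.5 * np.arctan2(2 * mu11, mu20 - mu02)       # principal axis
    return ndimage.rotate(img, np.degrees(theta), reshape=False)
```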
Invariant Integration in Deep Convolutional Feature Space
In this contribution, we show how to incorporate prior knowledge into a deep neural network architecture in a principled manner.
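The underlying principle of invariant integration is standard: averaging any function over a compact group with respect to its Haar measure produces a group-invariant feature.

```latex
% Invariant integration: averaging f over a compact group G (Haar
% measure dg) yields a feature invariant to every h in G:
A[f](x) = \int_{G} f(g \cdot x) \, dg
  \quad\Longrightarrow\quad
A[f](h \cdot x) = A[f](x) \;\; \forall h \in G
```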