Domain Generalization

276 papers with code • 16 benchmarks • 19 datasets

The idea of Domain Generalization is to learn from one or multiple training domains and extract a domain-agnostic model that can be applied to an unseen domain.

Source: Diagram Image Retrieval using Sketch-Based Deep Learning and Transfer Learning

Most implemented papers

mixup: Beyond Empirical Risk Minimization

facebookresearch/mixup-cifar10 ICLR 2018

We also find that mixup reduces the memorization of corrupt labels, increases the robustness to adversarial examples, and stabilizes the training of generative adversarial networks.
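
The core of mixup is a convex combination of pairs of training examples and their labels. Below is a minimal sketch of that idea (the helper name and training-step comments are illustrative, not the facebookresearch/mixup-cifar10 API):

```python
# Minimal mixup sketch: interpolate a batch with a shuffled copy of itself
# and weight the two label losses by the same mixing coefficient.
import numpy as np
import torch

def mixup_batch(x, y, alpha=1.0):
    """x: (batch, ...) inputs; y: integer labels; alpha: Beta-distribution parameter."""
    lam = np.random.beta(alpha, alpha) if alpha > 0 else 1.0
    index = torch.randperm(x.size(0), device=x.device)
    mixed_x = lam * x + (1 - lam) * x[index]
    return mixed_x, y, y[index], lam

# Typical training step (criterion = torch.nn.CrossEntropyLoss()):
#   mixed_x, y_a, y_b, lam = mixup_batch(x, y)
#   outputs = model(mixed_x)
#   loss = lam * criterion(outputs, y_a) + (1 - lam) * criterion(outputs, y_b)
```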

Domain-Adversarial Training of Neural Networks

PaddlePaddle/PaddleSpeech 28 May 2015

Our approach is directly inspired by the theory on domain adaptation suggesting that, for effective domain transfer to be achieved, predictions must be made based on features that cannot discriminate between the training (source) and test (target) domains.
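
In practice this is usually realized with a gradient reversal layer between the shared feature extractor and a domain classifier. A minimal sketch of that mechanism is below (module and variable names are assumptions, not the referenced repository's code):

```python
# Gradient reversal: the domain head learns to separate source from target,
# while the reversed gradient pushes the feature extractor toward domain-invariant features.
import torch
from torch.autograd import Function

class GradReverse(Function):
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Flip the sign of the gradient flowing back into the feature extractor.
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# features = feature_extractor(x)                            # shared encoder
# class_logits = label_classifier(features)                  # trained on source labels
# domain_logits = domain_classifier(grad_reverse(features))  # source-vs-target head
```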

CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features

clovaai/CutMix-PyTorch ICCV 2019

Regional dropout strategies have been proposed to enhance the performance of convolutional neural network classifiers.
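
CutMix replaces a random rectangular region of each image with a patch from another image in the batch and mixes the labels in proportion to the pasted area. A hedged sketch of that operation (hypothetical helper, not the clovaai reference implementation):

```python
# Minimal CutMix sketch: paste a random box from a shuffled copy of the batch
# and weight the two labels by the relative areas.
import numpy as np
import torch

def cutmix_batch(x, y, alpha=1.0):
    """x: (batch, channels, height, width); y: integer labels."""
    x = x.clone()
    lam = np.random.beta(alpha, alpha)
    index = torch.randperm(x.size(0), device=x.device)
    _, _, h, w = x.shape
    cut_h, cut_w = int(h * np.sqrt(1 - lam)), int(w * np.sqrt(1 - lam))
    cy, cx = np.random.randint(h), np.random.randint(w)
    y1, y2 = np.clip(cy - cut_h // 2, 0, h), np.clip(cy + cut_h // 2, 0, h)
    x1, x2 = np.clip(cx - cut_w // 2, 0, w), np.clip(cx + cut_w // 2, 0, w)
    x[:, :, y1:y2, x1:x2] = x[index, :, y1:y2, x1:x2]
    # Recompute lambda from the actual box area so the label weights match the pixels.
    lam = 1 - ((y2 - y1) * (x2 - x1) / (h * w))
    return x, y, y[index], lam
```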

Masked Autoencoders Are Scalable Vision Learners

facebookresearch/mae CVPR 2022

Our MAE approach is simple: we mask random patches of the input image and reconstruct the missing pixels.
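
The masking step can be sketched as a per-sample random shuffle of patch tokens, keeping only a small visible subset for the encoder. The snippet below assumes pre-patchified inputs and omits positional embeddings and the decoder (shapes and names are assumptions, not the facebookresearch/mae code):

```python
# Random patch masking in the spirit of MAE: keep a random 25% of patches,
# return a binary mask marking which patches must be reconstructed.
import torch

def random_masking(patches, mask_ratio=0.75):
    """patches: (batch, num_patches, dim) tensor of patch embeddings."""
    b, n, d = patches.shape
    n_keep = int(n * (1 - mask_ratio))
    noise = torch.rand(b, n, device=patches.device)   # one random score per patch
    ids_shuffle = noise.argsort(dim=1)                 # patches with low scores are kept
    ids_keep = ids_shuffle[:, :n_keep]
    visible = torch.gather(patches, 1, ids_keep.unsqueeze(-1).expand(-1, -1, d))
    mask = torch.ones(b, n, device=patches.device)
    mask.scatter_(1, ids_keep, 0.0)                    # 0 = visible, 1 = masked
    return visible, mask
```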

A ConvNet for the 2020s

facebookresearch/ConvNeXt CVPR 2022

The "Roaring 20s" of visual recognition began with the introduction of Vision Transformers (ViTs), which quickly superseded ConvNets as the state-of-the-art image classification model.

Improved Regularization of Convolutional Neural Networks with Cutout

uoguelph-mlrg/Cutout 15 Aug 2017

Convolutional neural networks are capable of learning powerful representational spaces, which are necessary for tackling complex learning tasks.
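
Cutout itself is a very small augmentation: zero out a fixed-size square at a random location of each training image. A minimal sketch (hypothetical transform, not the uoguelph-mlrg code):

```python
# Cutout sketch: mask one random square region of an image tensor with zeros.
import torch

def cutout(img, size=16):
    """img: (channels, height, width) tensor; returns a copy with one square masked."""
    _, h, w = img.shape
    cy, cx = torch.randint(h, (1,)).item(), torch.randint(w, (1,)).item()
    y1, y2 = max(0, cy - size // 2), min(h, cy + size // 2)
    x1, x2 = max(0, cx - size // 2), min(w, cx + size // 2)
    out = img.clone()
    out[:, y1:y2, x1:x2] = 0.0
    return out
```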

AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty

google-research/augmix ICLR 2020

We propose AugMix, a data processing technique that is simple to implement, adds limited computational overhead, and helps models withstand unforeseen corruptions.
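
AugMix composes several short chains of random augmentations, combines the chains with Dirichlet weights, and then mixes the result back into the original image with a Beta-sampled weight. The sketch below is heavily simplified: the augmentation operations are passed in as plain functions, whereas the reference code uses a specific set of PIL-based ops:

```python
# Simplified AugMix sketch over a float image array in [0, 1].
import numpy as np

def augmix(image, ops, width=3, depth=3, alpha=1.0):
    """ops: list of functions mapping an image array to an augmented image array."""
    chain_weights = np.random.dirichlet([alpha] * width)   # weights over augmentation chains
    m = np.random.beta(alpha, alpha)                        # weight of the mixed image
    mix = np.zeros_like(image)
    for w in chain_weights:
        augmented = image.copy()
        for _ in range(np.random.randint(1, depth + 1)):
            op = ops[np.random.randint(len(ops))]           # apply a random op from the pool
            augmented = op(augmented)
        mix += w * augmented
    return (1 - m) * image + m * mix
```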

Invariant Risk Minimization

facebookresearch/InvariantRiskMinimization 5 Jul 2019

We introduce Invariant Risk Minimization (IRM), a learning paradigm to estimate invariant correlations across multiple training distributions.
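
The practical IRMv1 objective adds a penalty per training environment: the gradient of that environment's risk with respect to a fixed scalar multiplier of the logits should vanish. A sketch of that penalty (variable names are assumptions; the paper's reference code uses the same dummy-classifier trick):

```python
# IRMv1 penalty sketch: squared gradient of the per-environment risk
# with respect to a frozen scale of 1.0 applied to the logits.
import torch
import torch.nn.functional as F

def irm_penalty(logits, y):
    scale = torch.ones(1, device=logits.device, requires_grad=True)
    loss = F.cross_entropy(logits * scale, y)
    grad = torch.autograd.grad(loss, scale, create_graph=True)[0]
    return (grad ** 2).sum()

# Total objective over environments (lam is the penalty weight):
#   total = sum(F.cross_entropy(model(x_e), y_e) + lam * irm_penalty(model(x_e), y_e)
#               for x_e, y_e in environments)
```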

Benchmarking Neural Network Robustness to Common Corruptions and Perturbations

hendrycks/robustness ICLR 2019

In addition to the ImageNet-C corruption benchmark, we propose a new dataset called ImageNet-P which enables researchers to benchmark a classifier's robustness to common perturbations.

Deep CORAL: Correlation Alignment for Deep Domain Adaptation

thuml/Transfer-Learning-Library 6 Jul 2016

CORAL is a "frustratingly easy" unsupervised domain adaptation method that aligns the second-order statistics of the source and target distributions with a linear transformation.
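
The deep variant turns this alignment into a differentiable loss: the squared Frobenius distance between the covariance matrices of source and target feature batches. A minimal sketch of that loss, following the paper's formula (not the thuml/Transfer-Learning-Library API):

```python
# CORAL loss sketch: match second-order statistics of source and target features.
import torch

def coral_loss(source, target):
    """source, target: (batch, dim) feature matrices from the shared network."""
    d = source.size(1)

    def covariance(f):
        f = f - f.mean(dim=0, keepdim=True)
        return (f.t() @ f) / (f.size(0) - 1)

    diff = covariance(source) - covariance(target)
    return (diff ** 2).sum() / (4 * d * d)
```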