# Domain Generalization

276 papers with code • 16 benchmarks • 19 datasets

The idea of **Domain Generalization** is to learn, from one or multiple training domains, a domain-agnostic model that can be applied to an unseen domain.

Source: Diagram Image Retrieval using Sketch-Based Deep Learning and Transfer Learning

## Libraries

Use these libraries to find Domain Generalization models and implementations.

## Datasets

## Most implemented papers

### mixup: Beyond Empirical Risk Minimization

We also find that mixup reduces the memorization of corrupt labels, increases the robustness to adversarial examples, and stabilizes the training of generative adversarial networks.
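The mixing itself is a one-liner over pairs of examples and their one-hot labels, with the interpolation weight drawn from a Beta distribution. A minimal numpy sketch (the function name and `alpha=0.2` default are illustrative; the paper recommends small `alpha` values):

```python
import numpy as np

def mixup(x1, y1, x2, y2, alpha=0.2, rng=None):
    """Convexly combine two examples and their one-hot labels (mixup)."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)          # interpolation weight in (0, 1)
    x = lam * x1 + (1 - lam) * x2         # mix inputs
    y = lam * y1 + (1 - lam) * y2         # mix labels the same way
    return x, y, lam
```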

### Domain-Adversarial Training of Neural Networks

Our approach is directly inspired by the theory on domain adaptation suggesting that, for effective domain transfer to be achieved, predictions must be made based on features that cannot discriminate between the training (source) and test (target) domains.
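The core mechanism is a gradient reversal layer between the feature extractor and the domain classifier: identity in the forward pass, negated (and scaled) gradient in the backward pass, so features are trained to confuse the domain classifier. A framework-agnostic sketch of that contract (real implementations hook into an autograd engine; the class and `lam` names here are illustrative):

```python
class GradientReversal:
    """Identity forward; negate and scale the gradient on the way back."""

    def __init__(self, lam=1.0):
        self.lam = lam  # trade-off between label loss and domain confusion

    def forward(self, x):
        return x  # features pass through unchanged

    def backward(self, grad):
        return -self.lam * grad  # reversed domain-classifier gradient
```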

### CutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features

Regional dropout strategies have been proposed to enhance the performance of convolutional neural network classifiers.
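CutMix combines regional dropout with mixup-style label mixing: a rectangular region from one image is pasted into another, and the labels are mixed in proportion to the pasted area. A minimal numpy sketch for single-channel images (function and argument names are illustrative):

```python
import numpy as np

def cutmix(img_a, lab_a, img_b, lab_b, alpha=1.0, rng=None):
    """Paste a random box from img_b into img_a; mix labels by pasted area."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)
    h, w = img_a.shape[:2]
    cut_h, cut_w = int(h * np.sqrt(1 - lam)), int(w * np.sqrt(1 - lam))
    cy, cx = int(rng.integers(h)), int(rng.integers(w))  # box center
    top, bot = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, h)
    left, right = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, w)
    out = img_a.copy()
    out[top:bot, left:right] = img_b[top:bot, left:right]
    # Re-weight labels by the actual (clipped) pasted area.
    lam_adj = 1 - (bot - top) * (right - left) / (h * w)
    return out, lam_adj * lab_a + (1 - lam_adj) * lab_b
```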

### Masked Autoencoders Are Scalable Vision Learners

Our MAE approach is simple: we mask random patches of the input image and reconstruct the missing pixels.
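The masking step (the encoder then sees only the kept patches) can be sketched in a few lines of numpy; the 75% mask ratio follows the paper, while the helper itself is illustrative and assumes a square grayscale image divisible by the patch size:

```python
import numpy as np

def random_mask_patches(img, patch=4, mask_ratio=0.75, rng=None):
    """Split img (H, W) into patches and keep a random subset (MAE-style masking)."""
    rng = rng or np.random.default_rng()
    h, w = img.shape  # assumes h and w are divisible by `patch`
    patches = (img.reshape(h // patch, patch, w // patch, patch)
                  .transpose(0, 2, 1, 3)
                  .reshape(-1, patch * patch))
    n = patches.shape[0]
    keep = int(n * (1 - mask_ratio))
    idx = rng.permutation(n)[:keep]       # random subset of visible patches
    mask = np.ones(n, dtype=bool)         # True = masked (to be reconstructed)
    mask[idx] = False
    return patches[idx], mask
```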

### A ConvNet for the 2020s

The "Roaring 20s" of visual recognition began with the introduction of Vision Transformers (ViTs), which quickly superseded ConvNets as the state-of-the-art image classification model.

### Improved Regularization of Convolutional Neural Networks with Cutout

Convolutional neural networks are capable of learning powerful representational spaces, which are necessary for tackling complex learning tasks.
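Cutout itself is simple: during training, zero out a random square region of the input. A minimal numpy sketch (the default `size` and names are illustrative):

```python
import numpy as np

def cutout(img, size=8, rng=None):
    """Zero a random square region of the image (Cutout regularization)."""
    rng = rng or np.random.default_rng()
    h, w = img.shape[:2]
    cy, cx = int(rng.integers(h)), int(rng.integers(w))  # box center
    out = img.copy()
    # Slice bounds clip automatically at the image border.
    out[max(cy - size // 2, 0):cy + size // 2,
        max(cx - size // 2, 0):cx + size // 2] = 0
    return out
```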

### AugMix: A Simple Data Processing Method to Improve Robustness and Uncertainty

We propose AugMix, a data processing technique that is simple to implement, adds limited computational overhead, and helps models withstand unforeseen corruptions.
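AugMix samples several short augmentation chains, blends their outputs with Dirichlet weights, then interpolates the blend with the original image. A simplified numpy sketch (the full method also trains with a Jensen-Shannon consistency loss, omitted here; `ops` is any list of image-to-image functions):

```python
import numpy as np

def augmix(img, ops, width=3, depth=2, alpha=1.0, rng=None):
    """Blend `width` random augmentation chains, then mix with the original."""
    rng = rng or np.random.default_rng()
    chain_w = rng.dirichlet([alpha] * width)  # convex weights over chains
    m = rng.beta(alpha, alpha)                # original-vs-augmented weight
    mixed = np.zeros_like(img, dtype=float)
    for w in chain_w:
        aug = img.astype(float)
        for _ in range(depth):                # compose `depth` random ops
            op = ops[rng.integers(len(ops))]
            aug = op(aug)
        mixed += w * aug
    return (1 - m) * img + m * mixed
```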

### Invariant Risk Minimization

We introduce Invariant Risk Minimization (IRM), a learning paradigm to estimate invariant correlations across multiple training distributions.
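The practical IRMv1 objective adds, per environment, the squared gradient of that environment's risk with respect to a dummy scalar classifier fixed at 1. For squared loss this gradient has a closed form, which the numpy sketch below uses; the function names are illustrative and `envs` is a list of `(predictions, targets)` pairs, one per training environment:

```python
import numpy as np

def irm_penalty_sq(yhat, y):
    """IRMv1 penalty for squared loss: squared d/dw of mean((w*yhat - y)^2) at w=1."""
    grad = np.mean(2 * (yhat - y) * yhat)
    return grad ** 2

def irm_objective(envs, lam=1.0):
    """Average per-environment risk plus lam times the average invariance penalty."""
    risks = [np.mean((yh - y) ** 2) for yh, y in envs]
    penalties = [irm_penalty_sq(yh, y) for yh, y in envs]
    return np.mean(risks) + lam * np.mean(penalties)
```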

### Benchmarking Neural Network Robustness to Common Corruptions and Perturbations

We propose a new dataset called ImageNet-P which enables researchers to benchmark a classifier's robustness to common perturbations.

### Deep CORAL: Correlation Alignment for Deep Domain Adaptation

CORAL is a "frustratingly easy" unsupervised domain adaptation method that aligns the second-order statistics of the source and target distributions with a linear transformation.
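Given source and target feature matrices of shape `(n, d)`, the CORAL loss is the squared Frobenius distance between their covariance matrices, scaled by `1/(4d^2)`. A numpy sketch (the function name is illustrative):

```python
import numpy as np

def coral_loss(source, target):
    """CORAL loss: squared Frobenius distance between feature covariances."""
    d = source.shape[1]
    cs = np.cov(source, rowvar=False)  # source covariance, shape (d, d)
    ct = np.cov(target, rowvar=False)  # target covariance, shape (d, d)
    return np.sum((cs - ct) ** 2) / (4 * d * d)
```

In Deep CORAL this quantity is computed on intermediate network activations and added to the classification loss, so the learned features align across domains.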