73 papers with code • 0 benchmarks • 0 datasets
These leaderboards are used to track progress in Out-of-Distribution Generalization.

Libraries: Use these libraries to find Out-of-Distribution Generalization models and implementations.
We present the Open Graph Benchmark (OGB), a diverse set of challenging and realistic benchmark datasets to facilitate scalable, robust, and reproducible graph machine learning (ML) research.
Experiments across four datasets show that these model-dependent measures reveal three distinct regions in the data map, each with pronounced characteristics.
Distributional shift is one of the major obstacles when transferring machine learning prediction systems from the lab to the real world.
We present a new supervised image classification method applicable to a broad class of image deformation models.
Progress in the field of machine learning has been fueled by the introduction of benchmark datasets pushing the limits of existing algorithms.
In this paper, we present MUTANT, a training paradigm that exposes the model to perceptually similar, yet semantically distinct mutations of the input, to improve OOD generalization on benchmarks such as the VQA-CP challenge.
Graph Convolution for Semi-Supervised Classification: Improved Linear Separability and Out-of-Distribution Generalization
Recently there has been increased interest in semi-supervised classification in the presence of graphical information.
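The graph convolution referred to in the title above is commonly instantiated as the symmetrically normalized propagation rule H' = D^{-1/2}(A + I)D^{-1/2} H W. A minimal illustrative sketch, not taken from the paper itself (all names `A`, `H`, `W`, `gcn_layer` are hypothetical):

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution step with self-loops and symmetric
    normalization: H' = ReLU(D^{-1/2} (A + I) D^{-1/2} H W)."""
    A_hat = A + np.eye(A.shape[0])           # add self-loops
    deg = A_hat.sum(axis=1)                  # node degrees of A + I
    D_inv_sqrt = np.diag(deg ** -0.5)        # D^{-1/2}
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt # normalized adjacency
    return np.maximum(A_norm @ H @ W, 0.0)   # ReLU activation

# Tiny 3-node path graph: 0 - 1 - 2
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
H = np.eye(3)        # one-hot input features
W = np.ones((3, 2))  # toy weight matrix
out = gcn_layer(A, H, W)
print(out.shape)  # (3, 2): one 2-dimensional embedding per node
```

Each node's output mixes its own features with its neighbors', which is what ties the semi-supervised classifier to the graph structure.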
We then prove that this transferability measure can be estimated given enough samples, and derive a new upper bound on the target error based on it.