Generalization Bounds

118 papers with code • 0 benchmarks • 0 datasets


Most implemented papers

Bridging Theory and Algorithm for Domain Adaptation

thuml/MDD 11 Apr 2019

We introduce Margin Disparity Discrepancy, a novel measurement with rigorous generalization bounds, tailored to the distribution comparison with the asymmetric margin loss, and to the minimax optimization for easier training.
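A minimal sketch of the disparity-discrepancy idea, using the simple 0-1 disagreement version for illustration (the paper's actual measurement uses an asymmetric margin loss and is trained via minimax optimization; all names here are hypothetical):

```python
import numpy as np

def disparity(f_preds, f_prime_preds):
    # Fraction of points where an auxiliary classifier f' disagrees with
    # the main classifier f (0-1 loss version of the disparity).
    return np.mean(f_preds != f_prime_preds)

def disparity_discrepancy(f_src, fp_src, f_tgt, fp_tgt):
    # Disparity discrepancy: how much more f' can disagree with f on the
    # target domain than on the source domain. In MDD this gap, measured
    # with a margin loss, enters the generalization bound and is minimized
    # adversarially over f'.
    return disparity(f_tgt, fp_tgt) - disparity(f_src, fp_src)
```

A small discrepancy indicates the two domains are hard to tell apart from the classifier's perspective, which is what the bound rewards.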

SWAD: Domain Generalization by Seeking Flat Minima

khanrc/swad NeurIPS 2021

Domain generalization (DG) methods aim to achieve generalizability to an unseen target domain by using only training data from the source domains.
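The core mechanism can be sketched as dense weight averaging: SWAD averages model parameters over many iterates inside a low-validation-loss interval to land in a flat minimum. A minimal sketch with hypothetical names, treating each checkpoint as a dict of parameter arrays:

```python
import numpy as np

def average_checkpoints(checkpoints):
    # Average parameter dicts (name -> array) elementwise across
    # checkpoints, as in stochastic weight averaging. SWAD applies this
    # densely over iterations selected by a validation-loss criterion.
    avg = {}
    for name in checkpoints[0]:
        avg[name] = np.mean([ckpt[name] for ckpt in checkpoints], axis=0)
    return avg
```

The averaged weights sit in a flatter region of the loss surface than any single iterate, which is the property SWAD links to better out-of-domain generalization.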

Estimating individual treatment effect: generalization bounds and algorithms

clinicalml/cfrnet ICML 2017

We give a novel, simple and intuitive generalization-error bound showing that the expected ITE estimation error of a representation is bounded by a sum of the standard generalization-error of that representation and the distance between the treated and control distributions induced by the representation.
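The bound suggests a training objective of the same shape: factual risk plus a distance between the treated and control representation distributions. A minimal sketch (hypothetical names; a linear-kernel MMD stands in for the IPM term, whereas the paper also considers Wasserstein distances):

```python
import numpy as np

def linear_mmd(phi_treated, phi_control):
    # Squared distance between the group means of the learned
    # representations -- the simplest IPM-style discrepancy.
    return np.sum((phi_treated.mean(axis=0) - phi_control.mean(axis=0)) ** 2)

def cfr_objective(factual_losses, phi_treated, phi_control, alpha=1.0):
    # Mirrors the bound: standard factual generalization error plus
    # alpha times the treated/control representation distance.
    return np.mean(factual_losses) + alpha * linear_mmd(phi_treated, phi_control)
```

When the two representation distributions coincide, the penalty vanishes and the objective reduces to the ordinary factual risk.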

Optimal Auctions through Deep Learning: Advances in Differentiable Economics

saisrivatsan/deep-opt-auctions 12 Jun 2017

Designing an incentive compatible auction that maximizes expected revenue is an intricate task.

A Surprising Linear Relationship Predicts Test Performance in Deep Networks

brando90/Generalization-Puzzles-in-Deep-Networks 25 Jul 2018

Given two networks with the same training loss on a dataset, when would they have drastically different test losses and errors?

Deep Learning and the Information Bottleneck Principle

ChenLiu-1996/DiffusionSpectralEntropy 9 Mar 2015

Deep Neural Networks (DNNs) are analyzed via the theoretical framework of the information bottleneck (IB) principle.

Computing Nonvacuous Generalization Bounds for Deep (Stochastic) Neural Networks with Many More Parameters than Training Data

gkdziugaite/pacbayes-opt 31 Mar 2017

One of the defining properties of deep learning is that models are chosen to have many more parameters than available training data.
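The paper's bounds come from the PAC-Bayes-kl inequality, which must be inverted numerically. A minimal sketch of that inversion (hypothetical function names; inputs are the empirical risk, the KL divergence between posterior and prior, the sample size, and the confidence parameter):

```python
import math

def kl_bernoulli(q, p):
    # KL divergence between Bernoulli(q) and Bernoulli(p), clamped for safety.
    eps = 1e-12
    q = min(max(q, eps), 1 - eps)
    p = min(max(p, eps), 1 - eps)
    return q * math.log(q / p) + (1 - q) * math.log((1 - q) / (1 - p))

def pac_bayes_kl_bound(emp_risk, kl_div, n, delta):
    # Invert the PAC-Bayes-kl inequality by binary search: return the
    # largest p with kl(emp_risk || p) <= (KL + log(2*sqrt(n)/delta)) / n.
    rhs = (kl_div + math.log(2 * math.sqrt(n) / delta)) / n
    lo, hi = emp_risk, 1.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if kl_bernoulli(emp_risk, mid) <= rhs:
            lo = mid
        else:
            hi = mid
    return lo
```

The bound is nonvacuous whenever the returned error probability is below 1, which the paper achieves on MNIST-scale networks despite heavy overparameterization.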

Deep multi-Wasserstein unsupervised domain adaptation

CtrlZ1/Domain-Adaptation-Algorithms Pattern Recognition Letters 2019

In unsupervised domain adaptation (DA), one aims to learn, from labeled source data and fully unlabeled target examples, a model with a low error on the target domain.

Learning Robust State Abstractions for Hidden-Parameter Block MDPs

facebookresearch/mtrl ICLR 2021

Further, we provide transfer and generalization bounds based on task and state similarity, along with sample complexity bounds that depend on the aggregate number of samples across tasks, rather than the number of tasks, a significant improvement over prior work that uses the same environment assumptions.