Generalization Bounds
142 papers with code • 0 benchmarks • 0 datasets
Most implemented papers
Bridging Theory and Algorithm for Domain Adaptation
We introduce Margin Disparity Discrepancy, a novel measurement with rigorous generalization bounds, tailored to distribution comparison with an asymmetric margin loss and to minimax optimization for easier training.
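As a rough illustration of how such a discrepancy term is typically computed in practice, the sketch below estimates a margin-disparity-style quantity from the logits of a main classifier and an auxiliary classifier. This is a hedged sketch only: the gradient-reversal layer and the full minimax training loop of the method are omitted, and all names and defaults are illustrative rather than the paper's reference implementation.

```python
# Hedged sketch: a margin-disparity-style discrepancy term, assuming logits
# from a main classifier head f and an auxiliary (adversarial) head f' are
# already available. Training details (gradient reversal, optimization) omitted.
import torch
import torch.nn.functional as F

def mdd_term(src_logits, src_adv_logits, tgt_logits, tgt_adv_logits, margin=4.0):
    # Pseudo-labels: the classes predicted by the main head on each domain.
    src_pred = src_logits.argmax(dim=1)
    tgt_pred = tgt_logits.argmax(dim=1)

    # Source term: the auxiliary head should agree with the main head
    # (standard cross-entropy against the pseudo-labels).
    loss_src = F.cross_entropy(src_adv_logits, src_pred)

    # Target term: asymmetric (modified) loss, -log(1 - p_f'(pseudo-label)).
    tgt_prob = F.softmax(tgt_adv_logits, dim=1)
    log_one_minus = torch.log(torch.clamp(1.0 - tgt_prob, min=1e-6))
    loss_tgt = F.nll_loss(log_one_minus, tgt_pred)

    # The margin factor weights the source term; in the full algorithm the
    # auxiliary head maximizes this quantity while the feature extractor
    # minimizes it (typically via a gradient-reversal layer).
    return margin * loss_src + loss_tgt
```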
Estimating individual treatment effect: generalization bounds and algorithms
We give a novel, simple, and intuitive generalization-error bound showing that the expected ITE estimation error of a representation is bounded by the sum of the standard generalization error of that representation and the distance between the treated and control distributions induced by the representation.
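Schematically, and with illustrative notation rather than the paper's exact constants, bounds of this type are roughly of the form

$$\epsilon_{\mathrm{ITE}}(h,\Phi) \;\lesssim\; 2\Big(\epsilon_F^{t=1}(h,\Phi) + \epsilon_F^{t=0}(h,\Phi)\Big) \;+\; B_\Phi \,\mathrm{IPM}_G\!\big(p_\Phi^{t=1},\, p_\Phi^{t=0}\big),$$

where $\epsilon_F^{t}$ are the standard (factual) generalization errors of hypothesis $h$ on the treated and control groups, $\mathrm{IPM}_G$ is an integral probability metric (e.g. Wasserstein or MMD) between the treated and control distributions in the representation space induced by $\Phi$, $B_\Phi$ is a constant depending on the representation, and additive terms depending only on the outcome variance are suppressed.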
SWAD: Domain Generalization by Seeking Flat Minima
Domain generalization (DG) methods aim to achieve generalizability to an unseen target domain by using only training data from the source domains.
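A minimal sketch of the underlying flat-minima mechanism, dense stochastic weight averaging, is given below. The averaging window is fixed to iteration bounds here for simplicity (the method selects it from validation loss), and the model, data loader, and hyperparameter names are assumptions for illustration, not the paper's implementation.

```python
# Hedged sketch: dense stochastic weight averaging over an iteration window.
# Assumes an existing PyTorch model, data loader, loss function, and optimizer.
import copy
import torch

def train_with_dense_averaging(model, loader, loss_fn, optimizer,
                               epochs=10, avg_start=1000, avg_end=5000):
    avg_model, n_averaged, step = None, 0, 0
    for _ in range(epochs):
        for x, y in loader:
            loss = loss_fn(model(x), y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

            # Densely average weights at *every* step inside the window,
            # rather than once per epoch as in vanilla SWA.
            if avg_start <= step <= avg_end:
                if avg_model is None:
                    avg_model = copy.deepcopy(model)
                    n_averaged = 1
                else:
                    n_averaged += 1
                    for p_avg, p in zip(avg_model.parameters(), model.parameters()):
                        p_avg.data += (p.data - p_avg.data) / n_averaged
            step += 1
    # Note: BatchNorm statistics of the averaged model would typically need
    # to be recomputed on training data before evaluation.
    return avg_model if avg_model is not None else model
```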
Computing Nonvacuous Generalization Bounds for Deep (Stochastic) Neural Networks with Many More Parameters than Training Data
One of the defining properties of deep learning is that models are chosen to have many more parameters than available training data.
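The bounds optimized in this line of work are PAC-Bayesian. One standard form, stated here schematically rather than as the paper's exact statement, is: with probability at least $1-\delta$ over an i.i.d. sample of size $m$, simultaneously for every posterior $Q$ over network weights,

$$\mathrm{kl}\big(\hat{e}_S(Q) \,\big\|\, e_D(Q)\big) \;\le\; \frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{2\sqrt{m}}{\delta}}{m},$$

where $\hat{e}_S(Q)$ and $e_D(Q)$ are the empirical and expected error of the stochastic classifier drawn from $Q$, $P$ is a prior fixed before seeing the data, and $\mathrm{kl}$ denotes the KL divergence between Bernoulli distributions. Obtaining a nonvacuous bound for an overparameterized network then amounts to finding a posterior whose KL to the prior is small relative to $m$ while its empirical error stays low.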
Optimal Auctions through Deep Learning: Advances in Differentiable Economics
Designing an incentive compatible auction that maximizes expected revenue is an intricate task.
A Surprising Linear Relationship Predicts Test Performance in Deep Networks
Given two networks with the same training loss on a dataset, when would they have drastically different test losses and errors?
PACOH: Bayes-Optimal Meta-Learning with PAC-Guarantees
Meta-learning can successfully acquire useful inductive biases from data.
Robust Fine-Tuning of Deep Neural Networks with Hessian-based Generalization Guarantees
We study the generalization properties of fine-tuning to understand the problem of overfitting, which has often been observed (e.g., when the target dataset is small or when the training labels are noisy).
Deep Learning and the Information Bottleneck Principle
Deep Neural Networks (DNNs) are analyzed via the theoretical framework of the information bottleneck (IB) principle.
Deep multi-Wasserstein unsupervised domain adaptation
In unsupervised domain adaptation (DA), one aims to learn, from labeled source data and fully unlabeled target examples, a model with low error on the target domain.
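As a hedged illustration of the Wasserstein-alignment ingredient (not this paper's specific multi-Wasserstein objective), the sketch below computes an optimal-transport cost between source and target feature batches using the POT library; the function and variable names are illustrative.

```python
# Hedged sketch: exact Wasserstein (earth-mover) cost between two feature
# batches, a common alignment term in Wasserstein-based domain adaptation.
import numpy as np
import ot  # Python Optimal Transport (POT)

def wasserstein_alignment_cost(src_feats, tgt_feats):
    # Uniform weights over the two mini-batches.
    a = np.full(len(src_feats), 1.0 / len(src_feats))
    b = np.full(len(tgt_feats), 1.0 / len(tgt_feats))
    # Pairwise squared-Euclidean ground cost between features.
    M = ot.dist(src_feats, tgt_feats, metric="sqeuclidean")
    # ot.emd2 returns the optimal transport cost under ground cost M.
    return ot.emd2(a, b, M)
```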