Search Results for author: Jeffrey Negrea

Found 8 papers, 3 papers with code

Generalization via Derandomization

no code implementations ICML 2020 Jeffrey Negrea, Daniel Roy, Gintare Karolina Dziugaite

At the same time, we bound the risk of $\hat h$ in terms of a surrogate that is constructed by conditioning and shown to belong to a nonrandom class with uniformly small generalization error.

Concept Algebra for (Score-Based) Text-Controlled Generative Models

1 code implementation NeurIPS 2023 ZiHao Wang, Lin Gui, Jeffrey Negrea, Victor Veitch

This suggests these models have internal representations that encode concepts in a 'disentangled' manner.

Tuning Stochastic Gradient Algorithms for Statistical Inference via Large-Sample Asymptotics

no code implementations 25 Jul 2022 Jeffrey Negrea, Jun Yang, Haoyue Feng, Daniel M. Roy, Jonathan H. Huggins

The tuning of stochastic gradient algorithms (SGAs) for optimization and sampling is often based on heuristics and trial-and-error rather than generalizable theory.

Minimax Optimal Quantile and Semi-Adversarial Regret via Root-Logarithmic Regularizers

1 code implementation NeurIPS 2021 Jeffrey Negrea, Blair Bilodeau, Nicolò Campolongo, Francesco Orabona, Daniel M. Roy

Quantile (and, more generally, KL) regret bounds, such as those achieved by NormalHedge (Chaudhuri, Freund, and Hsu 2009) and its variants, relax the goal of competing against the best individual expert to only competing against a majority of experts on adversarial data.
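For intuition only, here is a minimal sketch of the quantile-regret benchmark described above. It is not code from the paper and does not use its root-logarithmic regularizers: it runs plain Hedge (exponential weights) on synthetic losses and compares regret against the single best expert with regret against the eps-quantile of experts. All names and parameter values (n_experts, n_rounds, eps, eta) are illustrative assumptions.

```python
# Hypothetical illustration of quantile regret vs. best-expert regret.
# Uses vanilla Hedge (exponential weights), not the paper's algorithm.
import numpy as np

rng = np.random.default_rng(0)
n_experts, n_rounds, eps = 50, 1000, 0.1
losses = rng.uniform(0.0, 1.0, size=(n_rounds, n_experts))  # synthetic loss matrix

eta = np.sqrt(2 * np.log(n_experts) / n_rounds)  # standard Hedge learning rate
log_w = np.zeros(n_experts)
learner_loss = 0.0
for l_t in losses:
    p = np.exp(log_w - log_w.max())
    p /= p.sum()
    learner_loss += p @ l_t   # learner plays the weighted mixture of experts
    log_w -= eta * l_t        # exponential-weights update

cum = losses.sum(axis=0)                                  # per-expert cumulative loss
best_expert_regret = learner_loss - cum.min()             # classical benchmark
quantile_regret = learner_loss - np.quantile(cum, eps)    # compete with eps-quantile expert
print(best_expert_regret, quantile_regret)
```

With uniform random losses the two benchmarks are close; the distinction matters when a constant fraction of experts performs well, which is the regime quantile regret bounds exploit.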

Relaxing the I.I.D. Assumption: Adaptively Minimax Optimal Regret via Root-Entropic Regularization

no code implementations 13 Jul 2020 Blair Bilodeau, Jeffrey Negrea, Daniel M. Roy

This framework recovers the classical i.i.d. setting, when the unknown constraint set is restricted to be a singleton, and the unconstrained adversarial setting, when the constraint set is the set of all distributions.

Sharpened Generalization Bounds based on Conditional Mutual Information and an Application to Noisy, Iterative Algorithms

no code implementations NeurIPS 2020 Mahdi Haghifam, Jeffrey Negrea, Ashish Khisti, Daniel M. Roy, Gintare Karolina Dziugaite

Finally, we apply these bounds to the study of the Langevin dynamics algorithm, showing that conditioning on the supersample allows us to exploit information in the optimization trajectory to obtain tighter bounds based on hypothesis tests.

Generalization Bounds

In Defense of Uniform Convergence: Generalization via derandomization with an application to interpolating predictors

no code implementations 9 Dec 2019 Jeffrey Negrea, Gintare Karolina Dziugaite, Daniel M. Roy

At the same time, we bound the risk of $\hat h$ in terms of surrogates constructed by conditioning and denoising, respectively, and shown to belong to nonrandom classes with uniformly small generalization error.

Denoising

Information-Theoretic Generalization Bounds for SGLD via Data-Dependent Estimates

1 code implementation NeurIPS 2019 Jeffrey Negrea, Mahdi Haghifam, Gintare Karolina Dziugaite, Ashish Khisti, Daniel M. Roy

In this work, we improve upon the stepwise analysis of noisy iterative learning algorithms initiated by Pensia, Jog, and Loh (2018) and recently extended by Bu, Zou, and Veeravalli (2019).

Generalization Bounds
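As background for the SGLD entries above, here is a minimal sketch of the stochastic gradient Langevin dynamics update that the information-theoretic bounds are applied to. It is an illustrative assumption, not code from the papers; the function names and hyperparameters (grad_loss, step_size, inverse_temp, batch_size) are placeholders.

```python
# Hypothetical SGLD sketch: minibatch gradient step plus Gaussian noise.
# Not taken from the paper; names and defaults are placeholders.
import numpy as np

def sgld(grad_loss, theta0, data, n_steps=100, step_size=1e-2,
         inverse_temp=1.0, batch_size=32, seed=0):
    """Run stochastic gradient Langevin dynamics and return the trajectory."""
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, dtype=float)
    trajectory = [theta.copy()]
    for _ in range(n_steps):
        # Sample a minibatch and take a noisy gradient step.
        batch = data[rng.choice(len(data), size=batch_size, replace=False)]
        noise = rng.normal(size=theta.shape)
        theta = (theta
                 - step_size * grad_loss(theta, batch)
                 + np.sqrt(2 * step_size / inverse_temp) * noise)
        trajectory.append(theta.copy())
    return np.array(trajectory)
```

The data-dependent bounds in the papers above are stated in terms of quantities computed along such a trajectory (e.g., minibatch gradient increments), rather than in terms of worst-case constants.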
