no code implementations • ICML 2020 • Jeffrey Negrea, Daniel Roy, Gintare Karolina Dziugaite
At the same time, we bound the risk of $\hat h$ in terms of a surrogate that is constructed by conditioning and shown to belong to a nonrandom class with uniformly small generalization error.
1 code implementation • NeurIPS 2023 • ZiHao Wang, Lin Gui, Jeffrey Negrea, Victor Veitch
This suggests these models have internal representations that encode concepts in a "disentangled" manner.
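Here "disentangled" loosely means that distinct concepts occupy (approximately) separate linear subspaces of the representation, so one concept can be swapped without disturbing the others. A minimal numpy sketch of that geometric picture, not the paper's construction; the style subspace and both representations below are random stand-ins:

```python
import numpy as np

def project(z, basis):
    """Orthogonal projection of z onto the column span of basis."""
    coeffs, *_ = np.linalg.lstsq(basis, z, rcond=None)
    return basis @ coeffs

def swap_concept(z, basis, z_target):
    """Replace the component of z in one concept's subspace with the
    corresponding component of a target representation, leaving the
    rest of z untouched -- only sensible if concepts really occupy
    (approximately) separate subspaces, i.e. are disentangled."""
    return z - project(z, basis) + project(z_target, basis)

rng = np.random.default_rng(0)
d = 16
style_basis = rng.normal(size=(d, 2))  # hypothetical 2-dim "style" subspace
z = rng.normal(size=d)                 # representation of the original input
z_target = rng.normal(size=d)          # representation carrying the desired style
z_edited = swap_concept(z, style_basis, z_target)
```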
no code implementations • 25 Jul 2022 • Jeffrey Negrea, Jun Yang, Haoyue Feng, Daniel M. Roy, Jonathan H. Huggins
The tuning of stochastic gradient algorithms (SGAs) for optimization and sampling is often based on heuristics and trial-and-error rather than generalizable theory.
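To make "stochastic gradient algorithms for optimization and sampling" concrete, a sketch of the shared update family, assuming a placeholder minibatch-gradient callable `stoch_grad`; the step size (and the batch size hidden inside `stoch_grad`) are the knobs usually tuned by trial-and-error, and a positive temperature adds the Langevin noise that turns the optimizer into an approximate sampler:

```python
import numpy as np

def sga_step(theta, stoch_grad, lr, temperature=0.0, rng=None):
    """One step of a generic stochastic gradient algorithm (SGA).
    With temperature = 0 this is plain SGD (optimization); with
    temperature > 0 it injects Gaussian noise at the standard
    Langevin scale, so the same iteration acts as an approximate
    sampler. stoch_grad is a placeholder minibatch gradient."""
    step = -lr * stoch_grad(theta)
    if temperature > 0.0:
        rng = rng if rng is not None else np.random.default_rng()
        step = step + np.sqrt(2.0 * lr * temperature) * rng.normal(size=theta.shape)
    return theta + step
```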
1 code implementation • NeurIPS 2021 • Jeffrey Negrea, Blair Bilodeau, Nicolò Campolongo, Francesco Orabona, Daniel M. Roy
Quantile (and, more generally, KL) regret bounds, such as those achieved by NormalHedge (Chaudhuri, Freund, and Hsu 2009) and its variants, relax the goal of competing against the best individual expert to only competing against a majority of experts on adversarial data.
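For context, a sketch of the classical exponential-weights (Hedge) baseline whose single-best-expert target these quantile bounds relax; note that it requires a learning rate eta, whereas NormalHedge-style methods are parameter-free:

```python
import numpy as np

def hedge(losses, eta):
    """Exponentially weighted forecaster over N experts: the
    classical baseline whose guarantee is regret against the single
    best expert. Quantile-regret methods such as NormalHedge relax
    this target to the top-epsilon fraction of experts.
    losses: (T, N) array of per-round expert losses in [0, 1]."""
    T, N = losses.shape
    cum = np.zeros(N)                 # cumulative loss of each expert
    forecaster_loss = 0.0
    for t in range(T):
        w = np.exp(-eta * (cum - cum.min()))  # shift for numerical stability
        p = w / w.sum()               # distribution over experts
        forecaster_loss += p @ losses[t]
        cum += losses[t]
    return forecaster_loss, cum
```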
no code implementations • 13 Jul 2020 • Blair Bilodeau, Jeffrey Negrea, Daniel M. Roy
This semi-adversarial setting interpolates between the classical i.i.d. setting, when the unknown constraint set is restricted to be a singleton, and the unconstrained adversarial setting, when the constraint set is the set of all distributions.
no code implementations • NeurIPS 2020 • Mahdi Haghifam, Jeffrey Negrea, Ashish Khisti, Daniel M. Roy, Gintare Karolina Dziugaite
Finally, we apply these bounds to the study of the Langevin dynamics algorithm, showing that conditioning on the supersample allows us to exploit information in the optimization trajectory to obtain tighter bounds based on hypothesis tests.
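A schematic of the supersample construction behind conditional-mutual-information bounds; `train` and `loss` are hypothetical placeholder callables, and the function computes the empirical held-out/held-in gap that such bounds control, not the bound itself:

```python
import numpy as np

def supersample_gap(train, loss, pairs, rng):
    """Sketch of the conditioning device: fix a supersample of n
    pairs of examples, train on one randomly selected point from
    each pair, and measure the loss gap between the held-out and
    held-in point of every pair. CMI-style bounds control this gap
    through how much the learned weights reveal about the random
    selection bits. pairs has shape (n, 2, ...)."""
    n = pairs.shape[0]
    u = rng.integers(0, 2, size=n)           # selection bits
    held_in = pairs[np.arange(n), u]
    held_out = pairs[np.arange(n), 1 - u]
    w = train(held_in)
    return np.mean(loss(w, held_out)) - np.mean(loss(w, held_in))
```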
no code implementations • 9 Dec 2019 • Jeffrey Negrea, Gintare Karolina Dziugaite, Daniel M. Roy
At the same time, we bound the risk of $\hat h$ in terms of surrogates constructed by conditioning and denoising, respectively, and shown to belong to nonrandom classes with uniformly small generalization error.
1 code implementation • NeurIPS 2019 • Jeffrey Negrea, Mahdi Haghifam, Gintare Karolina Dziugaite, Ashish Khisti, Daniel M. Roy
In this work, we improve upon the stepwise analysis of noisy iterative learning algorithms initiated by Pensia, Jog, and Loh (2018) and recently extended by Bu, Zou, and Veeravalli (2019).
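A minimal sketch of stochastic gradient Langevin dynamics, the canonical noisy iterative learning algorithm these stepwise analyses target; `grad` is a placeholder minibatch-gradient callable, and the sqrt(2*lr/beta) noise scale is the standard Langevin choice rather than a detail taken from these papers:

```python
import numpy as np

def sgld(theta0, grad, data, lr, beta, steps, batch_size, rng):
    """Stochastic Gradient Langevin Dynamics. Because every update
    injects Gaussian noise, the information the iterates can carry
    about the training set can be bounded one step at a time and
    summed over the optimization trajectory."""
    theta = np.array(theta0, dtype=float)
    for _ in range(steps):
        idx = rng.choice(len(data), size=batch_size, replace=False)
        g = grad(theta, data[idx])           # minibatch gradient
        noise = rng.normal(size=theta.shape)
        theta = theta - lr * g + np.sqrt(2.0 * lr / beta) * noise
    return theta
```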