no code implementations • ICML 2020 • Blair Bilodeau, Dylan Foster, Daniel Roy
We study the classical problem of forecasting under logarithmic loss while competing against an arbitrary class of experts.
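For a finite expert class, the classical forecaster for logarithmic loss is the Bayesian mixture (exponential weights with unit learning rate), whose cumulative log loss exceeds that of the best expert by at most ln N. The sketch below is a generic illustration of that baseline, not the algorithm from the paper; the function name and binary-outcome setup are choices made here for concreteness.

```python
import math

def mixture_forecast(expert_probs, outcomes):
    """Bayesian mixture over a finite expert class under log loss.

    expert_probs[t][i]: expert i's predicted probability of outcome 1 at round t.
    outcomes[t]: observed binary outcome (0 or 1).
    Returns (mixture_log_loss, best_expert_log_loss).
    """
    n = len(expert_probs[0])
    log_weights = [0.0] * n      # uniform prior, kept in log space for stability
    expert_losses = [0.0] * n
    mixture_loss = 0.0
    for probs, y in zip(expert_probs, outcomes):
        # Normalize the current posterior weights.
        m = max(log_weights)
        ws = [math.exp(lw - m) for lw in log_weights]
        z = sum(ws)
        # Mixture probability assigned to outcome 1.
        p = sum(w * q for w, q in zip(ws, probs)) / z
        p_y = p if y == 1 else 1.0 - p
        mixture_loss += -math.log(p_y)
        # Bayesian update: multiply each weight by its likelihood of the outcome.
        for i, q in enumerate(probs):
            q_y = q if y == 1 else 1.0 - q
            log_weights[i] += math.log(q_y)
            expert_losses[i] += -math.log(q_y)
    return mixture_loss, min(expert_losses)
```

By the standard mixture argument, the first return value never exceeds the second by more than ln N, regardless of the data sequence; the papers listed here study how this guarantee extends to arbitrary, potentially nonparametric expert classes.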
1 code implementation • 7 Jun 2023 • Robert Geirhos, Roland S. Zimmermann, Blair Bilodeau, Wieland Brendel, Been Kim
Today, visualization methods form the foundation of our knowledge about the internal workings of neural networks, serving as a form of mechanistic interpretability.
1 code implementation • 22 Dec 2022 • Blair Bilodeau, Natasha Jaques, Pang Wei Koh, Been Kim
Despite a sea of interpretability methods that can produce plausible explanations, the field has empirically documented many failure cases of such methods.
1 code implementation • 10 Feb 2022 • Blair Bilodeau, Linbo Wang, Daniel M. Roy
In this work, we formalize and study this notion of adaptivity, and provide a novel algorithm that simultaneously achieves (a) optimal regret when a d-separator is observed, improving on classical minimax algorithms, and (b) significantly smaller regret than recent causal bandit algorithms when the observed variables are not a d-separator.
1 code implementation • NeurIPS 2021 • Jeffrey Negrea, Blair Bilodeau, Nicolò Campolongo, Francesco Orabona, Daniel M. Roy
Quantile (and, more generally, KL) regret bounds, such as those achieved by NormalHedge (Chaudhuri, Freund, and Hsu 2009) and its variants, relax the goal of competing against the best individual expert to only competing against a majority of experts on adversarial data.
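Quantile regret relaxes the standard goal of exponential-weights (Hedge) algorithms, which compete against the single best expert. As context, here is a minimal sketch of that plain Hedge baseline (not NormalHedge or the paper's algorithm); the fixed learning rate `eta` is the parameter that potential-based methods like NormalHedge dispense with.

```python
import math

def hedge(loss_matrix, eta):
    """Exponential weights (Hedge) over N experts with losses in [0, 1].

    loss_matrix[t][i]: loss of expert i at round t.
    eta: fixed learning rate.
    Returns (algorithm's cumulative expected loss, each expert's total loss).
    """
    n = len(loss_matrix[0])
    log_w = [0.0] * n          # log-space weights, uniform at the start
    alg_loss = 0.0
    totals = [0.0] * n
    for losses in loss_matrix:
        # Normalize weights and play the weighted average of the experts.
        m = max(log_w)
        ws = [math.exp(lw - m) for lw in log_w]
        z = sum(ws)
        alg_loss += sum(w * l for w, l in zip(ws, losses)) / z
        # Multiplicative update: down-weight experts in proportion to their loss.
        for i, l in enumerate(losses):
            log_w[i] -= eta * l
            totals[i] += l
    return alg_loss, totals
```

With the tuned rate eta = sqrt(8 ln N / T), Hedge's regret against the best expert is at most sqrt((T/2) ln N) on any loss sequence. Quantile bounds instead compare against the epsilon-quantile of experts, paying roughly ln(1/epsilon) in place of ln N.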
no code implementations • 13 Jul 2020 • Blair Bilodeau, Jeffrey Negrea, Daniel M. Roy
setting, where the unknown constraint set is restricted to be a singleton, and the unconstrained adversarial setting, where the constraint set is the set of all distributions.
no code implementations • 2 Jul 2020 • Blair Bilodeau, Dylan J. Foster, Daniel M. Roy
We consider the classical problem of sequential probability assignment under logarithmic loss while competing against an arbitrary, potentially nonparametric class of experts.