Search Results for author: Blair Bilodeau

Found 7 papers, 4 papers with code

Improved Bounds on Minimax Regret under Logarithmic Loss via Self-Concordance

no code implementations • ICML 2020 • Blair Bilodeau, Dylan Foster, Daniel Roy

We study the classical problem of forecasting under logarithmic loss while competing against an arbitrary class of experts.
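
For reference, the regret under logarithmic loss against an expert class $\mathcal{F}$ that such bounds control is, in generic notation (assumed here, not necessarily the paper's exact formulation),
$$\mathrm{Reg}_T(\mathcal{F}) \;=\; \sum_{t=1}^{T} -\log \hat{p}_t(y_t) \;-\; \inf_{f \in \mathcal{F}} \sum_{t=1}^{T} -\log f_t(y_t),$$
where $\hat{p}_t$ is the forecaster's predicted distribution at round $t$, $f_t$ is expert $f$'s prediction, and $y_t$ is the observed outcome.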

Don't trust your eyes: on the (un)reliability of feature visualizations

1 code implementation • 7 Jun 2023 • Robert Geirhos, Roland S. Zimmermann, Blair Bilodeau, Wieland Brendel, Been Kim

Today, visualization methods form the foundation of our knowledge about the internal workings of neural networks, as a type of mechanistic interpretability.

Impossibility Theorems for Feature Attribution

1 code implementation • 22 Dec 2022 • Blair Bilodeau, Natasha Jaques, Pang Wei Koh, Been Kim

Despite a sea of interpretability methods that can produce plausible explanations, the field has also empirically seen many failure cases of such methods.

Adaptively Exploiting d-Separators with Causal Bandits

1 code implementation • 10 Feb 2022 • Blair Bilodeau, Linbo Wang, Daniel M. Roy

In this work, we formalize and study this notion of adaptivity, and provide a novel algorithm that simultaneously achieves (a) optimal regret when a d-separator is observed, improving on classical minimax algorithms, and (b) significantly smaller regret than recent causal bandit algorithms when the observed variables are not a d-separator.
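
As a reminder of the benchmark at stake, the regret after $T$ rounds in a (causal) bandit problem is, in generic notation assumed here rather than taken from the paper,
$$R_T \;=\; T \max_{a} \mu(a) \;-\; \mathbb{E}\!\left[\sum_{t=1}^{T} \mu(A_t)\right],$$
where $\mu(a)$ is the expected reward of intervention $a$ and $A_t$ is the intervention chosen at round $t$.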

Minimax Optimal Quantile and Semi-Adversarial Regret via Root-Logarithmic Regularizers

1 code implementation • NeurIPS 2021 • Jeffrey Negrea, Blair Bilodeau, Nicolò Campolongo, Francesco Orabona, Daniel M. Roy

Quantile (and, more generally, KL) regret bounds, such as those achieved by NormalHedge (Chaudhuri, Freund, and Hsu 2009) and its variants, relax the goal of competing against the best individual expert to only competing against a majority of experts on adversarial data.
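
Concretely, with $N$ experts whose cumulative losses are sorted as $L_T(1) \le \dots \le L_T(N)$, the standard $\varepsilon$-quantile regret replaces the best expert by the $\lceil \varepsilon N \rceil$-th best (generic notation, not necessarily the paper's):
$$\mathrm{Reg}_T(\varepsilon) \;=\; \hat{L}_T \;-\; L_T\!\left(\lceil \varepsilon N \rceil\right),$$
where $\hat{L}_T$ is the learner's cumulative loss.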

Relaxing the I.I.D. Assumption: Adaptively Minimax Optimal Regret via Root-Entropic Regularization

no code implementations • 13 Jul 2020 • Blair Bilodeau, Jeffrey Negrea, Daniel M. Roy

This semi-adversarial framework spans, at its extremes, the classical i.i.d. setting, when the unknown constraint set is restricted to be a singleton, and the unconstrained adversarial setting, when the constraint set is the set of all distributions.
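
In this framing (generic notation, assumed here), the per-round data-generating distribution is constrained to lie in a set $\Pi$, and the two extremes correspond to
$$\Pi = \{P\} \;\;(\text{i.i.d. setting}) \qquad \text{and} \qquad \Pi = \Delta(\mathcal{Y}) \;\;(\text{fully adversarial setting}),$$
where $\Delta(\mathcal{Y})$ denotes the set of all distributions on the outcome space $\mathcal{Y}$.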

Tight Bounds on Minimax Regret under Logarithmic Loss via Self-Concordance

no code implementations • 2 Jul 2020 • Blair Bilodeau, Dylan J. Foster, Daniel M. Roy

We consider the classical problem of sequential probability assignment under logarithmic loss while competing against an arbitrary, potentially nonparametric class of experts.
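
The minimax regret studied here is the worst case of the regret written out above, optimized over forecasting strategies (again generic notation, assumed rather than taken from the paper):
$$\mathcal{R}_T(\mathcal{F}) \;=\; \inf_{\hat{p}} \sup_{y_1, \dots, y_T} \left[ \sum_{t=1}^{T} -\log \hat{p}_t(y_t) - \inf_{f \in \mathcal{F}} \sum_{t=1}^{T} -\log f_t(y_t) \right],$$
where the outer infimum ranges over forecasting strategies and the supremum ranges over outcome sequences.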
