Search Results for author: Tim van Erven

Found 21 papers, 2 papers with code

Generalization Guarantees via Algorithm-dependent Rademacher Complexity

no code implementations • 4 Jul 2023 • Sarah Sachs, Tim van Erven, Liam Hodgkinson, Rajiv Khanna, Umut Simsekli

Algorithm- and data-dependent generalization bounds are required to explain the generalization behavior of modern machine learning algorithms.

Generalization Bounds
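
For context, the standard (algorithm-independent) notion in the title: the empirical Rademacher complexity of a class $\mathcal{F}$ on a sample $S = (z_1, \dots, z_n)$, with independent uniform signs $\sigma_i \in \{\pm 1\}$, is

```latex
\hat{\mathfrak{R}}_S(\mathcal{F})
  \;=\; \mathbb{E}_{\sigma}\!\left[\,\sup_{f \in \mathcal{F}}
      \frac{1}{n}\sum_{i=1}^{n} \sigma_i f(z_i)\right].
```

For $[0,1]$-valued classes this quantity controls the generalization gap uniformly over all of $\mathcal{F}$; the paper's refinement is to make the complexity term depend on the algorithm and the data rather than on the full class.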

The Risks of Recourse in Binary Classification

1 code implementation • 1 Jun 2023 • Hidde Fokkema, Damien Garreau, Tim van Erven

Algorithmic recourse provides explanations that help users overturn an unfavorable decision by a machine learning system.

Binary Classification, Classification
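
To illustrate the basic notion of recourse (background only, not the paper's analysis): for a linear classifier there is a closed-form minimal change that overturns a negative decision. The sketch below is a hypothetical illustration; the function name, the margin parameter, and the use of the L2 norm are all assumptions.

```python
import numpy as np

def linear_recourse(x, w, b, margin=1e-6):
    """Minimal L2 perturbation that moves x across the decision
    boundary of the linear classifier sign(w @ x + b).

    Illustrative sketch only, not the algorithm studied in the paper.
    """
    score = w @ x + b
    if score >= 0:
        return x                      # decision is already favorable
    # Project x onto the hyperplane w @ x + b = 0 and step slightly
    # past it, so the predicted label actually flips.
    step = (margin - score) / (w @ w)
    return x + step * w

x = np.array([1.0, -2.0])
w = np.array([0.5, 1.0])
print(linear_recourse(x, w, b=0.0))   # counterfactual point across the boundary
```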

Accelerated Rates between Stochastic and Adversarial Online Convex Optimization

no code implementations • 6 Mar 2023 • Sarah Sachs, Hédi Hadiji, Tim van Erven, Cristóbal Guzmán

In the fully adversarial case our bounds gracefully deteriorate to match the minimax regret.
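
For context (standard background, not a claim of the paper): for convex $G$-Lipschitz losses on a domain of diameter $D$, the minimax regret of online convex optimization is

```latex
R_T \;=\; \sum_{t=1}^{T} f_t(x_t) \;-\; \min_{x \in \mathcal{X}} \sum_{t=1}^{T} f_t(x)
\;=\; \Theta\!\left(DG\sqrt{T}\right),
```

so "gracefully deteriorate to match the minimax regret" means the intermediate bounds recover this $\sqrt{T}$ rate in the worst case.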

Modifying Squint for Prediction with Expert Advice in a Changing Environment

no code implementations • 14 Sep 2022 • Thom Neuteboom, Tim van Erven

Hence, we provide a new algorithm, Squint-CE, which is suitable for a changing environment and preserves the properties of Squint.

Attribution-based Explanations that Provide Recourse Cannot be Robust

no code implementations • 31 May 2022 • Hidde Fokkema, Rianne de Heide, Tim van Erven

Finally, we strengthen our impossibility result for the restricted case where users are only able to change a single attribute of $x$, by providing an exact characterization of the functions $f$ to which impossibility applies.

Attribute, BIG-bench Machine Learning, +1

Between Stochastic and Adversarial Online Convex Optimization: Improved Regret Bounds via Smoothness

no code implementations • 15 Feb 2022 • Sarah Sachs, Hédi Hadiji, Tim van Erven, Cristóbal Guzmán

In the fully i.i.d. case, our bounds match the rates one would expect from results in stochastic acceleration, and in the fully adversarial case they gracefully deteriorate to match the minimax regret.

Scale-free Unconstrained Online Learning for Curved Losses

no code implementations • 11 Feb 2022 • Jack J. Mayo, Hédi Hadiji, Tim van Erven

We follow up on this observation by showing that there is in fact never a price to pay for adaptivity if we specialise to any of the other common supervised online learning losses: our results cover log loss, (linear and non-parametric) logistic regression, square loss prediction, and (linear and non-parametric) least-squares regression.

Computational Efficiency, regression

Robust Online Convex Optimization in the Presence of Outliers

no code implementations • 5 Jul 2021 • Tim van Erven, Sarah Sachs, Wouter M. Koolen, Wojciech Kotłowski

If the outliers are chosen adversarially, we show that a simple filtering strategy on extreme gradients incurs O(k) additive overhead compared to the usual regret bounds, and that this is unimprovable, which means that k needs to be sublinear in the number of rounds.
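
A minimal sketch of the kind of filtering strategy described, skipping updates on rounds whose gradients look extreme; the fixed norm threshold here is an illustrative assumption, not the paper's exact rule.

```python
import numpy as np

def filtered_ogd(grads, eta, threshold, proj=lambda w: w):
    """Online gradient descent that ignores rounds whose gradient norm
    exceeds a threshold, as a crude guard against outliers.

    Illustrative sketch; the paper's filtering strategy and its O(k)
    additive-overhead analysis are more refined than this.
    """
    w = np.zeros_like(grads[0])
    iterates = []
    for g in grads:
        iterates.append(w.copy())
        if np.linalg.norm(g) <= threshold:    # keep only moderate gradients
            w = proj(w - eta * g)
        # else: treat the round as an outlier and skip the update
    return iterates
```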

Distributed Online Learning for Joint Regret with Communication Constraints

no code implementations • 15 Feb 2021 • Dirk van der Hoeven, Hédi Hadiji, Tim van Erven

Each round, an adversary first activates one of the agents to issue a prediction and provides a corresponding gradient, and then the agents are allowed to send a $b$-bit message to their neighbors in the graph.

MetaGrad: Adaptation using Multiple Learning Rates in Online Learning

no code implementations • 12 Feb 2021 • Tim van Erven, Wouter M. Koolen, Dirk van der Hoeven

We provide a new adaptive method for online convex optimization, MetaGrad, that is robust to general convex losses but achieves faster rates for a broad class of special functions, including exp-concave and strongly convex functions, but also various types of stochastic and non-stochastic functions without any curvature.
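
A highly simplified sketch of the multiple-learning-rates idea: run one gradient-descent learner per candidate learning rate and let a master combine them with exponential weights on linearized losses. This is not MetaGrad itself, whose master uses tilted exponential weights on quadratic surrogate losses; the grid of rates, the master rate, and the linearized losses below are assumptions for illustration.

```python
import numpy as np

def multi_eta_ogd(grad_fn, T, d, etas=None, eta_master=0.5):
    """One OGD learner per candidate learning rate, aggregated by
    exponential weights on the linearized losses g_t . w.

    Simplified illustration of the idea behind MetaGrad, not the
    actual MetaGrad update.
    """
    if etas is None:
        etas = [2.0 ** -i for i in range(8)]       # exponential grid of rates
    experts = [np.zeros(d) for _ in etas]
    log_w = np.zeros(len(etas))                    # master log-weights
    w = np.zeros(d)
    for t in range(T):
        p = np.exp(log_w - log_w.max())
        p /= p.sum()
        w = sum(pi * wi for pi, wi in zip(p, experts))   # master prediction
        g = grad_fn(w, t)                          # gradient at the master point
        for i, eta in enumerate(etas):
            log_w[i] -= eta_master * float(g @ experts[i])   # linearized loss
            experts[i] = experts[i] - eta * g                # OGD step
    return w
```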

Explaining Predictions by Approximating the Local Decision Boundary

no code implementations • 14 Jun 2020 • Georgios Vlassopoulos, Tim van Erven, Henry Brighton, Vlado Menkovski

We address this by introducing a new benchmark data set with artificially generated Iris images, and showing that we can recover the latent attributes that locally determine the class.

Attribute

Lipschitz Adaptivity with Multiple Learning Rates in Online Learning

no code implementations • 27 Feb 2019 • Zakaria Mhammedi, Wouter M. Koolen, Tim van Erven

For MetaGrad, we further improve the computational efficiency of handling constraints on the domain of prediction, and we remove the need to specify the number of rounds in advance.

Active Learning, Computational Efficiency

The Many Faces of Exponential Weights in Online Learning

no code implementations • 21 Feb 2018 • Dirk van der Hoeven, Tim van Erven, Wojciech Kotłowski

A standard introduction to online learning might place Online Gradient Descent at its center and then proceed to develop generalizations and extensions like Online Mirror Descent and second-order methods.

Second-order methods
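
Since the abstract takes Online Gradient Descent as the canonical starting point, here is the standard projected version for reference (textbook algorithm, not specific to this paper):

```python
import numpy as np

def online_gradient_descent(grad_fn, T, d, eta=0.1, radius=1.0):
    """Projected online gradient descent on the Euclidean ball:
    w <- Proj(w - eta * g_t)."""
    w = np.zeros(d)
    for t in range(T):
        g = grad_fn(w, t)
        w = w - eta * g
        norm = np.linalg.norm(w)
        if norm > radius:              # project back onto the ball
            w *= radius / norm
    return w
```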

Combining Adversarial Guarantees and Stochastic Fast Rates in Online Learning

no code implementations • NeurIPS 2016 • Wouter M. Koolen, Peter Grünwald, Tim van Erven

We consider online learning algorithms that guarantee worst-case regret rates in adversarial environments (so they can be deployed safely and will perform robustly), yet adapt optimally to favorable stochastic environments (so they will perform well in a variety of settings of practical importance).

MetaGrad: Multiple Learning Rates in Online Learning

1 code implementation • NeurIPS 2016 • Tim van Erven, Wouter M. Koolen

In online convex optimization it is well known that certain subclasses of objective functions are much easier than arbitrary convex functions.
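
The "easier subclasses" are standard: for $\lambda$-strongly convex losses, gradient descent with step sizes $\eta_t = 1/(\lambda t)$ achieves logarithmic regret, and for $\alpha$-exp-concave losses Online Newton Step does the same up to dimension factors, versus $\Theta(\sqrt{T})$ for general convex Lipschitz losses:

```latex
R_T = O\!\left(\tfrac{G^2}{\lambda}\log T\right) \ \ (\lambda\text{-strongly convex}),
\qquad
R_T = O\!\left(\big(\tfrac{1}{\alpha} + GD\big)\, d \log T\right) \ \ (\alpha\text{-exp-concave}).
```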

Fast rates in statistical and online learning

no code implementations • 9 Jul 2015 • Tim van Erven, Peter D. Grünwald, Nishant A. Mehta, Mark D. Reid, Robert C. Williamson

For bounded losses, we show how the central condition enables a direct proof of fast rates and we prove its equivalence to the Bernstein condition, itself a generalization of the Tsybakov margin condition, both of which have played a central role in obtaining fast rates in statistical learning.

Density Estimation, Learning Theory
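
For reference, the Bernstein condition mentioned here is standard: writing $X_f = \ell_f(Z) - \ell_{f^*}(Z)$ for the excess loss, the $(\beta, B)$-Bernstein condition requires

```latex
\mathbb{E}\!\left[X_f^2\right] \;\le\; B \left(\mathbb{E}\!\left[X_f\right]\right)^{\beta}
\qquad \text{for all } f \in \mathcal{F},\ \beta \in [0, 1],
```

which is known to yield rates of order $n^{-1/(2-\beta)}$, interpolating between the slow rate $n^{-1/2}$ at $\beta = 0$ and the fast rate $n^{-1}$ at $\beta = 1$.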

Second-order Quantile Methods for Experts and Combinatorial Games

no code implementations • 27 Feb 2015 • Wouter M. Koolen, Tim van Erven

We aim to design strategies for sequential decision making that adjust to the difficulty of the learning problem.

Decision Making

Learning the Learning Rate for Prediction with Expert Advice

no code implementations • NeurIPS 2014 • Wouter M. Koolen, Tim van Erven, Peter Grünwald

Most standard algorithms for prediction with expert advice depend on a parameter called the learning rate.
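
For instance, the classic exponential-weights (Hedge) update makes this dependence explicit; the sketch below is the textbook algorithm, included only to show where $\eta$ enters.

```python
import numpy as np

def hedge(loss_matrix, eta):
    """Exponential weights over N experts: weights proportional to
    exp(-eta * cumulative loss).

    loss_matrix: array of shape (T, N) with expert losses in [0, 1].
    Returns the learner's total expected loss and the best expert's loss.
    """
    T, N = loss_matrix.shape
    cum_loss = np.zeros(N)
    total = 0.0
    for t in range(T):
        w = np.exp(-eta * (cum_loss - cum_loss.min()))  # numerically stable
        w /= w.sum()
        total += float(w @ loss_matrix[t])              # expected loss this round
        cum_loss += loss_matrix[t]
    return total, float(cum_loss.min())
```

With the optimal fixed tuning $\eta = \sqrt{8 \ln N / T}$ the regret is at most $\sqrt{(T/2) \ln N}$, but that tuning requires knowing $T$ in advance and is not adaptive to easy data, which is the problem the paper addresses.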

PAC-Bayes Mini-tutorial: A Continuous Union Bound

no code implementations • 7 May 2014 • Tim van Erven

When I first encountered PAC-Bayesian concentration inequalities they seemed to me to be rather disconnected from good old-fashioned results like Hoeffding's and Bernstein's inequalities.

BIG-bench Machine Learning, Relation
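
For reference, a representative McAllester-style PAC-Bayesian bound of the kind the tutorial connects to classical inequalities (standard result for $[0,1]$-bounded losses): for any prior $\pi$ fixed in advance, with probability at least $1 - \delta$, simultaneously for all posteriors $\rho$,

```latex
\mathbb{E}_{h \sim \rho}\, L(h)
\;\le\; \mathbb{E}_{h \sim \rho}\, \hat{L}(h)
\;+\; \sqrt{\frac{\mathrm{KL}(\rho \,\|\, \pi) + \ln\frac{2\sqrt{n}}{\delta}}{2n}},
```

which reads as a Hoeffding-type bound plus a KL penalty playing the role of a union bound over a continuous hypothesis space.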

A Second-order Bound with Excess Losses

no code implementations • 10 Feb 2014 • Pierre Gaillard, Gilles Stoltz, Tim van Erven

We study online aggregation of the predictions of experts, and first show new second-order regret bounds in the standard setting, which are obtained via a version of the Prod algorithm (and also a version of the polynomially weighted average algorithm) with multiple learning rates.
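
A sketch of a Prod-style update with one learning rate per expert, in the spirit described; the exact normalization and tuning of the learning rates in the paper's algorithm are more careful, so treat this as an assumed simplification.

```python
import numpy as np

def ml_prod(loss_matrix, etas):
    """Prod-style aggregation with per-expert learning rates: each
    weight is multiplied by (1 + eta_i * (mixture loss - expert loss)).

    Assumes losses in [0, 1] and etas <= 1/2 so weights stay positive.
    Simplified sketch, not the paper's exact algorithm.
    """
    T, N = loss_matrix.shape
    etas = np.asarray(etas, dtype=float)
    w = np.ones(N)
    p = np.full(N, 1.0 / N)
    for t in range(T):
        p = etas * w
        p /= p.sum()
        mix = float(p @ loss_matrix[t])               # mixture's loss
        w *= 1.0 + etas * (mix - loss_matrix[t])      # Prod update on excess losses
    return p
```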

Rényi Divergence and Kullback-Leibler Divergence

no code implementations • 12 Jun 2012 • Tim van Erven, Peter Harremoës

Rényi divergence is related to Rényi entropy much like Kullback-Leibler divergence is related to Shannon entropy, and comes up in many settings.
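
For reference, for $\alpha \in (0, 1) \cup (1, \infty)$ and distributions $P, Q$ on a discrete space,

```latex
D_\alpha(P \,\|\, Q) \;=\; \frac{1}{\alpha - 1} \ln \sum_{x} P(x)^{\alpha} Q(x)^{1-\alpha},
\qquad
\lim_{\alpha \to 1} D_\alpha(P \,\|\, Q) \;=\; \sum_{x} P(x) \ln \frac{P(x)}{Q(x)},
```

so Kullback-Leibler divergence is recovered as the $\alpha \to 1$ limit.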
