no code implementations • 29 Jun 2023 • Jaouad Mourtada, Tomas Vaškevičius, Nikita Zhivotovskiy
In this paper, we revisit and tighten classical results in the theory of aggregation in the statistical setting by replacing the global complexity with a smaller, local one.
no code implementations • 13 Mar 2023 • Jaouad Mourtada
We study sequential probability assignment in the Gaussian setting, where the goal is to predict, or equivalently compress, a sequence of real-valued observations almost as well as the best Gaussian distribution with mean constrained to a given subset of $\mathbf{R}^n$.
no code implementations • 16 Mar 2022 • Jaouad Mourtada, Lorenzo Rosasco
In this note, we provide an elementary analysis of the prediction error of ridge regression with random design.
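The setting above can be illustrated with a small simulation. The sketch below (names, parameter values, and the Gaussian design choice are all assumptions for illustration, not the note's analysis) fits ridge regression on a random design and estimates its prediction error on fresh covariates.

```python
import numpy as np

# Illustrative simulation of ridge regression with random (Gaussian) design.
# Parameter choices here are assumptions for the sketch only.
rng = np.random.default_rng(0)
n, d, lam, sigma = 200, 50, 1.0, 0.5
w_star = rng.normal(size=d) / np.sqrt(d)    # ground-truth parameter

X = rng.normal(size=(n, d))                 # random design matrix
y = X @ w_star + sigma * rng.normal(size=n)

# Ridge estimator: (X^T X + lam * I)^{-1} X^T y
w_hat = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# Prediction (excess) risk E[(x^T (w_hat - w_star))^2], estimated
# by Monte Carlo on fresh covariates drawn from the same design law.
X_test = rng.normal(size=(10_000, d))
risk = np.mean((X_test @ (w_hat - w_star)) ** 2)
print(f"estimated excess prediction risk: {risk:.4f}")
```

With isotropic Gaussian covariates the estimated risk is on the order of $\sigma^2 d/n$ plus a regularization bias, which is the kind of quantity the note's analysis controls.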
no code implementations • 25 Feb 2021 • Jaouad Mourtada, Tomas Vaškevičius, Nikita Zhivotovskiy
In this distribution-free regression setting, we show that boundedness of the conditional second moment of the response given the covariates is a necessary and sufficient condition for achieving nontrivial guarantees.
no code implementations • 17 Jun 2020 • Andrea Della Vecchia, Jaouad Mourtada, Ernesto de Vito, Lorenzo Rosasco
We study a natural extension of classical empirical risk minimization, where the hypothesis space is a random subspace of a given space.
no code implementations • 11 Jun 2020 • Dominic Richards, Jaouad Mourtada, Lorenzo Rosasco
We analyze the prediction error of ridge regression in an asymptotic regime where the sample size and dimension go to infinity at a proportional rate.
no code implementations • 23 Dec 2019 • Jaouad Mourtada, Stéphane Gaïffas
On standard examples, this bound scales as $d/n$ with $d$ the model dimension and $n$ the sample size, and critically remains valid under model misspecification.
no code implementations • 23 Dec 2019 • Jaouad Mourtada
We express the minimax risk in terms of the distribution of statistical leverage scores of individual samples, and deduce a minimax lower bound of $d/(n-d+1)$ for any covariate distribution, nearly matching the risk for Gaussian design.
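The two ingredients of this statement are easy to probe numerically. The sketch below (an illustration under isotropic Gaussian design, not the paper's proof) computes the statistical leverage scores of a design matrix, which always sum to $d$, and Monte Carlo-estimates the out-of-sample risk of least squares, to be compared with the $d/(n-d+1)$ lower bound.

```python
import numpy as np

# Leverage scores and least-squares risk under Gaussian design
# (illustrative sketch; all parameter values are assumptions).
rng = np.random.default_rng(1)
n, d, sigma, trials = 60, 10, 1.0, 500

# Leverage score of sample i: h_i = x_i^T (X^T X)^{-1} x_i.
X = rng.normal(size=(n, d))
H = X @ np.linalg.solve(X.T @ X, X.T)   # hat matrix
leverages = np.diag(H)
print("sum of leverage scores:", leverages.sum())  # trace of H = d

# Monte Carlo estimate of the excess risk of ordinary least squares.
# With w_star = 0 and isotropic design, the excess prediction risk
# equals ||w_hat||^2.
risks = []
for _ in range(trials):
    X = rng.normal(size=(n, d))
    y = sigma * rng.normal(size=n)
    w_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    risks.append(np.sum(w_hat ** 2))
print("Monte Carlo risk:", np.mean(risks))
print("lower bound d/(n-d+1):", d / (n - d + 1))
```

Under Gaussian design the simulated risk concentrates slightly above the $d/(n-d+1)$ bound, consistent with the "nearly matching" claim in the abstract.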
2 code implementations • 25 Jun 2019 • Jaouad Mourtada, Stéphane Gaïffas, Erwan Scornet
Using a variant of the Context Tree Weighting algorithm, we show that it is possible to efficiently perform an exact aggregation over all prunings of the trees; in particular, this enables us to obtain a truly online, parameter-free algorithm which is competitive with the optimal pruning of the Mondrian tree, and thus adaptive to the unknown regularity of the regression function.
no code implementations • 5 Sep 2018 • Jaouad Mourtada, Stéphane Gaïffas
Moreover, our analysis exhibits qualitative differences with other variants of the Hedge algorithm, such as the fixed-horizon version (with constant learning rate) and the one based on the so-called "doubling trick", both of which fail to adapt to the easier stochastic setting.
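The anytime variant discussed here can be sketched in a few lines. Below is a minimal exponential-weights (Hedge) loop with the decreasing learning rate $\eta_t = \sqrt{(\log K)/t}$; the exact constant in the paper may differ, and the stochastic loss model is an assumption chosen to illustrate an "easy" instance with a clear best expert.

```python
import numpy as np

# Minimal Hedge (exponential weights) with a decreasing, horizon-free
# learning rate eta_t = sqrt(log(K) / t). Illustrative sketch only.
rng = np.random.default_rng(2)
K, T = 5, 2000

# Stochastic ("easy") losses in {0, 1}: expert 0 is best on average.
means = np.array([0.3, 0.5, 0.5, 0.5, 0.5])
losses = (rng.uniform(size=(T, K)) < means).astype(float)

cum_loss = np.zeros(K)   # cumulative loss of each expert
alg_loss = 0.0           # cumulative loss of the algorithm
for t in range(1, T + 1):
    eta = np.sqrt(np.log(K) / t)
    # Exponential weights, shifted by the current minimum for stability.
    w = np.exp(-eta * (cum_loss - cum_loss.min()))
    w /= w.sum()
    alg_loss += w @ losses[t - 1]
    cum_loss += losses[t - 1]

regret = alg_loss - cum_loss.min()
print(f"regret over T={T} rounds: {regret:.2f}")
```

On such a stochastic instance the regret stays well below the worst-case $\sqrt{T \log K}$ level, which is the adaptivity phenomenon the abstract contrasts with the fixed-horizon and doubling-trick variants.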
no code implementations • 15 Mar 2018 • Jaouad Mourtada, Stéphane Gaïffas, Erwan Scornet
Our results include consistency and convergence rates for Mondrian Trees and Forests, which turn out to be minimax optimal on the set of $s$-H\"older functions with $s \in (0, 1]$ (for trees and forests) and $s \in (1, 2]$ (for forests only), assuming a proper tuning of their complexity parameter in both cases.
no code implementations • NeurIPS 2017 • Jaouad Mourtada, Stéphane Gaïffas, Erwan Scornet
We establish the consistency of Mondrian Forests, a randomized classification algorithm that can be implemented online.
no code implementations • 31 Aug 2017 • Jaouad Mourtada, Odalric-Ambrym Maillard
By contrast, designing strategies that both achieve a near-optimal regret and maintain a reasonable number of weights is highly non-trivial.