no code implementations • 29 Jul 2024 • Steve Hanneke, Kasper Green Larsen, Nikita Zhivotovskiy
This simple algorithm is known to achieve an optimal error bound in terms of the VC-dimension of $\mathcal{H}$ and the number of samples $n$.
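For orientation, the classical agnostic benchmark takes the following form (stated up to constants; the paper's precise statement may differ): with probability at least $1-\delta$,
$$\mathrm{err}(\hat{h}) - \inf_{h \in \mathcal{H}} \mathrm{err}(h) \lesssim \sqrt{\frac{d + \log(1/\delta)}{n}},$$
where $d$ is the VC-dimension of $\mathcal{H}$.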
no code implementations • 12 Mar 2024 • Ishaq Aden-Ali, Mikael Møller Høgsgaard, Kasper Green Larsen, Nikita Zhivotovskiy
Furthermore, we prove a near-optimal high-probability bound on this algorithm's error.
no code implementations • 15 Aug 2023 • Dirk van der Hoeven, Nikita Zhivotovskiy, Nicolò Cesa-Bianchi
Online learning methods yield sequential regret bounds under minimal assumptions and provide in-expectation risk bounds for statistical learning.
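As a point of reference, the textbook in-expectation online-to-batch conversion: if an online algorithm producing iterates $h_1, \ldots, h_T$ incurs regret $\mathrm{Regret}_T$ and the loss is convex, then the average $\bar{h} = \frac{1}{T}\sum_{t=1}^{T} h_t$ satisfies
$$\mathbb{E}\, R(\bar{h}) \le \inf_{h} R(h) + \frac{\mathbb{E}[\mathrm{Regret}_T]}{T}.$$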
no code implementations • 29 Jun 2023 • Jaouad Mourtada, Tomas Vaškevičius, Nikita Zhivotovskiy
In this paper, we revisit and tighten classical results in the theory of aggregation in the statistical setting by replacing the global complexity with a smaller, local one.
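For context, the classical global-complexity statement for model-selection aggregation over a finite dictionary $f_1, \ldots, f_M$ in bounded regression reads, up to constants,
$$\mathbb{E}\, R(\hat{f}) - \min_{1 \le j \le M} R(f_j) \lesssim \frac{\log M}{n};$$
here $\log M$ plays the role of the global complexity that a local analysis can shrink.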
no code implementations • 18 Apr 2023 • Ishaq Aden-Ali, Yeshwanth Cherapanamjeri, Abhishek Shetty, Nikita Zhivotovskiy
In this paper, we address this issue by providing optimal high probability risk bounds through a framework that surpasses the limitations of uniform convergence arguments.
no code implementations • 21 Feb 2023 • Nikita Puchkin, Nikita Zhivotovskiy
We consider the problem of stochastic convex optimization with exp-concave losses using Empirical Risk Minimization in a convex class.
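Recall the standard definition: a loss $\ell$ is $\eta$-exp-concave over a convex domain $\mathcal{W}$ if $w \mapsto \exp(-\eta\, \ell(w))$ is concave on $\mathcal{W}$; the squared loss on a bounded domain and the logarithmic loss are standard examples.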
no code implementations • 21 Jan 2023 • Arshak Minasyan, Nikita Zhivotovskiy
Assume that $X_{1}, \ldots, X_{N}$ form an $\varepsilon$-contaminated sample of $N$ independent Gaussian vectors in $\mathbb{R}^d$ with mean $\mu$ and covariance $\Sigma$.
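One common formalization of such contamination (the paper may use Huber's mixture model or the stronger adversarial model): an adversary observes $Y_1, \ldots, Y_N \overset{\text{i.i.d.}}{\sim} \mathcal{N}(\mu, \Sigma)$ and outputs $X_1, \ldots, X_N$ with
$$\#\{i : X_i \neq Y_i\} \le \varepsilon N.$$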
no code implementations • 28 Dec 2022 • Wolfgang Karl Härdle, Yegor Klochkov, Alla Petukhina, Nikita Zhivotovskiy
Markowitz mean-variance portfolios with sample mean and covariance as input parameters suffer from numerous issues in practice.
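A minimal sketch of the plug-in construction the abstract refers to (function name ours; assumes the sample covariance is invertible, i.e., more periods than assets). The sensitivity of the inverse-covariance step to estimation error is one of the practical issues in question.

```python
import numpy as np

def plugin_markowitz_weights(returns):
    """Plug-in mean-variance (tangency-style) weights from sample moments.

    returns: (n_periods, n_assets) array of asset returns.
    Assumes the sample covariance is invertible (n_periods > n_assets).
    """
    mu = returns.mean(axis=0)               # sample mean vector
    sigma = np.cov(returns, rowvar=False)   # sample covariance matrix
    raw = np.linalg.solve(sigma, mu)        # Sigma^{-1} mu
    return raw / raw.sum()                  # normalize weights to sum to one
```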
no code implementations • 19 Dec 2022 • Ishaq Aden-Ali, Yeshwanth Cherapanamjeri, Abhishek Shetty, Nikita Zhivotovskiy
In one of the first COLT open problems, Warmuth conjectured that this prediction strategy always implies an optimal high probability bound on the risk, and hence is also an optimal PAC algorithm.
no code implementations • 6 Jun 2022 • Dirk van der Hoeven, Nikita Zhivotovskiy, Nicolò Cesa-Bianchi
We prove that a variant of EWA either achieves a negative regret (i.e., the algorithm outperforms the best expert), or guarantees an $O(\log K)$ bound on both variance and regret.
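For reference, a minimal implementation of the vanilla EWA baseline with a fixed learning rate (not the variant analyzed in the paper):

```python
import numpy as np

def ewa_weights(loss_matrix, eta):
    """Vanilla exponentially weighted average (EWA) forecaster.

    loss_matrix: (T, K) array of losses for K experts over T rounds.
    eta: fixed learning rate.
    Returns the (T, K) array of weight vectors played at each round.
    """
    T, K = loss_matrix.shape
    cum_loss = np.zeros(K)
    weights = np.empty((T, K))
    for t in range(T):
        w = np.exp(-eta * (cum_loss - cum_loss.min()))  # shift for numerical stability
        weights[t] = w / w.sum()
        cum_loss += loss_matrix[t]
    return weights
```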
no code implementations • NeurIPS 2021 • Yegor Klochkov, Nikita Zhivotovskiy
The sharpest known high probability generalization bounds for uniformly stable algorithms (Feldman, Vondrák, 2018, 2019; Bousquet, Klochkov, Zhivotovskiy, 2020) contain a generally inevitable sampling error term of order $\Theta(1/\sqrt{n})$.
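As we recall it, the bound from the cited papers for a $\gamma$-uniformly-stable algorithm with losses in $[0,1]$ takes the form, up to constants: with probability at least $1-\delta$,
$$R(A_S) - \hat{R}_S(A_S) \lesssim \gamma \log(n) \log(1/\delta) + \sqrt{\frac{\log(1/\delta)}{n}},$$
where the second term is the sampling error referred to above.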
no code implementations • 25 Feb 2021 • Jaouad Mourtada, Tomas Vaškevičius, Nikita Zhivotovskiy
In this distribution-free regression setting, we show that boundedness of the conditional second moment of the response given the covariates is a necessary and sufficient condition for achieving nontrivial guarantees.
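Written out, the condition is simply
$$\sup_{x} \mathbb{E}\left[Y^{2} \mid X = x\right] < \infty,$$
where $Y$ is the response and $X$ the vector of covariates.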
no code implementations • 31 Jan 2021 • Nikita Puchkin, Nikita Zhivotovskiy
We show that in pool-based active classification, without assumptions on the underlying distribution, if the learner is allowed to abstain from some predictions by paying a price marginally smaller than the average loss $1/2$ of a random guess, then exponential savings in the number of label requests are possible whenever they are possible in the corresponding realizable problem.
no code implementations • 22 Oct 2020 • Luc Devroye, Silvio Lattanzi, Gabor Lugosi, Nikita Zhivotovskiy
We study the problem of estimating the common mean $\mu$ of $n$ independent symmetric random variables with different and unknown standard deviations $\sigma_1 \le \sigma_2 \le \cdots \le\sigma_n$.
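A toy instantiation of this setting (Gaussian for concreteness; the paper allows general symmetric distributions). The sample median is a natural scale-invariant baseline here, whereas the sample mean is dominated by the high-variance observations; none of this is the paper's estimator.

```python
import numpy as np

rng = np.random.default_rng(0)
mu = 3.0
# heterogeneous, unknown standard deviations sigma_1 <= ... <= sigma_n
sigmas = np.sort(rng.uniform(0.1, 100.0, size=1000))
# one symmetric observation per variance level (Gaussian toy model)
x = mu + sigmas * rng.standard_normal(len(sigmas))

print("median:", np.median(x))  # insensitive to the large-sigma observations
print("mean:  ", x.mean())      # dominated by the large-sigma observations
```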
no code implementations • 19 Sep 2020 • Tomas Vaškevičius, Nikita Zhivotovskiy
We study the problem of predicting as well as the best linear predictor in a bounded Euclidean ball with respect to the squared loss.
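A generic baseline for this setting, not the paper's method: projected online gradient descent for the squared loss over a Euclidean ball (function name and step size ours).

```python
import numpy as np

def projected_ogd(X, y, radius, lr=0.01):
    """Projected online gradient descent over the ball of given radius.

    X: (n, d) covariates; y: (n,) responses; squared loss per round.
    """
    n, d = X.shape
    w = np.zeros(d)
    for t in range(n):
        grad = 2.0 * (X[t] @ w - y[t]) * X[t]  # gradient of (x_t^T w - y_t)^2
        w -= lr * grad
        norm = np.linalg.norm(w)
        if norm > radius:                      # project back onto the ball
            w *= radius / norm
    return w
```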
no code implementations • 24 May 2020 • Olivier Bousquet, Steve Hanneke, Shay Moran, Nikita Zhivotovskiy
It was recently shown by Hanneke (2016) that the optimal sample complexity of PAC learning for any VC class $C$ is achieved by a particular improper learning algorithm, which outputs a specific majority vote of hypotheses in $C$. This leaves open the question of when this bound can be achieved by proper learning algorithms, which are restricted to always output a hypothesis from $C$. In this paper we aim to characterize the classes for which the optimal sample complexity can be achieved by a proper learning algorithm.
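A caricature of the majority-vote idea, only to fix the improper-vs-proper distinction (Hanneke's optimal algorithm uses a specific recursive subsampling scheme that this sketch does not reproduce; `erm` is an assumed black box returning a hypothesis as a callable, labels in $\{0, 1\}$):

```python
import numpy as np

def majority_of_erms(erm, X, y, n_blocks=3):
    """Train ERM on a few leave-one-block-out subsamples and return the
    majority vote of the resulting hypotheses -- an improper predictor,
    since the vote need not belong to the class itself."""
    n = len(y)
    hypotheses = []
    for i in range(n_blocks):
        keep = np.delete(np.arange(n), np.arange(i, n, n_blocks))
        hypotheses.append(erm(X[keep], y[keep]))

    def vote(x):
        preds = np.array([h(x) for h in hypotheses])
        return 1 if preds.mean() >= 0.5 else 0

    return vote
```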
no code implementations • 6 Feb 2020 • Yegor Klochkov, Alexey Kroshnin, Nikita Zhivotovskiy
We consider robust algorithms for the $k$-means clustering problem, where a quantizer is constructed based on $N$ independent observations.
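To fix terminology, a quantizer here is a set of $k$ centers; the plain (non-robust) empirical quantizer is computed by Lloyd's algorithm, sketched below (the paper studies robust alternatives):

```python
import numpy as np

def lloyd_quantizer(X, k, n_iter=50, seed=0):
    """Plain Lloyd's algorithm: k centers minimizing empirical squared
    distortion. Not robust to outliers -- a single far-away point can
    drag a center arbitrarily."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(n_iter):
        # assign each point to its nearest center
        labels = ((X[:, None, :] - centers[None]) ** 2).sum(-1).argmin(1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers
```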
no code implementations • 28 Jan 2020 • Gergely Neu, Nikita Zhivotovskiy
In the setting of sequential prediction of individual $\{0, 1\}$-sequences with expert advice, we show that by allowing the learner to abstain from the prediction by paying a cost marginally smaller than $\frac{1}{2}$ (say, $0.49$), it is possible to achieve expected regret bounds that are independent of the time horizon $T$.
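A toy rendering of the protocol only (names and the threshold rule are ours; this is emphatically not the paper's algorithm): a wrong prediction costs $1$, a correct one $0$, and abstaining costs just under $1/2$.

```python
def round_loss(prediction, outcome, cost_abstain=0.49):
    """0/1 loss with an abstention option priced just below 1/2."""
    if prediction == "abstain":
        return cost_abstain
    return float(prediction != outcome)

def follow_majority_or_abstain(expert_preds, margin=0.2):
    """Toy rule: follow a clear expert majority, abstain when the experts
    are nearly split. Illustrates the protocol, not the paper's method."""
    frac_ones = sum(expert_preds) / len(expert_preds)
    if abs(frac_ones - 0.5) < margin:
        return "abstain"
    return 1 if frac_ones > 0.5 else 0
```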
no code implementations • 28 Oct 2019 • Olivier Bousquet, Nikita Zhivotovskiy
First, we consider classification with a reject option, namely Chow's reject option model, and show that by slightly lowering the impact of hard instances, a learning rate of order $O\left(\frac{d}{n}\log \frac{n}{d}\right)$ is always achievable in the agnostic setting by a specific learning algorithm.
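For orientation, the Bayes-optimal rule in Chow's model with rejection cost $c < 1/2$ and regression function $\eta(x) = \mathbb{P}(Y = 1 \mid X = x)$ is
$$f^{*}(x) = \begin{cases} 1, & \eta(x) \ge 1 - c,\\ 0, & \eta(x) \le c,\\ \text{reject}, & \text{otherwise}. \end{cases}$$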
no code implementations • 17 Oct 2019 • Olivier Bousquet, Yegor Klochkov, Nikita Zhivotovskiy
In a series of recent breakthrough papers by Feldman and Vondrák (2018, 2019), it was shown that the best known high probability upper bounds for uniformly stable learning algorithms due to Bousquet and Elisseeff (2002) are sub-optimal in some natural regimes.
no code implementations • 28 Nov 2017 • Andrey Kupavskii, Nikita Zhivotovskiy
In many interesting situations, the size of $\epsilon$-nets depends only on $\epsilon$ and on various complexity measures.
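For orientation, recall the classical Haussler-Welzl bound: a set is an $\epsilon$-net for a range space if it intersects every range of probability mass at least $\epsilon$, and for a range space of VC-dimension $d$ a random sample of size
$$O\!\left(\frac{d}{\epsilon} \log \frac{1}{\epsilon}\right)$$
is an $\epsilon$-net with high probability.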
no code implementations • 12 May 2015 • Ilya Tolstikhin, Nikita Zhivotovskiy, Gilles Blanchard
This paper introduces a new complexity measure for transductive learning called Permutational Rademacher Complexity (PRC) and studies its properties.
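For reference, the standard Rademacher complexity that PRC modifies is
$$\mathcal{R}_n(\mathcal{F}) = \mathbb{E}_{\sigma} \sup_{f \in \mathcal{F}} \frac{1}{n} \sum_{i=1}^{n} \sigma_i f(Z_i),$$
with i.i.d. signs $\sigma_i \in \{\pm 1\}$; roughly speaking, PRC replaces the random signs by a random permutation of the combined training and test sample (see the paper for the precise definition).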