no code implementations • NeurIPS 2023 • Steve Hanneke, Shay Moran, Jonathan Shafer
We present new upper and lower bounds on the number of learner mistakes in the "transductive" online learning setting of Ben-David, Kushilevitz and Mansour (1997).
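For orientation, here is a minimal sketch of the transductive online setting as it is usually stated (the protocol details below are an assumption for illustration, not quoted from the abstract): the sequence of instances $x_1, \dots, x_n$ is fixed and shown to the learner in advance; the labels are then revealed one at a time, and in round $t$ the learner predicts $\hat{y}_t$ before seeing the true label $y_t$. The quantity being bounded is the number of mistakes,
$$ M \;=\; \sum_{t=1}^{n} \mathbf{1}\{\hat{y}_t \neq y_t\}. $$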
no code implementations • NeurIPS 2023 • Shay Moran, Hilla Schefler, Jonathan Shafer
We show that many definitions of stability found in the learning theory literature are equivalent to one another.
no code implementations • 24 Sep 2023 • Michael Gastpar, Ido Nachum, Jonathan Shafer, Thomas Weinberger
We study the notion of a generalization bound being uniformly tight, meaning that the difference between the bound and the population loss is small for all learning algorithms and all population distributions.
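Taken at face value, the sentence suggests a formalization along these lines (the notation and the placement of the quantifiers are an illustrative assumption, not the paper's exact definition): a generalization bound $B$ is uniformly $\epsilon$-tight if
$$ \sup_{A}\ \sup_{\mathcal{D}}\ \mathbb{E}_{S \sim \mathcal{D}^n}\bigl[\, B(A, S) - L_{\mathcal{D}}(A(S)) \,\bigr] \;\le\; \epsilon, $$
where $A$ ranges over learning algorithms, $\mathcal{D}$ over population distributions, and $L_{\mathcal{D}}$ denotes the population loss.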
no code implementations • 28 Nov 2022 • Saachi Mutreja, Jonathan Shafer
Our final result showcases the proposed definition: a protocol for verifying statistical query algorithms that satisfy a combinatorial constraint on their queries.
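For context, the statistical query (SQ) model referenced here is the standard one (the tolerance parameter $\tau$ below is the usual textbook formulation, assumed rather than quoted from the abstract): an SQ algorithm accesses the unknown distribution $\mathcal{D}$ only through queries $q$ mapping examples to $[0, 1]$, and the oracle may return any value $v$ satisfying
$$ \bigl|\, v - \mathbb{E}_{z \sim \mathcal{D}}[\, q(z) \,] \,\bigr| \;\le\; \tau. $$
The protocol then verifies algorithms whose collections of such queries satisfy the paper's combinatorial constraint.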
no code implementations • 31 Aug 2022 • Olivier Bousquet, Steve Hanneke, Shay Moran, Jonathan Shafer, Ilya Tolstikhin
We solve this problem in a principled manner by introducing a combinatorial dimension called VCL that characterizes the best $d'$ for which $d'/n$ is a strong minimax lower bound.
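As a rough illustration of the terminology (the constant $c$ and the quantifier over sample sizes are assumptions; the paper's precise definition may differ): a rate $d'/n$ is a strong minimax lower bound if for every learner $A$ there is a fixed distribution $\mathcal{D}$ (realizable by the class, in this setting) such that
$$ \mathbb{E}_{S \sim \mathcal{D}^n}\bigl[\, L_{\mathcal{D}}(A(S)) \,\bigr] \;\ge\; c \cdot \frac{d'}{n} \quad \text{for every sample size } n, $$
and VCL characterizes the best (largest) such $d'$.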
no code implementations • 16 Apr 2018 • Ido Nachum, Jonathan Shafer, Amir Yehudayoff
We introduce a class of functions of VC dimension $d$ over the domain $\mathcal{X}$ with information complexity at least $\Omega\left(d\log \log \frac{|\mathcal{X}|}{d}\right)$ bits for any consistent and proper algorithm (deterministic or randomized).
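Here "information complexity" can be read, as is standard in this line of work (this reading and the notation are an assumption, not a quote from the abstract), as the mutual information between the labeled sample $S$ and the hypothesis the learner outputs; the result then says that every consistent and proper learner $A$ satisfies
$$ I\bigl( S \,;\, A(S) \bigr) \;=\; \Omega\!\left( d \log\log \frac{|\mathcal{X}|}{d} \right) \text{ bits.} $$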
no code implementations • 14 Oct 2017 • Raef Bassily, Shay Moran, Ido Nachum, Jonathan Shafer, Amir Yehudayoff
We discuss an approach that allows us to prove upper bounds on the amount of information that algorithms reveal about their inputs, and we also provide a lower bound by exhibiting a simple concept class for which every (possibly randomized) empirical risk minimizer must reveal a substantial amount of information.
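To make the terms concrete (the 0-1 loss and the notation are the standard ones, assumed for illustration): an empirical risk minimizer for a concept class $\mathcal{H}$ outputs
$$ A(S) \;\in\; \operatorname*{arg\,min}_{h \in \mathcal{H}} \ \frac{1}{n} \sum_{i=1}^{n} \mathbf{1}\{\, h(x_i) \neq y_i \,\}, $$
and, as in the reading above, the information revealed is measured by the mutual information $I(S; A(S))$ between the sample $S = ((x_1, y_1), \dots, (x_n, y_n))$ and the output hypothesis.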