Search Results for author: Jonathan Shafer

Found 7 papers, 0 papers with code

A Trichotomy for Transductive Online Learning

no code implementations · NeurIPS 2023 · Steve Hanneke, Shay Moran, Jonathan Shafer

We present new upper and lower bounds on the number of learner mistakes in the "transductive" online learning setting of Ben-David, Kushilevitz and Mansour (1997).
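As a rough illustration of the setting referenced above, here is a minimal Python sketch of the transductive online protocol: the full instance sequence is shown to the learner in advance, and labels are then revealed one at a time, with the learner predicting each label before seeing it. The halving-style learner and the threshold class below are illustrative choices of mine, not constructions from the paper.

```python
# Sketch of the transductive online learning protocol: instances known up
# front, labels revealed sequentially, learner mistakes counted.
from collections import Counter

def transductive_mistakes(instances, true_labels, hypothesis_class):
    """Count learner mistakes when the instance sequence is known in advance."""
    version_space = list(hypothesis_class)   # hypotheses still consistent so far
    mistakes = 0
    for t, x in enumerate(instances):
        # Predict by majority vote over the current version space.
        votes = Counter(h(x) for h in version_space)
        prediction = votes.most_common(1)[0][0]
        y = true_labels[t]                   # adversary reveals the true label
        if prediction != y:
            mistakes += 1
        # Keep only hypotheses consistent with the revealed label.
        version_space = [h for h in version_space if h(x) == y]
    return mistakes

# Example: threshold functions on {0, ..., 9} as the hypothesis class.
hypotheses = [lambda x, t=t: int(x >= t) for t in range(11)]
xs = list(range(10))                # the whole instance sequence, known up front
ys = [int(x >= 4) for x in xs]      # labels realizable by one of the thresholds
print(transductive_mistakes(xs, ys, hypotheses))
```

The majority-vote learner is only a convenient baseline; the bounds in the paper concern the best achievable number of mistakes over all learners.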

The Bayesian Stability Zoo

no code implementations · NeurIPS 2023 · Shay Moran, Hilla Schefler, Jonathan Shafer

We show that many definitions of stability found in the learning theory literature are equivalent to one another.

Learning Theory
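To make the flavor of "stability" concrete, the sketch below estimates how much the output distribution of a toy randomized learner changes when a single training example is replaced, measured in total variation distance. This is a generic illustration of one kind of distributional stability, not one of the paper's specific definitions, and all names in the code are hypothetical.

```python
# Illustrative distributional stability check: compare output distributions
# of a randomized rule on two datasets that differ in a single example.
import random
from collections import Counter

def randomized_learner(dataset, trials=10_000, rng=random):
    """Toy randomized rule: output the label of a uniformly random example."""
    counts = Counter(rng.choice(dataset) for _ in range(trials))
    return {label: c / trials for label, c in counts.items()}

def total_variation(p, q):
    """Total variation distance between two distributions on a finite set."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(z, 0.0) - q.get(z, 0.0)) for z in support)

data = [0, 1, 1, 0, 1, 1, 1, 0]
neighbor = data[:-1] + [1]          # differs from `data` in a single entry

p = randomized_learner(data)
q = randomized_learner(neighbor)
print(f"estimated TV distance between outputs: {total_variation(p, q):.3f}")
```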

Fantastic Generalization Measures are Nowhere to be Found

no code implementations · 24 Sep 2023 · Michael Gastpar, Ido Nachum, Jonathan Shafer, Thomas Weinberger

We study the notion of a generalization bound being uniformly tight, meaning that the difference between the bound and the population loss is small for all learning algorithms and all population distributions.

Generalization Bounds
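The summary's notion can be written in symbols. The display below is one plausible formalization, with notation of my own choosing; the paper's exact quantifiers and handling of failure probabilities may differ.

```latex
% One plausible formalization of "uniformly tight"; the paper's exact
% statement may differ.
\[
  \sup_{A}\;\sup_{\mathcal{D}}\;
  \mathbb{E}_{S \sim \mathcal{D}^n}
  \Bigl[\, B(A, S) - L_{\mathcal{D}}\bigl(A(S)\bigr) \Bigr]
  \;\le\; \varepsilon(n),
\]
```

where $B(A, S)$ is the value of the generalization bound for algorithm $A$ on sample $S$, $L_{\mathcal{D}}$ is the population loss, and uniform tightness asks that $\varepsilon(n)$ be small simultaneously for all algorithms $A$ and all population distributions $\mathcal{D}$.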

PAC Verification of Statistical Algorithms

no code implementations · 28 Nov 2022 · Saachi Mutreja, Jonathan Shafer

To showcase our proposed definition, our final result is a protocol for verifying statistical query algorithms that satisfy a combinatorial constraint on their queries.

PAC learning
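For context on the objects involved, the sketch below implements a toy statistical query (SQ) oracle: the algorithm only accesses data through approximate expectations of bounded query functions. The verification protocol itself is not reproduced here; the function names, the tolerance handling, and the closing remark about a verifier re-asking queries are my own illustrative assumptions.

```python
# Toy statistical query oracle: answers expectations of query functions over
# a sample, up to an additive tolerance.
import random

def sq_oracle(sample, query, tolerance, rng=random):
    """Answer E[query(z)] over the sample, perturbed within the tolerance."""
    empirical = sum(query(z) for z in sample) / len(sample)
    return empirical + rng.uniform(-tolerance, tolerance)

# Example: estimate P[x >= 0.5] for labeled points z = (x, y).
sample = [(random.random(), random.randint(0, 1)) for _ in range(1_000)]
answer = sq_oracle(sample, lambda z: float(z[0] >= 0.5), tolerance=0.05)
print(f"SQ answer: {answer:.3f}")

# A verifier holding its own (possibly smaller) sample could re-ask selected
# queries and accept the prover's claimed answers only if they agree within
# the tolerance.
```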

Fine-Grained Distribution-Dependent Learning Curves

no code implementations · 31 Aug 2022 · Olivier Bousquet, Steve Hanneke, Shay Moran, Jonathan Shafer, Ilya Tolstikhin

We solve this problem in a principled manner by introducing a combinatorial dimension called VCL that characterizes the best $d'$ for which $d'/n$ is a strong minimax lower bound.

Learning Theory · PAC learning
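One way to read the "strong minimax lower bound" phrase in the summary is the display below, written in notation of my own choosing; the paper's precise definition of a strong lower bound, and the class of distributions it quantifies over, may differ.

```latex
% A reading of "d'/n is a strong minimax lower bound", notation my own;
% the paper's exact statement may differ.
\[
  \inf_{A}\,\sup_{\mathcal{D}}\;
  \mathbb{E}_{S \sim \mathcal{D}^n}\bigl[ L_{\mathcal{D}}(A(S)) \bigr]
  \;\ge\; c \cdot \frac{d'}{n}
  \qquad \text{for every sample size } n,
\]
```

where $A$ ranges over learning algorithms, $\mathcal{D}$ over distributions realizable by the class, $L_{\mathcal{D}}$ is the population error, and the dimension VCL characterizes the best such $d'$.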

A Direct Sum Result for the Information Complexity of Learning

no code implementations · 16 Apr 2018 · Ido Nachum, Jonathan Shafer, Amir Yehudayoff

We introduce a class of functions of VC dimension $d$ over the domain $\mathcal{X}$ with information complexity at least $\Omega\left(d\log \log \frac{|\mathcal{X}|}{d}\right)$ bits for any consistent and proper algorithm (deterministic or random).
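To get a feel for the quoted lower bound, the snippet below evaluates its leading term $d \log \log \frac{|\mathcal{X}|}{d}$ for a few hypothetical values of $d$ and $|\mathcal{X}|$; the constant hidden in the $\Omega(\cdot)$ is not specified in the summary, so only the growth term is computed, with logarithms taken base 2 as an arbitrary choice.

```python
# Plugging illustrative numbers into the growth term d * log log (|X| / d)
# from the lower bound quoted above. Values of d and |X| are hypothetical.
import math

def info_complexity_term(d, domain_size):
    """Leading term of the Omega(d * log log(|X| / d)) bound, in bits (base 2)."""
    return d * math.log2(math.log2(domain_size / d))

for d, domain_size in [(10, 2**20), (10, 2**40), (100, 2**40)]:
    bits = info_complexity_term(d, domain_size)
    print(f"d = {d:>3}, |X| = 2^{int(math.log2(domain_size))}: ~{bits:.1f} bits")
```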

Learners that Use Little Information

no code implementations · 14 Oct 2017 · Raef Bassily, Shay Moran, Ido Nachum, Jonathan Shafer, Amir Yehudayoff

We discuss an approach that allows us to prove upper bounds on the amount of information that algorithms reveal about their inputs, and we also provide a lower bound by showing a simple concept class for which every (possibly randomized) empirical risk minimizer must reveal a lot of information.
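As a small illustration of why ERM outputs can carry sample information, here is a deterministic empirical risk minimizer over threshold functions whose output essentially reveals the smallest positively labeled example. The concept class and tie-breaking rule are my own illustrative choices, not the specific class used in the paper's lower bound.

```python
# A minimal deterministic empirical risk minimizer over threshold functions.
# Its output pins down the boundary between negative and positive examples,
# so it can leak detailed information about the sample -- the kind of
# behavior the lower bound above concerns.

def erm_threshold(sample):
    """Return a threshold t minimizing empirical error on (x, y) pairs."""
    candidates = sorted({x for x, _ in sample}) + [float("inf")]
    def empirical_error(t):
        return sum(int(x >= t) != y for x, y in sample)
    return min(candidates, key=empirical_error)

sample = [(0.12, 0), (0.35, 0), (0.51, 1), (0.89, 1)]
print(erm_threshold(sample))   # 0.51 -- reveals the smallest positive example
```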
