no code implementations • 10 Jul 2024 • Simone Fioravanti, Steve Hanneke, Shay Moran, Hilla Schefler, Iska Tsubari

This naturally raises the question of whether DP learnability continues to imply online learnability in more general scenarios: indeed, Alon, Hanneke, Holzman, and Moran (2021) explicitly leave it as an open question in the context of partial concept classes, and the same question is open in the general multiclass setting.

no code implementations • 22 Jun 2024 • Roi Livni, Shay Moran, Kobbi Nissim, Chirag Pabbaraju

Our framework extends well-studied notions of stability, including Differential Privacy ($k = 0$), differentially private learning with public data (where the $k$ public datapoints are fixed in advance), and stable sample compression (where the $k$ datapoints are selected adaptively by the algorithm).

no code implementations • 18 Jun 2024 • Liad Erez, Alon Cohen, Tomer Koren, Yishay Mansour, Shay Moran

We study multiclass PAC learning with bandit feedback, where inputs are classified into one of $K$ possible labels and feedback is limited to whether or not the predicted labels are correct.

no code implementations • 15 Jun 2024 • Marco Bressan, Nicolò Cesa-Bianchi, Emmanuel Esposito, Yishay Mansour, Shay Moran, Maximilian Thiessen

In particular, we consider the approximation of a binary concept $c$ by decision trees based on a simple class $\mathcal{H}$ (e.g., of bounded VC dimension), and use the tree depth as a measure of complexity.

no code implementations • 27 May 2024 • Zachary Chase, Bogdan Chornomaz, Steve Hanneke, Shay Moran, Amir Yehudayoff

In particular, we prove that for every $d$ there is a class with VC dimension $d$ that cannot be embedded in any extremal class of VC dimension smaller than exponential in $d$.
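
For readers new to the notion, the VC dimension can be checked directly from its definition on small finite classes. The following brute-force Python sketch (purely illustrative and exponential-time; it has nothing to do with the paper's embedding construction) makes "shattering" concrete:

    from itertools import combinations

    def vc_dimension(domain, concepts):
        """Brute-force VC dimension of a finite concept class.

        `concepts` is an iterable of sets (subsets of `domain`); a set S is
        shattered if every subset of S arises as C & S for some concept C.
        """
        concepts = [frozenset(c) for c in concepts]
        best = 0
        for r in range(1, len(domain) + 1):
            found = False
            for S in combinations(domain, r):
                patterns = {frozenset(c & set(S)) for c in concepts}
                if len(patterns) == 2 ** r:  # all 2^r labelings realized
                    best, found = r, True
                    break
            if not found:
                break
        return best

    # Example: thresholds on {0,1,2,3} have VC dimension 1.
    domain = [0, 1, 2, 3]
    thresholds = [set(range(t)) for t in range(5)]
    print(vc_dimension(domain, thresholds))  # -> 1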

no code implementations • 16 May 2024 • Liad Erez, Alon Cohen, Tomer Koren, Yishay Mansour, Shay Moran

We revisit the classical problem of multiclass classification with bandit feedback (Kakade, Shalev-Shwartz and Tewari, 2008), where each input is classified into one of $K$ possible labels and feedback is restricted to whether or not the predicted label is correct.
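
To make the feedback model concrete, here is a minimal Python sketch of a halving-style learner operating under bandit feedback. This is a textbook baseline, not the paper's algorithm; the point is that the learner only ever observes a single bit per round:

    def bandit_halving(hypotheses, stream):
        """Version-space learner under bandit feedback (a minimal sketch).

        `hypotheses` maps names to functions x -> label; `stream` yields
        (x, true_label) pairs, but the learner only observes whether its
        own prediction was correct -- never the true label itself.
        """
        version_space = dict(hypotheses)
        mistakes = 0
        for x, y in stream:
            # Predict the plurality label among surviving hypotheses.
            votes = {}
            for h in version_space.values():
                votes[h(x)] = votes.get(h(x), 0) + 1
            y_hat = max(votes, key=votes.get)
            if y_hat == y:  # bandit feedback: one bit
                # "Correct" reveals the true label; keep agreeing hypotheses.
                version_space = {n: h for n, h in version_space.items() if h(x) == y_hat}
            else:
                mistakes += 1
                # Eliminate hypotheses that made the same wrong prediction.
                version_space = {n: h for n, h in version_space.items() if h(x) != y_hat}
        return mistakes

    # Example: K = 3 labels, hypotheses are constant-label rules.
    H = {f"const_{k}": (lambda x, k=k: k) for k in range(3)}
    stream = [(i, 2) for i in range(10)]  # true label is always 2
    print(bandit_halving(H, stream))      # makes 2 mistakes on this stream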

no code implementations • 16 Mar 2024 • Steve Hanneke, Shay Moran, Tom Waknine

In classical PAC learning, both uniform convergence and sample compression satisfy a form of 'completeness': whenever a class is learnable, it can also be learned by a learning rule that adheres to these principles.

no code implementations • 12 Mar 2024 • Marek Elias, Haim Kaplan, Yishay Mansour, Shay Moran

Recent advances in algorithmic design show how to utilize predictions obtained by machine learning models from past and present data.

no code implementations • 29 Feb 2024 • Lee Cohen, Yishay Mansour, Shay Moran, Han Shao

We essentially show that any learnable class is also strategically learnable: we first consider a fully informative setting, where the manipulation structure (modeled by a manipulation graph $G^\star$) is known, and during training the learner has access to both the pre-manipulation and post-manipulation data.

no code implementations • 12 Feb 2024 • Yuval Filmus, Steve Hanneke, Idan Mehalel, Shay Moran

We demonstrate that the optimal mistake bound under bandit feedback is at most $O(k)$ times higher than the optimal mistake bound in the full information case, where $k$ represents the number of labels.

no code implementations • NeurIPS 2023 • Steve Hanneke, Shay Moran, Jonathan Shafer

We present new upper and lower bounds on the number of learner mistakes in the 'transductive' online learning setting of Ben-David, Kushilevitz and Mansour (1997).

no code implementations • 2 Nov 2023 • Zachary Chase, Bogdan Chornomaz, Shay Moran, Amir Yehudayoff

To offer a broader and more comprehensive view of our topological approach, we prove a local variant of the Borsuk-Ulam theorem in topology and a result in combinatorics concerning Kneser colorings.

no code implementations • NeurIPS 2023 • Shay Moran, Hilla Schefler, Jonathan Shafer

We show that many definitions of stability found in the learning theory literature are equivalent to one another.

no code implementations • 5 Jul 2023 • Steve Hanneke, Shay Moran, Qian Zhang

Pseudo-cubes are a structure rooted in the work of Daniely and Shalev-Shwartz (2014), recently shown by Brukhim, Carmon, Dinur, Moran, and Yehudayoff (2022) to characterize PAC learnability (i.e., uniform rates) for multiclass classification.

no code implementations • NeurIPS 2023 • Surbhi Goel, Steve Hanneke, Shay Moran, Abhishek Shetty

We study the problem of sequential prediction in the stochastic setting with an adversary that is allowed to inject clean-label adversarial (or out-of-distribution) examples.

no code implementations • 24 May 2023 • Niva Elkin-Koren, Uri Hacohen, Roi Livni, Shay Moran

In this work, we examine whether such algorithmic stability techniques are suitable to ensure the responsible use of generative models without inadvertently violating copyright laws.

no code implementations • 23 May 2023 • Alkis Kalavasis, Amin Karbasi, Shay Moran, Grigoris Velegkas

When two different parties use the same learning rule on their own data, how can we test whether the distributions of the two outcomes are similar?

no code implementations • 8 Apr 2023 • Noga Alon, Shay Moran, Hilla Schefler, Amir Yehudayoff

Learning $\mathcal{H}$ under pure DP is captured by the fractional clique number of $G$.

no code implementations • 7 Apr 2023 • Zachary Chase, Shay Moran, Amir Yehudayoff

Impagliazzo et al. showed how to boost any replicable algorithm so that it produces the same output with probability arbitrarily close to 1.

no code implementations • 30 Mar 2023 • Steve Hanneke, Shay Moran, Vinod Raman, Unique Subedi, Ambuj Tewari

We argue that the best expert has regret at most the Littlestone dimension relative to the best concept in the class.

no code implementations • 27 Mar 2023 • Shay Moran, Ohad Sharon, Iska Tsubari, Sivan Yosebashvili

This dimension is a variation of the classical Littlestone dimension with the difference that binary mistake trees are replaced with $(k+1)$-ary mistake trees, where $k$ is the number of labels in the list.

no code implementations • 27 Feb 2023 • Haim Kaplan, Yishay Mansour, Shay Moran, Kobbi Nissim, Uri Stemmer

In this work we introduce an interactive variant of joint differential privacy towards handling online processes in which existing privacy definitions seem too restrictive.

no code implementations • 27 Feb 2023 • Yuval Filmus, Steve Hanneke, Idan Mehalel, Shay Moran

We prove an analogous result for randomized learners: we show that the optimal expected mistake bound in learning a class $\mathcal{H}$ equals its randomized Littlestone dimension, which is the largest $d$ for which there exists a tree shattered by $\mathcal{H}$ whose average depth is $2d$.
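
The deterministic Littlestone dimension that this randomized variant (roughly) halves can be computed by brute force on small finite classes; by Littlestone's theorem it equals the optimal deterministic mistake bound. A minimal Python sketch of the standard recursion (illustration only, exponential time):

    def littlestone_dim(domain, concepts):
        """Brute-force Littlestone dimension of a finite class.

        `domain` is a list of points; `concepts` is a list of dicts point -> 0/1.
        Ldim >= d+1 iff some point x splits the class into two nonempty parts
        (by the label at x) each of Littlestone dimension >= d.
        """
        if len({tuple(c[x] for x in domain) for c in concepts}) <= 1:
            return 0
        best = 0
        for x in domain:
            h0 = [c for c in concepts if c[x] == 0]
            h1 = [c for c in concepts if c[x] == 1]
            if h0 and h1:
                rest = [z for z in domain if z != x]
                best = max(best, 1 + min(littlestone_dim(rest, h0),
                                         littlestone_dim(rest, h1)))
        return best

    # Example: thresholds {x < t : t = 0..4} on {0,1,2,3} have Ldim 2,
    # matching the optimal deterministic mistake bound for this class.
    domain = [0, 1, 2, 3]
    thresholds = [{x: int(x < t) for x in domain} for t in range(5)]
    print(littlestone_dim(domain, thresholds))  # -> 2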

no code implementations • 9 Dec 2022 • Maryanthe Malliaris, Shay Moran

This paper is about the surprising interaction of a foundational result from model theory about stability of theories, which seems to be inherently about the infinite, with algorithmic stability in learning.

no code implementations • 8 Dec 2022 • Olivier Bousquet, Haim Kaplan, Aryeh Kontorovich, Yishay Mansour, Shay Moran, Menachem Sadigurschi, Uri Stemmer

We construct a universally Bayes consistent learning rule that satisfies differential privacy (DP).

no code implementations • 6 Oct 2022 • Steve Hanneke, Amin Karbasi, Mohammad Mahmoody, Idan Mehalel, Shay Moran

In this work we aim to characterize the smallest achievable error $\epsilon=\epsilon(\eta)$ by the learner in the presence of such an adversary in both realizable and agnostic settings.

no code implementations • 31 Aug 2022 • Olivier Bousquet, Steve Hanneke, Shay Moran, Jonathan Shafer, Ilya Tolstikhin

We solve this problem in a principled manner, by introducing a combinatorial dimension called VCL that characterizes the best $d'$ for which $d'/n$ is a strong minimax lower bound.

1 code implementation • 1 Jul 2022 • Ron Amit, Baruch Epstein, Shay Moran, Ron Meir

We present a PAC-Bayes-style generalization bound which enables the replacement of the KL-divergence with a variety of Integral Probability Metrics (IPM).
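
For orientation, one common form of the classical KL-based PAC-Bayes bound (following McAllester and Maurer; exact constants vary across statements in the literature) reads: with probability at least $1-\delta$ over an i.i.d. sample of size $n$, simultaneously for every posterior $\rho$ and a fixed prior $\pi$,

$$\mathbb{E}_{h \sim \rho}\, L(h) \;\leq\; \mathbb{E}_{h \sim \rho}\, \hat{L}(h) \;+\; \sqrt{\frac{\mathrm{KL}(\rho \,\|\, \pi) + \ln(2\sqrt{n}/\delta)}{2n}}.$$

It is the $\mathrm{KL}(\rho \,\|\, \pi)$ term above that the paper replaces with an Integral Probability Metric.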

no code implementations • 29 Jun 2022 • Mahdi Haghifam, Shay Moran, Daniel M. Roy, Gintare Karolina Dziugaite

These leave-one-out variants of the conditional mutual information (CMI) of an algorithm (Steinke and Zakynthinou, 2020) are also seen to control the mean generalization error of learning algorithms with bounded loss functions.

no code implementations • 9 Jun 2022 • Yuval Filmus, Idan Mehalel, Shay Moran

Given a learning task where the data is distributed among several parties, communication is one of the fundamental resources which the parties would like to minimize.

no code implementations • 10 Apr 2022 • Gal Yona, Shay Moran, Gal Elidan, Amir Globerson

We show that there is a natural class where this approach is sub-optimal, and that there is a more comparison-efficient active learning scheme.

no code implementations • 3 Mar 2022 • Nataly Brukhim, Daniel Carmon, Irit Dinur, Shay Moran, Amir Yehudayoff

This work resolves this problem: we characterize multiclass PAC learnability through the DS dimension, a combinatorial dimension defined by Daniely and Shalev-Shwartz (2014).

no code implementations • 10 Feb 2022 • Olivier Bousquet, Amit Daniely, Haim Kaplan, Yishay Mansour, Shay Moran, Uri Stemmer

Our transformation readily implies monotone learners in a variety of contexts: for example, it extends Pestov's result to classification tasks with an arbitrary number of labels.

no code implementations • NeurIPS 2021 • Nataly Brukhim, Elad Hazan, Shay Moran, Indraneel Mukherjee, Robert E. Schapire

Here, we focus on an especially natural formulation in which the weak hypotheses are assumed to belong to an "easy-to-learn" base class, and the weak learner is an agnostic PAC learner for that class with respect to the standard classification loss.

no code implementations • 19 Nov 2021 • Kunal Dutta, Arijit Ghosh, Shay Moran

We study the connections between three seemingly different combinatorial structures: "uniform" brackets in statistics and probability theory, "containers" in online and distributed learning theory, and "combinatorial Macbeath regions", or Mnets, in discrete and computational geometry.

no code implementations • NeurIPS 2021 • Mahdi Haghifam, Gintare Karolina Dziugaite, Shay Moran, Daniel M. Roy

We further show that an inherent limitation of proper learning of VC classes contradicts the existence of a proper learner with constant CMI, and it implies a negative resolution to an open problem of Steinke and Zakynthinou (2020).

no code implementations • 17 Aug 2021 • Olivier Bousquet, Mark Braverman, Klim Efremenko, Gillat Kol, Shay Moran

We derive an optimal $2$-approximation learning strategy for the Hypothesis Selection problem, outputting $q$ such that $\mathsf{TV}(p, q) \leq 2 \cdot \mathrm{opt} + \epsilon$, with a (nearly) optimal sample complexity of $\tilde{O}(\log n/\epsilon^2)$.
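
As a toy illustration of hypothesis selection (not the paper's strategy, which attains the optimal factor 2 with $\tilde{O}(\log n/\epsilon^2)$ samples), the Python sketch below does naive minimum-distance selection against the empirical distribution. This only makes sense over small discrete supports, where the Scheffé-set machinery used in the general case is unnecessary:

    from collections import Counter

    def tv(p, q):
        """Total variation distance between two discrete distributions (dicts)."""
        support = set(p) | set(q)
        return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

    def select_hypothesis(sample, candidates):
        """Return the candidate closest in TV to the empirical distribution."""
        n = len(sample)
        emp = {x: c / n for x, c in Counter(sample).items()}
        return min(candidates, key=lambda q: tv(emp, q))

    # Example: two candidate coins; the data was drawn from a 0.8-coin.
    candidates = [{"H": 0.5, "T": 0.5}, {"H": 0.8, "T": 0.2}]
    sample = ["H"] * 78 + ["T"] * 22
    print(select_hypothesis(sample, candidates))  # -> {'H': 0.8, 'T': 0.2}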

no code implementations • 12 Aug 2021 • Maryanthe Malliaris, Shay Moran

We use algorithmic methods from online learning to revisit a key idea from the interaction of model theory and combinatorics, the existence of large "indivisible" sets, called "$\epsilon$-excellent," in $k$-edge stable graphs (equivalently, Littlestone classes).

no code implementations • 18 Jul 2021 • Noga Alon, Steve Hanneke, Ron Holzman, Shay Moran

In fact we exhibit easy-to-learn partial concept classes which provably cannot be captured by the traditional PAC theory.

no code implementations • 2 Feb 2021 • Steve Hanneke, Roi Livni, Shay Moran

More precisely, given any concept class $C$ and any hypothesis class $H$, we provide nearly tight bounds (up to a log factor) on the optimal mistake bounds for online learning $C$ using predictors from $H$. Our bound yields an exponential improvement over the previously best known bound by Chase and Freitag (2020).

no code implementations • 22 Jan 2021 • Noga Alon, Omri Ben-Eliezer, Yuval Dagan, Shay Moran, Moni Naor, Eylon Yogev

Laws of large numbers guarantee that given a large enough sample from some population, the measure of any fixed sub-population is well-estimated by its frequency in the sample.

no code implementations • NeurIPS 2020 • Olivier Bousquet, Roi Livni, Shay Moran

We study the sample complexity of private synthetic data generation over an unbounded-size class of statistical queries, and show that any class that is privately proper PAC learnable admits a private synthetic data generator (perhaps non-efficient).

no code implementations • 9 Nov 2020 • Olivier Bousquet, Steve Hanneke, Shay Moran, Ramon van Handel, Amir Yehudayoff

How quickly can a given class of concepts be learned from examples?

no code implementations • 5 Nov 2020 • Mahdi Haghifam, Gintare Karolina Dziugaite, Shay Moran, Daniel M. Roy

We provide a negative resolution to a conjecture of Steinke and Zakynthinou (2020a), by showing that their bound on the conditional mutual information (CMI) of proper learners of Vapnik–Chervonenkis (VC) classes cannot be improved from $d \log n + 2$ to $O(d)$, where $n$ is the number of i.i.d. samples.

no code implementations • NeurIPS 2020 • Raef Bassily, Shay Moran, Anupama Nandi

Inspired by the above example, we consider a model in which the population $\mathcal{D}$ is a mixture of two sub-populations: a private sub-population $\mathcal{D}_{\mathsf{priv}}$ of private and sensitive data, and a public sub-population $\mathcal{D}_{\mathsf{pub}}$ of data with no privacy concerns.

no code implementations • NeurIPS 2020 • Roi Livni, Shay Moran

PAC-Bayes is a useful framework for deriving generalization bounds which was introduced by McAllester ('98).

no code implementations • 24 May 2020 • Olivier Bousquet, Steve Hanneke, Shay Moran, Nikita Zhivotovskiy

It has been recently shown by Hanneke (2016) that the optimal sample complexity of PAC learning for any VC class $C$ is achieved by a particular improper learning algorithm, which outputs a specific majority-vote of hypotheses in $C$. This leaves the question of when this bound can be achieved by proper learning algorithms, which are restricted to always output a hypothesis from $C$. In this paper we aim to characterize the classes for which the optimal sample complexity can be achieved by a proper learning algorithm.

no code implementations • ICML 2020 • Raef Bassily, Albert Cheu, Shay Moran, Aleksandar Nikolov, Jonathan Ullman, Zhiwei Steven Wu

In comparison, with only private samples, this problem cannot be solved even for simple query classes with VC-dimension one, and without any private samples, a larger public sample of size $d/\alpha^2$ is needed.

no code implementations • 10 Mar 2020 • Noga Alon, Amos Beimel, Shay Moran, Uri Stemmer

Let $\mathcal{H}$ be a class of boolean functions and consider a composed class $\mathcal{H}'$ that is derived from $\mathcal{H}$ using some arbitrary aggregation rule (for example, $\mathcal{H}'$ may be the class of all 3-wise majority-votes of functions in $\mathcal{H}$).
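
The 3-wise majority-vote example from the abstract is easy to write down explicitly. A small Python sketch of the composition (nothing here touches the privacy question the paper studies):

    from itertools import combinations

    def majority3(h1, h2, h3):
        """3-wise majority vote of boolean functions."""
        return lambda x: int(h1(x) + h2(x) + h3(x) >= 2)

    def compose_majority3(base_class):
        """The composed class H' of all 3-wise majority votes over H."""
        return [majority3(*trio) for trio in combinations(base_class, 3)]

    # Example: H = three threshold functions on the integers.
    H = [lambda x, t=t: int(x >= t) for t in (1, 2, 3)]
    H_prime = compose_majority3(H)
    print([h(2) for h in H_prime])  # majority of [1, 1, 0] -> [1]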

no code implementations • NeurIPS 2020 • Nataly Brukhim, Xinyi Chen, Elad Hazan, Shay Moran

Boosting is a widely used machine learning approach based on the idea of aggregating weak learning rules.

no code implementations • 1 Mar 2020 • Mark Bun, Roi Livni, Shay Moran

We prove that every concept class with finite Littlestone dimension can be learned by an (approximate) differentially-private algorithm.

1 code implementation • 31 Jan 2020 • Noga Alon, Alon Gonen, Elad Hazan, Shay Moran

(ii) Expressivity: Which tasks can be learned by boosting weak hypotheses from a bounded VC class?

no code implementations • NeurIPS 2019 • Noga Alon, Raef Bassily, Shay Moran

We consider learning problems where the training set consists of two types of examples: private and public.

no code implementations • 8 Sep 2019 • Mark Braverman, Gillat Kol, Shay Moran, Raghuvansh R. Saxena

For Convex Set Disjointness (and the equivalent task of distributed LP feasibility) we derive upper and lower bounds of $\tilde O(d^2 \log n)$ and $\Omega(d \log n)$.

1 code implementation • NeurIPS 2019 • Akshay Balsubramani, Sanjoy Dasgupta, Yoav Freund, Shay Moran

We introduce a variant of the $k$-nearest neighbor classifier in which $k$ is chosen adaptively for each query, rather than supplied as a parameter.
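
A minimal Python sketch of the adaptive idea: grow $k$ per query until one label leads by a fixed margin. The paper's actual rule and its guarantees differ in the details (in particular, it can abstain), so the margin rule below is an assumption made for illustration:

    import numpy as np

    def adaptive_knn_predict(X_train, y_train, x, margin=2):
        """Grow k until one label leads the vote by `margin`; return (label, k)."""
        order = np.argsort(np.linalg.norm(X_train - x, axis=1))
        counts = {}
        for k, idx in enumerate(order, start=1):
            label = y_train[idx]
            counts[label] = counts.get(label, 0) + 1
            top = sorted(counts.values(), reverse=True)
            lead = top[0] - (top[1] if len(top) > 1 else 0)
            if lead >= margin:
                return max(counts, key=counts.get), k
        return max(counts, key=counts.get), len(order)

    # Example with a clearly separated query point.
    X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1]])
    y = ["a", "a", "a", "b", "b"]
    print(adaptive_knn_predict(X, y, np.array([0.05])))  # -> ('a', 2)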

no code implementations • NeurIPS 2019 • Alon Gonen, Elad Hazan, Shay Moran

We study the relationship between the notions of differentially private learning and online learning in games.

no code implementations • 27 Feb 2019 • Amos Beimel, Shay Moran, Kobbi Nissim, Uri Stemmer

The building block for this learner is a differentially private algorithm for locating an approximate center point of $m>\mathrm{poly}(d, 2^{\log^*|X|})$ points -- a high dimensional generalization of the median function.

no code implementations • NeurIPS 2019 • Alon Cohen, Avinatan Hassidim, Haim Kaplan, Yishay Mansour, Shay Moran

(ii) In the second variant it is assumed that before the process starts, the algorithm has access to a training set of $n$ items drawn independently from the same unknown distribution (e.g., data of candidates from previous recruitment seasons).

no code implementations • 10 Feb 2019 • Olivier Bousquet, Daniel Kane, Shay Moran

We complement and extend this result by showing that: (i) the factor 3 cannot be improved if one restricts the algorithm to output a density from $\mathcal{Q}$, and (ii) if one allows the algorithm to output arbitrary densities (e.g., a mixture of densities from $\mathcal{Q}$), then the approximation factor can be reduced to 2, which is optimal.
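
The factor 3 in (i) is classically achieved by the minimum-distance (Yatracos/Scheffé) estimate, and the triangle-inequality chain behind it is short. Sketching the standard argument (not the paper's proof): let $\mathcal{A}$ be the Yatracos class of $\mathcal{Q}$ with $d_{\mathcal{A}}(p,q) = \sup_{A \in \mathcal{A}} |p(A) - q(A)|$, let $\hat p$ satisfy $d_{\mathcal{A}}(p, \hat p) \leq \epsilon$, let $q^*$ attain $\mathrm{TV}(p, q^*) = \mathrm{opt}$, and let $\hat q = \arg\min_{q \in \mathcal{Q}} d_{\mathcal{A}}(q, \hat p)$. Using $\mathrm{TV}(q^*, \hat q) = d_{\mathcal{A}}(q^*, \hat q)$ for members of $\mathcal{Q}$,

$$d_{\mathcal{A}}(\hat q, \hat p) \;\leq\; d_{\mathcal{A}}(q^*, \hat p) \;\leq\; d_{\mathcal{A}}(q^*, p) + d_{\mathcal{A}}(p, \hat p) \;\leq\; \mathrm{opt} + \epsilon,$$

$$\mathrm{TV}(p, \hat q) \;\leq\; \mathrm{opt} + d_{\mathcal{A}}(q^*, \hat q) \;\leq\; \mathrm{opt} + d_{\mathcal{A}}(q^*, \hat p) + d_{\mathcal{A}}(\hat p, \hat q) \;\leq\; 3\,\mathrm{opt} + 2\epsilon.$$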

no code implementations • 9 Feb 2019 • Olivier Bousquet, Roi Livni, Shay Moran

We study the sample complexity of private synthetic data generation over an unbounded-size class of statistical queries, and show that any class that is privately proper PAC learnable admits a private synthetic data generator (perhaps non-efficient).

no code implementations • Nature Machine Intelligence 2019 • Shai Ben-David, Pavel Hrubeš, Shay Moran, Amir Shpilka, Amir Yehudayoff

We show that, in some cases, a solution to the ‘estimating the maximum’ problem is equivalent to the continuum hypothesis.

no code implementations • 5 Dec 2018 • Jérémie Chalopin, Victor Chepoi, Shay Moran, Manfred K. Warmuth

On the positive side we present a new construction of an unlabeled sample compression scheme for maximum classes.

no code implementations • 14 Jun 2018 • Shay Moran, Ido Nachum, Itai Panasoff, Amir Yehudayoff

We study and provide exposition to several phenomena that are related to the perceptron's compression.

no code implementations • 14 Jun 2018 • Zeev Dvir, Shay Moran

We show that any family of subsets $A\subseteq 2^{[n]}$ satisfies $\lvert A\rvert \leq O\bigl(n^{\lceil{d}/{2}\rceil}\bigr)$, where $d$ is the VC dimension of $\{S\triangle T \,\vert\, S, T\in A\}$, and $\triangle$ is the symmetric difference operator.
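
The construction in the statement is easy to materialize for small families. A Python sketch (purely illustrative) building the symmetric-difference class on a toy example:

    def symmetric_difference_class(family):
        """The class {S (symm. diff.) T : S, T in family} from the statement."""
        family = [frozenset(s) for s in family]
        return {s ^ t for s in family for t in family}

    # Example: A = all singletons of {1,...,4}. The symmetric-difference
    # class is the empty set plus all 6 pairs, whose VC dimension is d = 2,
    # so the theorem bounds |A| by O(n^{ceil(2/2)}) = O(n); indeed |A| = n = 4.
    A = [{i} for i in range(1, 5)]
    print(len(symmetric_difference_class(A)))  # -> 7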

no code implementations • 4 Jun 2018 • Noga Alon, Roi Livni, Maryanthe Malliaris, Shay Moran

We show that every approximately differentially private learning algorithm (possibly improper) for a class $H$ with Littlestone dimension $d$ requires $\Omega\bigl(\log^*(d)\bigr)$ examples.

no code implementations • 16 Nov 2017 • Daniel M. Kane, Roi Livni, Shay Moran, Amir Yehudayoff

To naturally fit into the framework of learning theory, the players can send each other examples (as well as bits) where each example/bit costs one unit of communication.

no code implementations • 14 Nov 2017 • Shai Ben-David, Pavel Hrubes, Shay Moran, Amir Shpilka, Amir Yehudayoff

We consider the following statistical estimation problem: given a family $F$ of real-valued functions over some domain $X$ and an i.i.d. sample drawn from an unknown distribution over $X$.

no code implementations • 14 Oct 2017 • Raef Bassily, Shay Moran, Ido Nachum, Jonathan Shafer, Amir Yehudayoff

We discuss an approach that allows us to prove upper bounds on the amount of information that algorithms reveal about their inputs, and also provide a lower bound by showing a simple concept class for which every (possibly randomized) empirical risk minimizer must reveal a lot of information.

no code implementations • NeurIPS 2017 • Noga Alon, Moshe Babaioff, Yannai A. Gonczarowski, Yishay Mansour, Shay Moran, Amir Yehudayoff

In this work we derive a variant of the classic Glivenko-Cantelli Theorem, which asserts uniform convergence of the empirical Cumulative Distribution Function (CDF) to the CDF of the underlying distribution.
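
A quick numerical illustration of the classical statement being refined, using the uniform distribution on $[0,1]$, whose true CDF is $F(t) = t$ (the DKW inequality quantifies the $O(1/\sqrt{n})$ rate visible below):

    import numpy as np

    rng = np.random.default_rng(0)

    def ecdf_sup_gap(sample):
        """Sup-norm gap between the empirical CDF of a uniform(0,1) sample
        and the true CDF F(t) = t; the supremum of the deviation of a
        right-continuous step function is attained at the sample points."""
        x = np.sort(sample)
        n = len(x)
        upper = np.arange(1, n + 1) / n - x  # F_n(x_i) - F(x_i)
        lower = x - np.arange(0, n) / n      # F(x_i) - F_n(x_i^-)
        return max(upper.max(), lower.max())

    # The gap shrinks like O(1/sqrt(n)), uniformly over t.
    for n in (100, 10_000, 1_000_000):
        print(n, round(ecdf_sup_gap(rng.uniform(size=n)), 4))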

no code implementations • 4 May 2017 • Daniel M. Kane, Shachar Lovett, Shay Moran

We construct near optimal linear decision trees for a variety of decision problems in combinatorics and discrete geometry.

no code implementations • 11 Apr 2017 • Daniel M. Kane, Shachar Lovett, Shay Moran, Jiapeng Zhang

We identify a combinatorial dimension, called the "inference dimension", that captures the query complexity when each additional query is determined by $O(1)$ examples (such as comparison queries, each of which is determined by the two compared examples).

no code implementations • NeurIPS 2016 • Ofir David, Shay Moran, Amir Yehudayoff

This work continues the study of the relationship between sample compression schemes and statistical learning, which has been mostly investigated within the framework of binary classification.

no code implementations • 5 Nov 2016 • Yuval Dagan, Yuval Filmus, Ariel Gabizon, Shay Moran

An optimal strategy for the "20 questions" game is given by a Huffman code for $\pi$: Bob's questions reveal the codeword for $x$ bit by bit.
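
The Huffman construction itself is short enough to state in code. A self-contained Python sketch (the paper's contribution concerns restricted question sets, which this snippet does not model):

    import heapq
    from itertools import count

    def huffman_code(pi):
        """Build a Huffman code for distribution `pi` (dict: symbol -> prob).

        In the '20 questions' view, Alice holds x ~ pi and Bob's t-th
        question asks for the t-th bit of the codeword of x.
        """
        tiebreak = count()  # avoids comparing dicts on probability ties
        heap = [(p, next(tiebreak), {s: ""}) for s, p in pi.items()]
        heapq.heapify(heap)
        while len(heap) > 1:
            p1, _, c1 = heapq.heappop(heap)
            p2, _, c2 = heapq.heappop(heap)
            merged = {s: "0" + w for s, w in c1.items()}
            merged.update({s: "1" + w for s, w in c2.items()})
            heapq.heappush(heap, (p1 + p2, next(tiebreak), merged))
        return heap[0][2]

    print(huffman_code({"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}))
    # -> {'a': '0', 'b': '10', 'c': '110', 'd': '111'} (up to relabeling)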

no code implementations • 12 Oct 2016 • Ofir David, Shay Moran, Amir Yehudayoff

(iv) A dichotomy for sample compression in multiclass categorization problems: If a non-trivial compression exists then a compression of logarithmic size exists.

no code implementations • 30 May 2015 • Shay Moran, Manfred K. Warmuth

We consider a generalization of maximum classes called extremal classes.

no code implementations • 24 Mar 2015 • Shay Moran, Amir Yehudayoff

Sample compression schemes were defined by Littlestone and Warmuth (1986) as an abstraction of the structure underlying many learning algorithms.
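
The textbook example that makes the definition concrete is the size-2 scheme for intervals on the line: compress a realizable labeled sample to its leftmost and rightmost positive points, and reconstruct the interval they span. A minimal Python sketch:

    def compress(sample):
        """Keep the leftmost and rightmost positively labeled points.
        `sample` is a list of (x, label) pairs realizable by some interval."""
        positives = [x for x, y in sample if y == 1]
        return (min(positives), max(positives)) if positives else ()

    def reconstruct(kept):
        """Decompress: the interval spanned by the kept points (empty if none)."""
        if not kept:
            return lambda x: 0
        lo, hi = kept
        return lambda x: int(lo <= x <= hi)

    # The reconstructed hypothesis labels the *whole* sample correctly.
    sample = [(0.5, 0), (1.0, 1), (2.0, 1), (3.5, 0), (1.7, 1)]
    h = reconstruct(compress(sample))
    assert all(h(x) == y for x, y in sample)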

no code implementations • 22 Feb 2015 • Shay Moran, Amir Shpilka, Avi Wigderson, Amir Yehudayoff

We further construct sample compression schemes of size $k$ for $C$, with additional information of $k \log(k)$ bits.
