Search Results for author: Shay Moran

Found 71 papers, 3 papers with code

Teaching and compressing for low VC-dimension

no code implementations 22 Feb 2015 Shay Moran, Amir Shpilka, Avi Wigderson, Amir Yehudayoff

We further construct sample compression schemes of size $k$ for $C$, with additional information of $k \log(k)$ bits.

Sample compression schemes for VC classes

no code implementations 24 Mar 2015 Shay Moran, Amir Yehudayoff

Sample compression schemes were defined by Littlestone and Warmuth (1986) as an abstraction of the structure underlying many learning algorithms.

Labeled compression schemes for extremal classes

no code implementations 30 May 2015 Shay Moran, Manfred K. Warmuth

We consider a generalization of maximum classes called extremal classes.

On statistical learning via the lens of compression

no code implementations 12 Oct 2016 Ofir David, Shay Moran, Amir Yehudayoff

(iv) A dichotomy for sample compression in multiclass categorization problems: If a non-trivial compression exists then a compression of logarithmic size exists.

Binary Classification Learning Theory

Twenty (simple) questions

no code implementations 5 Nov 2016 Yuval Dagan, Yuval Filmus, Ariel Gabizon, Shay Moran

An optimal strategy for the "20 questions" game is given by a Huffman code for $\pi$: Bob's questions reveal the codeword for $x$ bit by bit.
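
As a toy illustration of this strategy (a sketch, not code from the paper), the snippet below builds a Huffman code for a small distribution $\pi$; Bob's $t$-th question then asks for the $t$-th bit of the codeword of $x$, so the expected number of questions equals the expected codeword length.

```python
import heapq

def huffman_code(pi):
    """Binary Huffman code for a distribution pi = {item: probability}."""
    # Heap entries: (probability, tie-breaker, tuple of items in this subtree).
    heap = [(p, i, (item,)) for i, (item, p) in enumerate(pi.items())]
    heapq.heapify(heap)
    code = {item: "" for item in pi}
    tiebreak = len(heap)
    while len(heap) > 1:
        p0, _, left = heapq.heappop(heap)
        p1, _, right = heapq.heappop(heap)
        for item in left:   # items merged on the 0-side get prefix bit 0
            code[item] = "0" + code[item]
        for item in right:  # items merged on the 1-side get prefix bit 1
            code[item] = "1" + code[item]
        heapq.heappush(heap, (p0 + p1, tiebreak, left + right))
        tiebreak += 1
    return code

# Bob's questions for x reveal code[x] bit by bit.
pi = {"a": 0.5, "b": 0.25, "c": 0.125, "d": 0.125}
print(huffman_code(pi))  # {'a': '0', 'b': '10', 'c': '110', 'd': '111'}
```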

Supervised learning through the lens of compression

no code implementations NeurIPS 2016 Ofir David, Shay Moran, Amir Yehudayoff

This work continues the study of the relationship between sample compression schemes and statistical learning, which has been mostly investigated within the framework of binary classification.

Binary Classification

Active classification with comparison queries

no code implementations 11 Apr 2017 Daniel M. Kane, Shachar Lovett, Shay Moran, Jiapeng Zhang

We identify a combinatorial dimension, called the \emph{inference dimension}, that captures the query complexity when each additional query is determined by $O(1)$ examples (such as comparison queries, each of which is determined by the two compared examples).

Active Learning Classification +1

Near-optimal linear decision trees for k-SUM and related problems

no code implementations 4 May 2017 Daniel M. Kane, Shachar Lovett, Shay Moran

We construct near-optimal linear decision trees for a variety of decision problems in combinatorics and discrete geometry.

Submultiplicative Glivenko-Cantelli and Uniform Convergence of Revenues

no code implementations NeurIPS 2017 Noga Alon, Moshe Babaioff, Yannai A. Gonczarowski, Yishay Mansour, Shay Moran, Amir Yehudayoff

In this work we derive a variant of the classic Glivenko-Cantelli Theorem, which asserts uniform convergence of the empirical Cumulative Distribution Function (CDF) to the CDF of the underlying distribution.

Learners that Use Little Information

no code implementations 14 Oct 2017 Raef Bassily, Shay Moran, Ido Nachum, Jonathan Shafer, Amir Yehudayoff

We discuss an approach that allows us to prove upper bounds on the amount of information that algorithms reveal about their inputs, and also provide a lower bound by showing a simple concept class for which every (possibly randomized) empirical risk minimizer must reveal a lot of information.

A learning problem that is independent of the set theory ZFC axioms

no code implementations 14 Nov 2017 Shai Ben-David, Pavel Hrubes, Shay Moran, Amir Shpilka, Amir Yehudayoff

We consider the following statistical estimation problem: given a family $F$ of real-valued functions over some domain $X$ and an i.i.d.

General Classification PAC learning

On Communication Complexity of Classification Problems

no code implementations 16 Nov 2017 Daniel M. Kane, Roi Livni, Shay Moran, Amir Yehudayoff

To naturally fit into the framework of learning theory, the players can send each other examples (as well as bits) where each example/bit costs one unit of communication.

BIG-bench Machine Learning Classification +2

Private PAC learning implies finite Littlestone dimension

no code implementations 4 Jun 2018 Noga Alon, Roi Livni, Maryanthe Malliaris, Shay Moran

We show that every approximately differentially private learning algorithm (possibly improper) for a class $H$ with Littlestone dimension $d$ requires $\Omega\bigl(\log^*(d)\bigr)$ examples.

Open-Ended Question Answering PAC learning

On the Perceptron's Compression

no code implementations 14 Jun 2018 Shay Moran, Ido Nachum, Itai Panasoff, Amir Yehudayoff

We study and provide exposition to several phenomena that are related to the perceptron's compression.

A Sauer-Shelah-Perles Lemma for Sumsets

no code implementations 14 Jun 2018 Zeev Dvir, Shay Moran

We show that any family of subsets $A\subseteq 2^{[n]}$ satisfies $\lvert A\rvert \leq O\bigl(n^{\lceil{d}/{2}\rceil}\bigr)$, where $d$ is the VC dimension of $\{S\triangle T \,\vert\, S, T\in A\}$, and $\triangle$ is the symmetric difference operator.
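
For intuition, the quantity appearing in this bound can be brute-forced on a tiny family; the sketch below (an illustration on toy data, not code from the paper) computes the VC dimension of the symmetric-difference family $\{S\triangle T \,\vert\, S, T\in A\}$ by exhaustively searching for shattered sets.

```python
from itertools import combinations

def vc_dim(family, ground):
    """Brute-force VC dimension of a set family over a finite ground set."""
    best = 0
    for d in range(1, len(ground) + 1):
        shattered = False
        for pts in combinations(ground, d):
            patterns = {tuple(p in s for p in pts) for s in family}
            if len(patterns) == 2 ** d:
                shattered = True
                break
        if not shattered:
            break  # no set of size d is shattered, hence none larger is either
        best = d
    return best

# A toy family A over the ground set {0, 1, 2} and its symmetric differences.
A = [frozenset(s) for s in ({0}, {1}, {0, 1}, {2})]
sym_diff = {S ^ T for S in A for T in A}
print(len(A), vc_dim(sym_diff, range(3)))  # |A| is bounded by O(n^{ceil(d/2)})
```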

Learnability can be undecidable

no code implementations Nature Machine Intelligence 2019 Shai Ben-David, Pavel Hrubeš, Shay Moran, Amir Shpilka, Amir Yehudayoff

We show that, in some cases, a solution to the ‘estimating the maximum’ problem is equivalent to the continuum hypothesis.

BIG-bench Machine Learning PAC learning

Synthetic Data Generators: Sequential and Private

no code implementations 9 Feb 2019 Olivier Bousquet, Roi Livni, Shay Moran

We study the sample complexity of private synthetic data generation over an unbounded sized class of statistical queries, and show that any class that is privately proper PAC learnable admits a private synthetic data generator (perhaps non-efficient).

Synthetic Data Generation

The Optimal Approximation Factor in Density Estimation

no code implementations 10 Feb 2019 Olivier Bousquet, Daniel Kane, Shay Moran

We complement and extend this result by showing that: (i) the factor 3 cannot be improved if one restricts the algorithm to output a density from $\mathcal{Q}$, and (ii) if one allows the algorithm to output arbitrary densities (e.g., a mixture of densities from $\mathcal{Q}$), then the approximation factor can be reduced to 2, which is optimal.

Density Estimation

Learning to Screen

no code implementations NeurIPS 2019 Alon Cohen, Avinatan Hassidim, Haim Kaplan, Yishay Mansour, Shay Moran

(ii) In the second variant it is assumed that before the process starts, the algorithm has access to a training set of $n$ items drawn independently from the same unknown distribution (e.g., data of candidates from previous recruitment seasons).

Private Center Points and Learning of Halfspaces

no code implementations 27 Feb 2019 Amos Beimel, Shay Moran, Kobbi Nissim, Uri Stemmer

The building block for this learner is a differentially private algorithm for locating an approximate center point of $m>\mathrm{poly}(d, 2^{\log^*|X|})$ points -- a high-dimensional generalization of the median function.

Private Learning Implies Online Learning: An Efficient Reduction

no code implementations NeurIPS 2019 Alon Gonen, Elad Hazan, Shay Moran

We study the relationship between the notions of differentially private learning and online learning in games.

Open-Ended Question Answering

An adaptive nearest neighbor rule for classification

1 code implementation NeurIPS 2019 Akshay Balsubramani, Sanjoy Dasgupta, Yoav Freund, Shay Moran

We introduce a variant of the $k$-nearest neighbor classifier in which $k$ is chosen adaptively for each query, rather than supplied as a parameter.
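
The following is a minimal sketch of the general idea, assuming binary labels in $\{-1, +1\}$ and a simple confidence-based stopping rule; it is only illustrative and is not the exact rule analyzed in the paper.

```python
import numpy as np

def adaptive_nn_predict(X_train, y_train, x_query, a=1.0):
    """Toy adaptive nearest-neighbor rule (illustrative sketch only).

    Grows the neighborhood of the query point and stops as soon as the
    empirical label bias among the k nearest neighbors exceeds a simple
    ~1/sqrt(k) fluctuation term controlled by the parameter `a`.
    """
    order = np.argsort(np.linalg.norm(X_train - x_query, axis=1))
    labels = y_train[order]  # labels assumed to be in {-1, +1}
    for k in range(1, len(labels) + 1):
        bias = labels[:k].mean()
        if abs(bias) > a / np.sqrt(k):  # majority is "significant" at this k
            return int(np.sign(bias)), k
    return int(np.sign(labels.mean())) or 1, len(labels)
```

The returned $k$ varies from query to query, which is the defining feature of such adaptive rules.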

Classification General Classification +1

Convex Set Disjointness, Distributed Learning of Halfspaces, and LP Feasibility

no code implementations 8 Sep 2019 Mark Braverman, Gillat Kol, Shay Moran, Raghuvansh R. Saxena

For Convex Set Disjointness (and the equivalent task of distributed LP feasibility) we derive upper and lower bounds of $\tilde O(d^2\log n)$ and $\Omega(d\log n)$.

Distributed Optimization

Limits of Private Learning with Access to Public Data

no code implementations NeurIPS 2019 Noga Alon, Raef Bassily, Shay Moran

We consider learning problems where the training set consists of two types of examples: private and public.

Boosting Simple Learners

1 code implementation 31 Jan 2020 Noga Alon, Alon Gonen, Elad Hazan, Shay Moran

(ii) Expressivity: Which tasks can be learned by boosting weak hypotheses from a bounded VC class?

An Equivalence Between Private Classification and Online Prediction

no code implementations 1 Mar 2020 Mark Bun, Roi Livni, Shay Moran

We prove that every concept class with finite Littlestone dimension can be learned by an (approximate) differentially-private algorithm.

Classification General Classification +1

Online Agnostic Boosting via Regret Minimization

no code implementations NeurIPS 2020 Nataly Brukhim, Xinyi Chen, Elad Hazan, Shay Moran

Boosting is a widely used machine learning approach based on the idea of aggregating weak learning rules.

Closure Properties for Private Classification and Online Prediction

no code implementations 10 Mar 2020 Noga Alon, Amos Beimel, Shay Moran, Uri Stemmer

Let $\mathcal{H}$ be a class of boolean functions and consider a \emph{composed class} $\mathcal{H}'$ that is derived from $\mathcal{H}$ using some arbitrary aggregation rule (for example, $\mathcal{H}'$ may be the class of all 3-wise majority-votes of functions in $\mathcal{H}$).

Classification General Classification +1

Private Query Release Assisted by Public Data

no code implementations ICML 2020 Raef Bassily, Albert Cheu, Shay Moran, Aleksandar Nikolov, Jonathan Ullman, Zhiwei Steven Wu

In comparison, with only private samples, this problem cannot be solved even for simple query classes with VC-dimension one, and without any private samples, a larger public sample of size $d/\alpha^2$ is needed.

Proper Learning, Helly Number, and an Optimal SVM Bound

no code implementations 24 May 2020 Olivier Bousquet, Steve Hanneke, Shay Moran, Nikita Zhivotovskiy

It has recently been shown by Hanneke (2016) that the optimal sample complexity of PAC learning for any VC class $C$ is achieved by a particular improper learning algorithm, which outputs a specific majority-vote of hypotheses in $C$. This leaves open the question of when this bound can be achieved by proper learning algorithms, which are restricted to always output a hypothesis from $C$. In this paper we aim to characterize the classes for which the optimal sample complexity can be achieved by a proper learning algorithm.

PAC learning

A Limitation of the PAC-Bayes Framework

no code implementations NeurIPS 2020 Roi Livni, Shay Moran

PAC-Bayes is a useful framework for deriving generalization bounds which was introduced by McAllester ('98).

Generalization Bounds

Learning from Mixtures of Private and Public Populations

no code implementations NeurIPS 2020 Raef Bassily, Shay Moran, Anupama Nandi

Inspired by the above example, we consider a model in which the population $\mathcal{D}$ is a mixture of two sub-populations: a private sub-population $\mathcal{D}_{\sf priv}$ of private and sensitive data, and a public sub-population $\mathcal{D}_{\sf pub}$ of data with no privacy concerns.

PAC learning

On the Information Complexity of Proper Learners for VC Classes in the Realizable Case

no code implementations 5 Nov 2020 Mahdi Haghifam, Gintare Karolina Dziugaite, Shay Moran, Daniel M. Roy

We provide a negative resolution to a conjecture of Steinke and Zakynthinou (2020a), by showing that their bound on the conditional mutual information (CMI) of proper learners of Vapnik--Chervonenkis (VC) classes cannot be improved from $d \log n + 2$ to $O(d)$, where $n$ is the number of i.i.d.

Synthetic Data Generators -- Sequential and Private

no code implementations NeurIPS 2020 Olivier Bousquet, Roi Livni, Shay Moran

We study the sample complexity of private synthetic data generation over an unbounded sized class of statistical queries, and show that any class that is privately proper PAC learnable admits a private synthetic data generator (perhaps non-efficient).

Synthetic Data Generation

Adversarial Laws of Large Numbers and Optimal Regret in Online Classification

no code implementations 22 Jan 2021 Noga Alon, Omri Ben-Eliezer, Yuval Dagan, Shay Moran, Moni Naor, Eylon Yogev

Laws of large numbers guarantee that given a large enough sample from some population, the measure of any fixed sub-population is well-estimated by its frequency in the sample.

General Classification Open-Ended Question Answering +1

Online Learning with Simple Predictors and a Combinatorial Characterization of Minimax in 0/1 Games

no code implementations 2 Feb 2021 Steve Hanneke, Roi Livni, Shay Moran

More precisely, given any concept class $C$ and any hypothesis class $H$, we provide nearly tight bounds (up to a log factor) on the optimal mistake bounds for online learning $C$ using predictors from $H$. Our bound yields an exponential improvement over the best previously known bound by Chase and Freitag (2020).

A Theory of PAC Learnability of Partial Concept Classes

no code implementations 18 Jul 2021 Noga Alon, Steve Hanneke, Ron Holzman, Shay Moran

In fact we exhibit easy-to-learn partial concept classes which provably cannot be captured by the traditional PAC theory.

PAC learning

Agnostic Online Learning and Excellent Sets

no code implementations 12 Aug 2021 Maryanthe Malliaris, Shay Moran

We use algorithmic methods from online learning to revisit a key idea from the interaction of model theory and combinatorics, the existence of large "indivisible" sets, called "$\epsilon$-excellent," in $k$-edge stable graphs (equivalently, Littlestone classes).

Statistically Near-Optimal Hypothesis Selection

no code implementations 17 Aug 2021 Olivier Bousquet, Mark Braverman, Klim Efremenko, Gillat Kol, Shay Moran

We derive an optimal $2$-approximation learning strategy for the Hypothesis Selection problem, outputting $q$ such that $\mathsf{TV}(p, q) \leq 2 \cdot \mathrm{opt} + \epsilon$, with a (nearly) optimal sample complexity of $\tilde O(\log n/\epsilon^2)$.

PAC learning

Towards a Unified Information-Theoretic Framework for Generalization

no code implementations NeurIPS 2021 Mahdi Haghifam, Gintare Karolina Dziugaite, Shay Moran, Daniel M. Roy

We further show that an inherent limitation of proper learning of VC classes contradicts the existence of a proper learner with constant CMI, and it implies a negative resolution to an open problem of Steinke and Zakynthinou (2020).

Generalization Bounds

Uniform Brackets, Containers, and Combinatorial Macbeath Regions

no code implementations 19 Nov 2021 Kunal Dutta, Arijit Ghosh, Shay Moran

We study the connections between three seemingly different combinatorial structures - "uniform" brackets in statistics and probability theory, "containers" in online and distributed learning theory, and "combinatorial Macbeath regions", or Mnets in discrete and computational geometry.

Learning Theory

Multiclass Boosting and the Cost of Weak Learning

no code implementations NeurIPS 2021 Nataly Brukhim, Elad Hazan, Shay Moran, Indraneel Mukherjee, Robert E. Schapire

Here, we focus on an especially natural formulation in which the weak hypotheses are assumed to belong to an "easy-to-learn" base class, and the weak learner is an agnostic PAC learner for that class with respect to the standard classification loss.

Monotone Learning

no code implementations 10 Feb 2022 Olivier Bousquet, Amit Daniely, Haim Kaplan, Yishay Mansour, Shay Moran, Uri Stemmer

Our transformation readily implies monotone learners in a variety of contexts: for example it extends Pestov's result to classification tasks with an arbitrary number of labels.

Binary Classification Classification +1

A Characterization of Multiclass Learnability

no code implementations 3 Mar 2022 Nataly Brukhim, Daniel Carmon, Irit Dinur, Shay Moran, Amir Yehudayoff

This work resolves this problem: we characterize multiclass PAC learnability through the DS dimension, a combinatorial dimension defined by Daniely and Shalev-Shwartz (2014).

Learning Theory Open-Ended Question Answering +1

Active Learning with Label Comparisons

no code implementations 10 Apr 2022 Gal Yona, Shay Moran, Gal Elidan, Amir Globerson

We show that there is a natural class where this approach is sub-optimal, and that there is a more comparison-efficient active learning scheme.

Active Learning

A Resilient Distributed Boosting Algorithm

no code implementations 9 Jun 2022 Yuval Filmus, Idan Mehalel, Shay Moran

Given a learning task where the data is distributed among several parties, communication is one of the fundamental resources which the parties would like to minimize.

Understanding Generalization via Leave-One-Out Conditional Mutual Information

no code implementations 29 Jun 2022 Mahdi Haghifam, Shay Moran, Daniel M. Roy, Gintare Karolina Dziugaite

These leave-one-out variants of the conditional mutual information (CMI) of an algorithm (Steinke and Zakynthinou, 2020) are also seen to control the mean generalization error of learning algorithms with bounded loss functions.

Transductive Learning

Integral Probability Metrics PAC-Bayes Bounds

1 code implementation 1 Jul 2022 Ron Amit, Baruch Epstein, Shay Moran, Ron Meir

We present a PAC-Bayes-style generalization bound which enables the replacement of the KL-divergence with a variety of Integral Probability Metrics (IPM).

Generalization Bounds

Fine-Grained Distribution-Dependent Learning Curves

no code implementations 31 Aug 2022 Olivier Bousquet, Steve Hanneke, Shay Moran, Jonathan Shafer, Ilya Tolstikhin

We solve this problem in a principled manner, by introducing a combinatorial dimension called VCL that characterizes the best $d'$ for which $d'/n$ is a strong minimax lower bound.

Learning Theory PAC learning

On Optimal Learning Under Targeted Data Poisoning

no code implementations 6 Oct 2022 Steve Hanneke, Amin Karbasi, Mohammad Mahmoody, Idan Mehalel, Shay Moran

In this work we aim to characterize the smallest achievable error $\epsilon=\epsilon(\eta)$ by the learner in the presence of such an adversary in both realizable and agnostic settings.

Data Poisoning

The unstable formula theorem revisited via algorithms

no code implementations 9 Dec 2022 Maryanthe Malliaris, Shay Moran

This paper is about the surprising interaction of a foundational result from model theory about stability of theories, which seems to be inherently about the infinite, with algorithmic stability in learning.

On Differentially Private Online Predictions

no code implementations 27 Feb 2023 Haim Kaplan, Yishay Mansour, Shay Moran, Kobbi Nissim, Uri Stemmer

In this work we introduce an interactive variant of joint differential privacy towards handling online processes in which existing privacy definitions seem too restrictive.

Optimal Prediction Using Expert Advice and Randomized Littlestone Dimension

no code implementations 27 Feb 2023 Yuval Filmus, Steve Hanneke, Idan Mehalel, Shay Moran

We prove an analogous result for randomized learners: we show that the optimal expected mistake bound in learning a class $\mathcal{H}$ equals its randomized Littlestone dimension, which is the largest $d$ for which there exists a tree shattered by $\mathcal{H}$ whose average depth is $2d$.

Open-Ended Question Answering

List Online Classification

no code implementations 27 Mar 2023 Shay Moran, Ohad Sharon, Iska Tsubari, Sivan Yosebashvili

This dimension is a variation of the classical Littlestone dimension with the difference that binary mistake trees are replaced with $(k+1)$-ary mistake trees, where $k$ is the number of labels in the list.

Classification

Multiclass Online Learning and Uniform Convergence

no code implementations 30 Mar 2023 Steve Hanneke, Shay Moran, Vinod Raman, Unique Subedi, Ambuj Tewari

We argue that the best expert has regret at most the Littlestone dimension relative to the best concept in the class.

Binary Classification

Replicability and stability in learning

no code implementations 7 Apr 2023 Zachary Chase, Shay Moran, Amir Yehudayoff

Impagliazzo et al. showed how to boost any replicable algorithm so that it produces the same output with probability arbitrarily close to 1.

A Unified Characterization of Private Learnability via Graph Theory

no code implementations 8 Apr 2023 Noga Alon, Shay Moran, Hilla Schefler, Amir Yehudayoff

Learning $\mathcal{H}$ under pure DP is captured by the fractional clique number of $G$.

Statistical Indistinguishability of Learning Algorithms

no code implementations 23 May 2023 Alkis Kalavasis, Amin Karbasi, Shay Moran, Grigoris Velegkas

When two different parties use the same learning rule on their own data, how can we test whether the distributions of the two outcomes are similar?

Can Copyright be Reduced to Privacy?

no code implementations 24 May 2023 Niva Elkin-Koren, Uri Hacohen, Roi Livni, Shay Moran

In this work, we examine whether such algorithmic stability techniques are suitable to ensure the responsible use of generative models without inadvertently violating copyright laws.

Adversarial Resilience in Sequential Prediction via Abstention

no code implementations NeurIPS 2023 Surbhi Goel, Steve Hanneke, Shay Moran, Abhishek Shetty

We study the problem of sequential prediction in the stochastic setting with an adversary that is allowed to inject clean-label adversarial (or out-of-distribution) examples.

Universal Rates for Multiclass Learning

no code implementations 5 Jul 2023 Steve Hanneke, Shay Moran, Qian Zhang

Pseudo-cubes are a structure rooted in the work of Daniely and Shalev-Shwartz (2014), and recently shown by Brukhim, Carmon, Dinur, Moran, and Yehudayoff (2022) to characterize PAC learnability (i.e., uniform rates) for multiclass classification.

Binary Classification

The Bayesian Stability Zoo

no code implementations NeurIPS 2023 Shay Moran, Hilla Schefler, Jonathan Shafer

We show that many definitions of stability found in the learning theory literature are equivalent to one another.

Learning Theory

Local Borsuk-Ulam, Stability, and Replicability

no code implementations 2 Nov 2023 Zachary Chase, Bogdan Chornomaz, Shay Moran, Amir Yehudayoff

To offer a broader and more comprehensive view of our topological approach, we prove a local variant of the Borsuk-Ulam theorem in topology and a result in combinatorics concerning Kneser colorings.

A Trichotomy for Transductive Online Learning

no code implementations NeurIPS 2023 Steve Hanneke, Shay Moran, Jonathan Shafer

We present new upper and lower bounds on the number of learner mistakes in the "transductive" online learning setting of Ben-David, Kushilevitz and Mansour (1997).

Bandit-Feedback Online Multiclass Classification: Variants and Tradeoffs

no code implementations 12 Feb 2024 Yuval Filmus, Steve Hanneke, Idan Mehalel, Shay Moran

We demonstrate that the optimal mistake bound under bandit feedback is at most $O(k)$ times higher than the optimal mistake bound in the full information case, where $k$ represents the number of labels.

Classification

Learnability Gaps of Strategic Classification

no code implementations 29 Feb 2024 Lee Cohen, Yishay Mansour, Shay Moran, Han Shao

We essentially show that any learnable class is also strategically learnable: we first consider a fully informative setting, where the manipulation structure (modeled by a manipulation graph $G^\star$) is known, and during training the learner has access to both the pre-manipulation and post-manipulation data.

Classification Multi-Label Learning

Learning-Augmented Algorithms with Explicit Predictors

no code implementations 12 Mar 2024 Marek Elias, Haim Kaplan, Yishay Mansour, Shay Moran

Recent advances in algorithmic design show how to utilize predictions obtained by machine learning models from past and present data.

Scheduling

List Sample Compression and Uniform Convergence

no code implementations 16 Mar 2024 Steve Hanneke, Shay Moran, Tom Waknine

In classical PAC learning, both uniform convergence and sample compression satisfy a form of "completeness": whenever a class is learnable, it can also be learned by a learning rule that adheres to these principles.

PAC learning
