no code implementations • NeurIPS 2023 • Shai Ben-David, Alex Bie, Gautam Kamath, Tosca Lechner
We examine the relationship between learnability and robust (or agnostic) learnability for the problem of distribution learning.
no code implementations • 20 Nov 2023 • Eli Verwimp, Rahaf Aljundi, Shai Ben-David, Matthias Bethge, Andrea Cossu, Alexander Gepperth, Tyler L. Hayes, Eyke Hüllermeier, Christopher Kanan, Dhireesha Kudithipudi, Christoph H. Lampert, Martin Mundt, Razvan Pascanu, Adrian Popescu, Andreas S. Tolias, Joost Van de Weijer, Bing Liu, Vincenzo Lomonaco, Tinne Tuytelaars, Gido M. van de Ven
Continual learning is a subfield of machine learning that aims to allow models to learn continually from new data, accumulating knowledge without forgetting what was learned in the past.
no code implementations • 8 Feb 2023 • Niki Hasrati, Shai Ben-David
Finally, we consider online learning with no requirements for optimality, and show, under a weaker notion of computability, that the finiteness of the Littlestone dimension no longer characterizes whether a class is c-online learnable with finite mistake bound.
no code implementations • 7 Jul 2021 • Tosca Lechner, Shai Ben-David, Sushant Agarwal, Nivasini Ananthakrishnan
The goal of such representations is that a model trained on data under the representation (e.g., a classifier) will be guaranteed to respect some fairness constraints.
no code implementations • NeurIPS 2021 • Tosca Lechner, Nivasini Ananthakrishnan, Sushant Agarwal, Shai Ben-David
With the growing awareness of fairness in machine learning and the realization of the central role that data representation plays in data processing tasks, there is an obvious interest in notions of fair data representations.
no code implementations • 26 Oct 2020 • Gintare Karolina Dziugaite, Shai Ben-David, Daniel M. Roy
We then model the act of enforcing interpretability as that of performing empirical risk minimization over the set of interpretable hypotheses.
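To make this framing concrete, here is a minimal sketch of ERM restricted to an "interpretable" hypothesis class, hypothetically taken to be single-feature decision stumps; the class choice and all names are illustrative, not from the paper.

```python
# Sketch: empirical risk minimization restricted to an "interpretable"
# hypothesis class (here, hypothetically, single-feature decision stumps).
import numpy as np

def stump(feature, threshold, sign):
    """A one-feature threshold rule: sign * (x[feature] > threshold)."""
    return lambda X: sign * (2 * (X[:, feature] > threshold) - 1)

def erm_over_stumps(X, y):
    """Return the stump with lowest empirical 0-1 risk on (X, y)."""
    best, best_risk = None, np.inf
    for f in range(X.shape[1]):
        for t in np.unique(X[:, f]):
            for s in (-1, 1):
                risk = np.mean(stump(f, t, s)(X) != y)
                if risk < best_risk:
                    best, best_risk = (f, t, s), risk
    return best, best_risk

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = np.sign(X[:, 1] - 0.2)          # ground truth depends on feature 1
print(erm_over_stumps(X, y))        # best stump splits on feature 1, near 0.2
```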
no code implementations • 31st International Conference on Algorithmic Learning Theory 2020 • Sushant Agarwal, Nivasini Ananthakrishnan, Shai Ben-David, Tosca Lechner, Ruth Urner
We initiate a study of learning with computable learners and computable output predictors.
no code implementations • 28 May 2019 • Christina Göpfert, Shai Ben-David, Olivier Bousquet, Sylvain Gelly, Ilya Tolstikhin, Ruth Urner
In semi-supervised classification, one is given access both to labeled and unlabeled data.
no code implementations • Nature Machine Intelligence 2019 • Shai Ben-David, Pavel Hrubeš, Shay Moran, Amir Shpilka, Amir Yehudayoff
We show that, in some cases, a solution to the ‘estimating the maximum’ problem is equivalent to the continuum hypothesis.
no code implementations • NeurIPS 2018 • Hassan Ashtiani, Shai Ben-David, Nicholas Harvey, Christopher Liaw, Abbas Mehrabian, Yaniv Plan
We prove that $\tilde{\Theta}(k d^2 / \varepsilon^2)$ samples are necessary and sufficient for learning a mixture of $k$ Gaussians in $\mathbb{R}^d$, up to error $\varepsilon$ in total variation distance.
no code implementations • 10 Oct 2018 • Shrinu Kushagra, Shai Ben-David, Ihab Ilyas
In this work, we view de-duplication as a clustering problem where the goal is to put records corresponding to the same physical entity into the same cluster and records corresponding to different physical entities into different clusters.
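One minimal way to realize this framing (an illustrative sketch with a hypothetical string-similarity rule, not the paper's method): link records whose pairwise similarity exceeds a threshold, then take connected components as the entity clusters.

```python
# De-duplication as clustering, as a sketch: link records whose similarity
# exceeds a threshold; connected components become the entity clusters.
# The similarity function and threshold are illustrative assumptions.
from difflib import SequenceMatcher

def similarity(a, b):
    return SequenceMatcher(None, a, b).ratio()

def dedup_clusters(records, threshold=0.8):
    parent = list(range(len(records)))      # union-find over record indices
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    for i in range(len(records)):
        for j in range(i + 1, len(records)):
            if similarity(records[i], records[j]) >= threshold:
                parent[find(i)] = find(j)
    clusters = {}
    for i in range(len(records)):
        clusters.setdefault(find(i), []).append(records[i])
    return list(clusters.values())

print(dedup_clusters(["John Smith", "Jon Smith", "Alice Liu"]))
# [['John Smith', 'Jon Smith'], ['Alice Liu']]
```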
no code implementations • 22 May 2018 • Shai Ben-David
Unsupervised learning is widely recognized as one of the most important challenges facing machine learning nowadays.
2 code implementations • NeurIPS 2018 • Michele Donini, Luca Oneto, Shai Ben-David, John Shawe-Taylor, Massimiliano Pontil
It encourages the conditional risk of the learned classifier to be approximately constant with respect to the sensitive variable.
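As a rough illustration of the constraint being described (a sketch only; the paper's released code implements the actual method), one can measure how far the group-conditional empirical risks are from being constant across values of the sensitive variable:

```python
# Sketch of the idea: compare the classifier's empirical risks conditioned
# on each value of the sensitive attribute; a gap near zero means the
# conditional risk is approximately constant. Illustrative only.
import numpy as np

def group_conditional_risks(y_true, y_pred, sensitive):
    risks = {}
    for s in np.unique(sensitive):
        mask = sensitive == s
        risks[s] = np.mean(y_pred[mask] != y_true[mask])
    return risks

def fairness_gap(y_true, y_pred, sensitive):
    r = list(group_conditional_risks(y_true, y_pred, sensitive).values())
    return max(r) - min(r)

y_true = np.array([1, 1, -1, -1, 1, -1])
y_pred = np.array([1, -1, -1, 1, 1, -1])
s      = np.array([0, 0, 0, 1, 1, 1])
print(group_conditional_risks(y_true, y_pred, s))  # {0: 1/3, 1: 1/3}
print(fairness_gap(y_true, y_pred, s))             # 0.0
```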
no code implementations • 30 Nov 2017 • Shrinu Kushagra, Yao-Liang Yu, Shai Ben-David
We focus on the $k$-means objective and we prove that the regularised version of $k$-means is NP-hard even for $k=1$.
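As a sketch of what a regularized $k$-means objective of this flavor looks like, assuming a flat per-point outlier penalty $\lambda$ (the paper's exact formulation may differ):

```python
# Sketch of a regularized k-means objective: each point either pays its
# squared distance to the nearest center or is discarded as an outlier
# for a flat penalty lam, whichever is cheaper. Illustrative formulation.
import numpy as np

def regularized_kmeans_cost(X, centers, lam):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).min(axis=1)
    return np.minimum(d2, lam).sum()

X = np.array([[0.0, 0.0], [0.1, 0.0], [10.0, 10.0]])  # last point is an outlier
centers = np.array([[0.05, 0.0]])                     # k = 1
print(regularized_kmeans_cost(X, centers, lam=1.0))   # 0.0025 + 0.0025 + 1.0
```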
no code implementations • 14 Nov 2017 • Shai Ben-David, Pavel Hrubes, Shay Moran, Amir Shpilka, Amir Yehudayoff
We consider the following statistical estimation problem: given a family F of real-valued functions over some domain X and an i.i.d. sample drawn from an unknown distribution P over X, find a function in F whose expectation with respect to P is close to the maximal expectation over F.
no code implementations • 14 Oct 2017 • Hassan Ashtiani, Shai Ben-David, Nick Harvey, Christopher Liaw, Abbas Mehrabian, Yaniv Plan
We prove that $\tilde{\Theta}(k d^2 / \varepsilon^2)$ samples are necessary and sufficient for learning a mixture of $k$ Gaussians in $\mathbb{R}^d$, up to error $\varepsilon$ in total variation distance.
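Ignoring constants and logarithmic factors, the bound yields a quick back-of-the-envelope sample-size estimate. In the toy sketch below, the leading constant 1 is a placeholder, not a value from the paper:

```python
# Toy illustration of the ~Theta(k d^2 / eps^2) rate (constants and log
# factors dropped; the leading constant 1 is a placeholder).
def mixture_sample_estimate(k, d, eps):
    return k * d**2 / eps**2

print(mixture_sample_estimate(k=3, d=10, eps=0.1))  # 3 * 100 / 0.01 = 30000.0
```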
no code implementations • 6 Jun 2017 • Hassan Ashtiani, Shai Ben-David, Abbas Mehrabian
Let $\mathcal F$ be an arbitrary class of probability distributions, and let $\mathcal{F}^k$ denote the class of $k$-mixtures of elements of $\mathcal F$.
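For concreteness, a minimal sketch of sampling from a $k$-mixture, taking the base class $\mathcal F$ to be one-dimensional Gaussians purely for illustration:

```python
# Sketch: sampling from a k-mixture of base distributions (here,
# hypothetically, 1-D Gaussians): pick a component by its mixing
# weight, then draw from that component.
import numpy as np

def sample_mixture(weights, components, n, rng=np.random.default_rng(0)):
    idx = rng.choice(len(components), size=n, p=weights)
    return np.array([components[i](rng) for i in idx])

components = [lambda r: r.normal(-2.0, 1.0), lambda r: r.normal(3.0, 0.5)]
print(sample_mixture([0.3, 0.7], components, n=5))
```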
no code implementations • NeurIPS 2016 • Hassan Ashtiani, Shrinu Kushagra, Shai Ben-David
We show that there is a trade-off between computational complexity and query complexity; we prove that for the case of $k$-means clustering (i.e., when the expert conforms to a solution of $k$-means), having access to relatively few such queries allows efficient solutions to otherwise NP-hard problems.
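A minimal sketch of the same-cluster query model: the expert answers whether two points fall in the same cluster of a fixed (hidden) target clustering. The interface below is illustrative; it is not the paper's algorithm.

```python
# Sketch of the same-cluster query model: the expert answers whether two
# points belong to the same cluster of a fixed target clustering.
def make_oracle(target_labels):
    def same_cluster(i, j):
        return target_labels[i] == target_labels[j]
    return same_cluster

target = [0, 0, 1, 1, 0]       # the expert's (hidden) clustering
ask = make_oracle(target)
print(ask(0, 1), ask(1, 2))    # True False
```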
no code implementations • 21 Feb 2016 • Anastasia Pentina, Shai Ben-David
We consider the problem of learning kernels for use in SVM classification in the multi-task and lifelong scenarios and provide generalization bounds on the error of a large margin classifier.
no code implementations • 19 Oct 2015 • Shai Ben-David
I wish to present a critical bird's-eye overview of the results published on this issue so far and to call attention to the gap between available and desirable results.
no code implementations • 19 Jul 2015 • Shai Ben-David
Arguably the most influential consequence of the VC analysis is the fundamental theorem of statistical machine learning, stating that a concept class is learnable (in some precise sense) if and only if its VC-dimension is finite.
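For reference, one standard quantitative form of this theorem, for agnostic PAC learning of a class $\mathcal{H}$ with VC-dimension $d$ (a textbook statement, not quoted from the abstract):

```latex
% Agnostic PAC sample complexity of a class \mathcal{H} with VC-dimension d
% (standard textbook form, up to universal constants):
m_{\mathcal{H}}(\varepsilon, \delta)
  \;=\; \Theta\!\left( \frac{d + \log(1/\delta)}{\varepsilon^{2}} \right)
```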
no code implementations • 19 Jun 2015 • Hassan Ashtiani, Shai Ben-David
The algorithm designer then uses that sample to come up with a data representation under which $k$-means clustering results in a clustering (of the full data set) that is aligned with the user's clustering.
no code implementations • 2 Jan 2015 • Shai Ben-David
In particular, we list some implied requirements for notions of clusterability.
no code implementations • 13 Aug 2013 • Amit Daniely, Sivan Sabato, Shai Ben-David, Shai Shalev-Shwartz
We study the sample complexity of multiclass prediction in several learning settings.
no code implementations • 8 Sep 2011 • Margareta Ackerman, Shai Ben-David, Simina Brânzei, David Loker
One of the most prominent challenges in clustering is "the user's dilemma," which is the problem of selecting an appropriate clustering algorithm for a specific task.
no code implementations • NeurIPS 2010 • Margareta Ackerman, Shai Ben-David, David Loker
We propose to address this problem by distilling abstract properties of the input-output behavior of different clustering paradigms.
no code implementations • NeurIPS 2008 • Shai Ben-David, Margareta Ackerman
In this respect, we follow up on the work of Kleinberg (Kleinberg), which showed an impossibility result for such an axiomatization.