no code implementations • 25 Feb 2023 • Daniel Kane, Sihan Liu, Shachar Lovett, Gaurav Mahajan, Csaba Szepesvári, Gellért Weisz
The rewards in this game are chosen such that if the learner achieves large reward, then the learner's actions can be used to simulate solving a variant of 3-SAT in which (a) each variable appears in a bounded number of clauses, and (b) if an instance has no satisfying assignment, then no assignment satisfies more than a $(1-\epsilon)$-fraction of its clauses.
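To make property (b) concrete, here is a minimal, hypothetical sketch (not code from the paper) that measures the fraction of clauses a truth assignment satisfies; under the gap property, every assignment to an unsatisfiable instance is capped at a $(1-\epsilon)$-fraction:

```python
# Minimal hypothetical sketch (not from the paper): a 3-SAT instance is a list
# of clauses; each clause is a tuple of signed literals (+i / -i for x_i).
def satisfied_fraction(clauses, assignment):
    """Fraction of clauses satisfied by a truth assignment (dict: var -> bool)."""
    def lit_true(lit):
        value = assignment[abs(lit)]
        return value if lit > 0 else not value
    return sum(any(lit_true(l) for l in clause) for clause in clauses) / len(clauses)

# (x1 or x2 or not x3) and (not x1 or x3 or x2)
clauses = [(1, 2, -3), (-1, 3, 2)]
print(satisfied_fraction(clauses, {1: True, 2: False, 3: True}))  # -> 1.0
```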
no code implementations • 13 Feb 2023 • Max Hopkins, Daniel M. Kane, Shachar Lovett, Gaurav Mahajan
We study a foundational variant of Valiant's and Vapnik and Chervonenkis's Probably Approximately Correct (PAC) learning in which the adversary is restricted to a known family of marginal distributions $\mathscr{P}$.
no code implementations • 11 Feb 2022 • Daniel Kane, Sihan Liu, Shachar Lovett, Gaurav Mahajan
In this work, we make progress on this open problem by presenting the first computational lower bound for RL with linear function approximation: unless NP=RP, no randomized polynomial-time algorithm exists for MDPs with deterministic transitions, a constant number of actions, and linear optimal value functions.
no code implementations • 8 Nov 2021 • Max Hopkins, Daniel M. Kane, Shachar Lovett, Gaurav Mahajan
The equivalence of realizable and agnostic learnability is a fundamental phenomenon in learning theory.
no code implementations • 19 Mar 2021 • Simon S. Du, Sham M. Kakade, Jason D. Lee, Shachar Lovett, Gaurav Mahajan, Wen Sun, Ruosong Wang
The framework incorporates nearly all existing models in which a polynomial sample complexity is achievable, and, notably, also includes new models, such as the Linear $Q^*/V^*$ model in which both the optimal $Q$-function and the optimal $V$-function are linear in some known feature space.
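As an illustration of the Linear $Q^*/V^*$ condition, here is a hedged, self-contained sketch (the feature maps `phi`, `psi` and weights `theta`, `nu` are placeholders, not from the paper) in which both optimal value functions are linear in known features:

```python
import numpy as np

# Hypothetical illustration (phi, psi, theta, nu are placeholder names):
# both the optimal Q-function and the optimal V-function are linear in
# known feature maps, with unknown weight vectors to be recovered.
d = 4
rng = np.random.default_rng(0)
theta = rng.normal(size=d)          # unknown weights realizing Q*
nu = rng.normal(size=d)             # unknown weights realizing V*

def phi(state, action):             # known state-action features (toy choice)
    return np.tanh(state + action)

def psi(state):                     # known state features (toy choice)
    return np.cos(state)

def q_star(state, action):
    return theta @ phi(state, action)

def v_star(state):                  # should equal max over actions of q_star
    return nu @ psi(state)

s, a = rng.normal(size=d), rng.normal(size=d)
print(q_star(s, a), v_star(s))
```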
no code implementations • 9 Feb 2021 • Max Hopkins, Daniel Kane, Shachar Lovett, Michal Moshkovitz
The explosive growth of easily accessible unlabeled data has led to growing interest in active learning, a paradigm in which data-hungry learning algorithms adaptively select informative examples in order to lower prohibitively expensive labeling costs.
no code implementations • NeurIPS 2020 • Alon Gonen, Shachar Lovett, Michal Moshkovitz
We propose a candidate solution for the case of realizable strong learning under a known distribution, based on the SQ dimension of neighboring distributions.
no code implementations • 23 Apr 2020 • Max Hopkins, Daniel M. Kane, Shachar Lovett, Gaurav Mahajan
Given a finite set $X \subset \mathbb{R}^d$ and a binary linear classifier $c: \mathbb{R}^d \to \{0, 1\}$, how many queries of the form $c(x)$ are required to learn the label of every point in $X$?
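For intuition, the trivial strategy queries every point, so $|X|$ queries always suffice; the question is when far fewer are enough. A small illustrative sketch of this baseline (the classifier below is hypothetical, not from the paper):

```python
import numpy as np

# Illustrative baseline (the halfspace w is a hypothetical hidden target):
# querying c(x) on each point labels all of X with exactly |X| queries.
w = np.array([1.0, -2.0, 0.5])      # hidden halfspace normal

def c(x):
    return int(w @ x > 0)

X = np.random.default_rng(1).normal(size=(10, 3))
labels = [c(x) for x in X]          # 10 queries for 10 points
print(labels)
```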
no code implementations • 8 Feb 2020 • Alon Gonen, Shachar Lovett, Michal Moshkovitz
In this paper we aim to develop combinatorial dimensions that characterize bounded memory learning.
no code implementations • 15 Jan 2020 • Max Hopkins, Daniel Kane, Shachar Lovett, Gaurav Mahajan
With the explosion of massive, widely available unlabeled data in recent years, finding label- and time-efficient, robust learning algorithms has become ever more important in theory and in practice.
no code implementations • NeurIPS 2020 • Max Hopkins, Daniel M. Kane, Shachar Lovett
While previous results show that active learning performs no better than its supervised alternative for important concept classes such as linear separators, we show that by adding weak distributional assumptions and allowing comparison queries, active learning requires exponentially fewer samples.
1 code implementation • 3 Aug 2017 • Nikhil Bansal, Daniel Dadush, Shashwat Garg, Shachar Lovett
An important result in discrepancy due to Banaszczyk states that for any set of $n$ vectors in $\mathbb{R}^m$ of $\ell_2$ norm at most $1$ and any convex body $K$ in $\mathbb{R}^m$ of Gaussian measure at least half, there exists a $\pm 1$ combination of these vectors which lies in $5K$.
Data Structures and Algorithms • Discrete Mathematics
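The following toy sketch sets up the discrepancy question for the special case where $K$ is a scaled $\ell_\infty$ ball; it uses naive random signs, not the paper's algorithm, purely to make the quantities concrete:

```python
import numpy as np

# Toy setup (naive random signs, not the paper's algorithm): unit vectors
# v_1..v_n in R^m, signs eps_i in {-1,+1}; we measure the l_infinity norm of
# the signed sum, i.e. the smallest t with the sum inside t*K for K the unit
# l_infinity ball. Banaszczyk guarantees signs reaching 5K for any convex K
# of Gaussian measure at least 1/2.
rng = np.random.default_rng(42)
n, m = 50, 20
V = rng.normal(size=(n, m))
V /= np.linalg.norm(V, axis=1, keepdims=True)   # rows have l2 norm 1

signs = rng.choice([-1, 1], size=n)             # naive random coloring
print("l_inf norm of signed sum:", np.abs(signs @ V).max())
```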
no code implementations • 4 May 2017 • Daniel M. Kane, Shachar Lovett, Shay Moran
We construct near optimal linear decision trees for a variety of decision problems in combinatorics and discrete geometry.
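As a concrete (hypothetical) illustration of the model, a linear decision tree branches on the sign of a linear form at each internal node, and its depth counts the number of such queries on a worst-case input:

```python
import numpy as np

# Hypothetical linear decision tree: each internal node queries the sign of
# a linear form <w, x>; leaves carry answers. Weights here are arbitrary,
# chosen only to illustrate the data structure.
def evaluate(tree, x):
    """tree is a leaf value or (w, left, right): go left if <w, x> <= 0."""
    while isinstance(tree, tuple):
        w, left, right = tree
        tree = left if w @ x <= 0 else right
    return tree

# A depth-2 tree over R^2.
tree = (np.array([1.0, -1.0]),
        (np.array([0.0, 1.0]), "A", "B"),
        (np.array([1.0, 1.0]), "C", "D"))
print(evaluate(tree, np.array([0.5, 2.0])))   # two sign queries -> "B"
```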
no code implementations • 11 Apr 2017 • Daniel M. Kane, Shachar Lovett, Shay Moran, Jiapeng Zhang
We identify a combinatorial dimension, called the \emph{inference dimension}, that captures the query complexity when each additional query is determined by $O(1)$ examples (such as comparison queries, each of which is determined by the two compared examples).
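A minimal sketch of the two query types for halfspaces (the hidden weight vector `w` is an assumed example, not from the paper): a label query reveals $\mathrm{sign}(\langle w, x \rangle)$, while a comparison query reveals which of two points has larger margin, so its answer is determined by the two compared examples alone:

```python
import numpy as np

# Illustrative halfspace setup: w is the hidden target, unknown to the learner.
w = np.array([2.0, -1.0, 0.5])

def label_query(x):
    # Reveals the label of a single point.
    return int(np.sign(w @ x))

def comparison_query(x1, x2):
    # Reveals which of two points has larger margin; determined by x1, x2 only.
    return int(np.sign(w @ x1 - w @ x2))

x1, x2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
print(label_query(x1), comparison_query(x1, x2))   # -> 1 1
```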