Search Results for author: Shachar Lovett

Found 14 papers, 1 paper with code

Exponential Hardness of Reinforcement Learning with Linear Function Approximation

no code implementations · 25 Feb 2023 · Daniel Kane, Sihan Liu, Shachar Lovett, Gaurav Mahajan, Csaba Szepesvári, Gellért Weisz

The rewards in this game are chosen such that if the learner achieves large reward, then the learner's actions can be used to simulate solving a variant of 3-SAT where (a) each variable shows up in a bounded number of clauses, and (b) if an instance has no solutions, then it also has no assignment satisfying more than a $(1-\epsilon)$-fraction of clauses.
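
To make the quantity in (b) concrete, here is a minimal sketch (mine, not from the paper) that measures the fraction of clauses an assignment satisfies; the gap promise says unsatisfiable instances never exceed a $(1-\epsilon)$-fraction.

```python
# Minimal sketch (illustrative, not from the paper): the satisfied-clause
# fraction that the gap promise in (b) constrains. Literals are nonzero
# ints; the sign encodes polarity, abs() gives the variable index.

def satisfied_fraction(clauses, assignment):
    """assignment maps variable index -> bool."""
    satisfied = sum(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )
    return satisfied / len(clauses)

# Example: (x1 or x2 or not x3) and (not x1 or x2 or x3)
clauses = [(1, 2, -3), (-1, 2, 3)]
print(satisfied_fraction(clauses, {1: True, 2: False, 3: True}))  # 1.0
```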

Learning Theory reinforcement-learning +1

Do PAC-Learners Learn the Marginal Distribution?

no code implementations · 13 Feb 2023 · Max Hopkins, Daniel M. Kane, Shachar Lovett, Gaurav Mahajan

We study a foundational variant of the Probably Approximately Correct (PAC) learning of Valiant and of Vapnik and Chervonenkis, in which the adversary is restricted to a known family of marginal distributions $\mathscr{P}$.
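
In standard notation (an illustrative rendering, not copied from the paper), the restriction amounts to requiring the PAC guarantee only over distributions whose input marginal lies in $\mathscr{P}$:

```latex
% Distribution-family PAC learning, in standard notation (illustrative,
% not quoted from the paper): the guarantee is demanded only when the
% marginal of $D$ on inputs lies in the known family $\mathscr{P}$.
\[
\forall D \ \text{with} \ D_X \in \mathscr{P}: \quad
\Pr_{S \sim D^m}\!\Big[\mathrm{err}_D\big(A(S)\big)
  \le \min_{h \in H} \mathrm{err}_D(h) + \epsilon\Big] \ge 1 - \delta .
\]
```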

PAC learning

Computational-Statistical Gaps in Reinforcement Learning

no code implementations · 11 Feb 2022 · Daniel Kane, Sihan Liu, Shachar Lovett, Gaurav Mahajan

In this work, we make progress on this open problem by presenting the first computational lower bound for RL with linear function approximation: unless NP=RP, no randomized polynomial-time algorithm exists for deterministic transition MDPs with a constant number of actions and linear optimal value functions.

reinforcement-learning Reinforcement Learning (RL)

Realizable Learning is All You Need

no code implementations · 8 Nov 2021 · Max Hopkins, Daniel M. Kane, Shachar Lovett, Gaurav Mahajan

The equivalence of realizable and agnostic learnability is a fundamental phenomenon in learning theory.
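
For orientation, the two settings differ only in the benchmark; these are the standard definitions, stated here for reference rather than taken from the paper:

```latex
% Realizable vs. agnostic PAC, in standard notation: the realizable
% setting promises a zero-error hypothesis in the class, the agnostic
% setting drops the promise and competes with the best hypothesis.
\[
\text{realizable: } \exists\, h^* \in H \ \text{with} \ \mathrm{err}_D(h^*) = 0,
\ \text{and the learner must achieve } \mathrm{err}_D(A(S)) \le \epsilon ;
\]
\[
\text{agnostic: } \mathrm{err}_D(A(S)) \le \min_{h \in H} \mathrm{err}_D(h) + \epsilon
\ \text{with no promise on } D .
\]
```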

Learning Theory PAC learning

Bilinear Classes: A Structural Framework for Provable Generalization in RL

no code implementations · 19 Mar 2021 · Simon S. Du, Sham M. Kakade, Jason D. Lee, Shachar Lovett, Gaurav Mahajan, Wen Sun, Ruosong Wang

The framework incorporates nearly all existing models in which a polynomial sample complexity is achievable, and, notably, also includes new models, such as the Linear $Q^*/V^*$ model in which both the optimal $Q$-function and the optimal $V$-function are linear in some known feature space.
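
In symbols, the Linear $Q^*/V^*$ assumption reads as follows (feature maps $\phi, \psi$ are known, weight vectors are unknown; the notation is mine):

```latex
% Linear Q*/V* model: both optimal value functions are linear in
% known feature maps (illustrative notation, not the paper's).
\[
Q^*(s, a) = \langle w^*, \phi(s, a) \rangle
\qquad \text{and} \qquad
V^*(s) = \langle v^*, \psi(s) \rangle .
\]
```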

Bounded Memory Active Learning through Enriched Queries

no code implementations · 9 Feb 2021 · Max Hopkins, Daniel Kane, Shachar Lovett, Michal Moshkovitz

The explosive growth of easily accessible unlabeled data has led to growing interest in active learning, a paradigm in which data-hungry learning algorithms adaptively select informative examples in order to lower prohibitively expensive labeling costs.

Active Learning

Towards a Combinatorial Characterization of Bounded-Memory Learning

no code implementations · NeurIPS 2020 · Alon Gonen, Shachar Lovett, Michal Moshkovitz

We propose a candidate solution for the case of realizable strong learning under a known distribution, based on the SQ dimension of neighboring distributions.
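
For reference, one standard form of the SQ dimension from the literature (the paper's variant over neighboring distributions refines this; the statement below is not quoted from the paper):

```latex
% SQ dimension of a class $C$ of $\pm 1$-valued functions under a
% distribution $D$ (standard definition from the literature): the
% largest $d$ admitting $f_1, \dots, f_d \in C$ that are nearly
% uncorrelated under $D$,
\[
\big|\, \mathbb{E}_{x \sim D}\big[f_i(x)\, f_j(x)\big] \,\big| \le \tfrac{1}{d}
\quad \text{for all } i \ne j .
\]
```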

PAC learning

Point Location and Active Learning: Learning Halfspaces Almost Optimally

no code implementations · 23 Apr 2020 · Max Hopkins, Daniel M. Kane, Shachar Lovett, Gaurav Mahajan

Given a finite set $X \subset \mathbb{R}^d$ and a binary linear classifier $c: \mathbb{R}^d \to \{0, 1\}$, how many queries of the form $c(x)$ are required to learn the label of every point in $X$?
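
A toy rendering of the query model (my sketch, not the paper's algorithm): the trivial strategy spends one query per point, while halfspaces induce only about $|X|^d$ labelings of $X$, so roughly $d \log |X|$ queries are necessary; the paper's title refers to nearly matching that floor.

```python
import numpy as np

# Toy version of the query model (illustrative, not the paper's method):
# the learner may query c(x) at points of its choosing and must output
# the label of every point in X. The naive baseline below spends one
# query per point; the target is roughly d * polylog(|X|) queries.

rng = np.random.default_rng(0)
d, n = 3, 100
X = rng.standard_normal((n, d))
w = rng.standard_normal(d)              # hidden halfspace c(x) = 1[<w,x> >= 0]

queries = 0
def c(x):
    global queries
    queries += 1
    return int(x @ w >= 0)

labels = [c(x) for x in X]              # naive baseline: one query per point
print(queries)                          # 100
```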

Active Learning Position

Towards a combinatorial characterization of bounded memory learning

no code implementations · 8 Feb 2020 · Alon Gonen, Shachar Lovett, Michal Moshkovitz

In this paper we aim to develop combinatorial dimensions that characterize bounded memory learning.

PAC learning

Noise-tolerant, Reliable Active Classification with Comparison Queries

no code implementations · 15 Jan 2020 · Max Hopkins, Daniel Kane, Shachar Lovett, Gaurav Mahajan

With the explosion of massive, widely available unlabeled data in recent years, finding label- and time-efficient, robust learning algorithms has become ever more important in theory and in practice.

Active Learning Classification +1

The Power of Comparisons for Actively Learning Linear Classifiers

no code implementations · NeurIPS 2020 · Max Hopkins, Daniel M. Kane, Shachar Lovett

While previous results show that active learning performs no better than its supervised alternative for important concept classes such as linear separators, we show that by adding weak distributional assumptions and allowing comparison queries, active learning requires exponentially fewer samples.
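
The two query types can be pictured with toy oracles (a hedged sketch; the oracle forms are standard for this line of work, the surrounding names are mine):

```python
import numpy as np

# Hedged sketch of the two query types for a hidden linear classifier
# sign(<w, x>). label() is the usual membership query; compare() asks
# which of two points sits deeper on the positive side, i.e. the sign
# of <w, x> - <w, x'>.

rng = np.random.default_rng(1)
w = rng.standard_normal(4)               # hidden weight vector

def label(x):
    return int(np.sign(x @ w))

def compare(x, x_prime):
    return int(np.sign((x - x_prime) @ w))

x, y = rng.standard_normal(4), rng.standard_normal(4)
print(label(x), compare(x, y))
```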

Active Learning PAC learning

The Gram-Schmidt Walk: A Cure for the Banaszczyk Blues

1 code implementation · 3 Aug 2017 · Nikhil Bansal, Daniel Dadush, Shashwat Garg, Shachar Lovett

An important result in discrepancy theory due to Banaszczyk states that for any set of $n$ vectors in $\mathbb{R}^m$ of $\ell_2$ norm at most $1$ and any convex body $K$ in $\mathbb{R}^m$ of Gaussian measure at least half, there exists a $\pm 1$ combination of these vectors which lies in $5K$.
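
Since this is the one entry with code, here is a compact sketch of the Gram-Schmidt walk as I read it from the paper; the authors' released implementation is authoritative, so treat this as illustrative:

```python
import numpy as np

def gram_schmidt_walk(V, rng=None):
    """Sketch of the Gram-Schmidt walk (after Bansal-Dadush-Garg-Lovett;
    see the authors' code for the authoritative version). V is an (m, n)
    matrix whose columns have l2 norm at most 1; returns a coloring
    x in {-1, +1}^n with small discrepancy ||V x||."""
    rng = rng or np.random.default_rng()
    _, n = V.shape
    x = np.zeros(n)                            # fractional coloring in [-1, 1]^n
    while True:
        A = np.flatnonzero(np.abs(x) < 1 - 1e-9)   # coordinates not yet frozen
        if A.size == 0:
            break
        p, B = A[-1], A[:-1]                   # pivot = largest alive index
        u = np.zeros(n)
        u[p] = 1.0
        if B.size > 0:                         # least-squares update direction
            u[B] = np.linalg.lstsq(V[:, B], -V[:, p], rcond=None)[0]
        with np.errstate(divide="ignore"):     # maximal steps staying in the cube
            steps = np.concatenate([(1 - x[A]) / u[A], (-1 - x[A]) / u[A]])
        d_plus, d_minus = steps[steps > 0].min(), steps[steps < 0].max()
        # randomize the step so each coordinate is a martingale: E[delta] = 0
        delta = d_plus if rng.random() < -d_minus / (d_plus - d_minus) else d_minus
        x = np.clip(x + delta * u, -1.0, 1.0)
    return np.sign(x)

rng = np.random.default_rng(0)
V = rng.standard_normal((20, 50))
V /= np.linalg.norm(V, axis=0)                 # columns of unit l2 norm
print(np.abs(V @ gram_schmidt_walk(V, rng)).max())
```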

Data Structures and Algorithms Discrete Mathematics

Near-optimal linear decision trees for k-SUM and related problems

no code implementations · 4 May 2017 · Daniel M. Kane, Shachar Lovett, Shay Moran

We construct near-optimal linear decision trees for a variety of decision problems in combinatorics and discrete geometry.
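
In this model each node of the tree branches only on the sign of a linear form in the input. A toy brute-force version for k-SUM (my sketch, nothing like the paper's near-linear-depth construction) makes the query model concrete:

```python
from itertools import combinations

# Toy linear-query brute force for k-SUM (illustrative only; the paper
# builds trees of near-linear depth, while this spends up to C(n, k)
# queries). Each query reveals just the sign of a linear form in the
# input: here, the sum over a chosen k-subset of coordinates.

def ksum_with_linear_queries(x, k):
    queries = 0
    def sign_of_sum(idx):
        nonlocal queries
        queries += 1
        s = sum(x[i] for i in idx)
        return (s > 0) - (s < 0)
    found = any(sign_of_sum(idx) == 0
                for idx in combinations(range(len(x)), k))
    return found, queries

print(ksum_with_linear_queries([3, -1, 4, -3, 1], 2))   # (True, 3): 3 + (-3) = 0
```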


Active classification with comparison queries

no code implementations · 11 Apr 2017 · Daniel M. Kane, Shachar Lovett, Shay Moran, Jiapeng Zhang

We identify a combinatorial dimension, called the \emph{inference dimension}, that captures the query complexity when each additional query is determined by $O(1)$ examples (such as comparison queries, each of which is determined by the two compared examples).
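
Paraphrasing the definition (my rendering; see the paper for the exact statement): the inference dimension of a class $H$ is the smallest sample size at which some point's label always comes for free from the answers on the rest.

```latex
% Inference dimension, paraphrased (not verbatim from the paper): the
% smallest $k$ such that for every $h \in H$ and every sample $S$ with
% $|S| = k$, some point's label is determined by the others:
\[
\forall h \in H,\ \forall S \ \text{with} \ |S| = k:\quad
\exists\, x \in S \ \text{such that the query answers on } S \setminus \{x\}
\ \text{determine } h(x).
\]
```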

Active Learning Classification +1
