no code implementations • 22 Mar 2023 • Mark Bun, Marco Gaboardi, Max Hopkins, Russell Impagliazzo, Rex Lei, Toniann Pitassi, Satchit Sivakumar, Jessica Sorrell
In particular, we give sample-efficient algorithmic reductions between perfect generalization, approximate differential privacy, and replicability for a broad class of statistical problems.
13 Feb 2023 • Max Hopkins, Daniel M. Kane, Shachar Lovett, Gaurav Mahajan
We study a foundational variant of Valiant's and Vapnik and Chervonenkis's Probably Approximately Correct (PAC) learning model in which the adversary is restricted to a known family of marginal distributions $\mathscr{P}$.
2 Oct 2022 • Robi Bhattacharjee, Max Hopkins, Akash Kumar, Hantao Yu, Kamalika Chaudhuri
Developing simple, sample-efficient learning algorithms for robust classification is a pressing issue in today's tech-dominated world, yet current theoretical techniques, which require exponential sample complexity and complicated improper learning rules, fall far short of meeting this need.
24 Jan 2022 • Omri Ben-Eliezer, Max Hopkins, Chutong Yang, Hantao Yu
We initiate the study of active learning polynomial threshold functions (PTFs).
8 Nov 2021 • Max Hopkins, Daniel M. Kane, Shachar Lovett, Gaurav Mahajan
The equivalence of realizable and agnostic learnability is a fundamental phenomenon in learning theory.
9 Feb 2021 • Max Hopkins, Daniel Kane, Shachar Lovett, Michal Moshkovitz
The explosive growth of easily accessible unlabeled data has led to growing interest in active learning, a paradigm in which data-hungry learning algorithms adaptively select informative examples in order to lower prohibitively expensive labeling costs.
23 Apr 2020 • Max Hopkins, Daniel M. Kane, Shachar Lovett, Gaurav Mahajan
Given a finite set $X \subset \mathbb{R}^d$ and a binary linear classifier $c: \mathbb{R}^d \to \{0, 1\}$, how many queries of the form $c(x)$ are required to learn the label of every point in $X$?
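In one dimension this question has a clean answer: for a threshold classifier, binary search over the sorted points recovers every label with $O(\log |X|)$ queries rather than $|X|$. A minimal sketch of that special case (the threshold classifier and point set below are illustrative assumptions, not taken from the paper):

```python
def infer_labels(points, query):
    """Infer c(x) for every x in `points` with O(log n) label queries,
    assuming c is a 1-D threshold classifier (0 below a cutoff, 1 at/above it)."""
    pts = sorted(points)
    lo, hi = 0, len(pts)  # invariant: first index with label 1 lies in [lo, hi]
    while lo < hi:
        mid = (lo + hi) // 2
        if query(pts[mid]) == 1:
            hi = mid
        else:
            lo = mid + 1
    # every point at rank >= lo is labeled 1, everything below is labeled 0
    return {x: (1 if i >= lo else 0) for i, x in enumerate(pts)}

# Hypothetical classifier c(x) = 1 iff x >= 3
labels = infer_labels([5, 1, 4, 2, 8], lambda x: 1 if x >= 3 else 0)
```

The same "locate the boundary instead of querying every point" idea is what query-efficient algorithms generalize to higher dimensions.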
15 Jan 2020 • Max Hopkins, Daniel Kane, Shachar Lovett, Gaurav Mahajan
With the explosion of massive, widely available unlabeled data in recent years, finding label- and time-efficient, robust learning algorithms has become ever more important in theory and in practice.
17 Oct 2019 • Sebastian Wagner-Carena, Max Hopkins, Ana Diaz Rivero, Cora Dvorkin
We present a novel technique for Cosmic Microwave Background (CMB) foreground subtraction based on the framework of blind source separation.
NeurIPS 2020 • Max Hopkins, Daniel M. Kane, Shachar Lovett
While previous results show that active learning performs no better than its supervised alternative for important concept classes such as linear separators, we show that by adding weak distributional assumptions and allowing comparison queries, active learning requires exponentially fewer samples.
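One way to see the savings: a comparison query that reveals which of two points is closer to the decision boundary lets the learner sort all points by margin, after which a single binary search with label queries labels everything. A minimal sketch for a halfspace (the halfspace `w`, `b` and the point set are illustrative assumptions, not the paper's construction):

```python
from functools import cmp_to_key

import numpy as np

def label_all(points, label_query, comparison_query):
    """Label every point: comparison queries order points by w.x,
    then O(log n) label queries locate the decision boundary."""
    order = sorted(
        range(len(points)),
        key=cmp_to_key(lambda i, j: -1 if comparison_query(points[i], points[j]) else 1),
    )
    lo, hi = 0, len(points)  # first rank with label 1 lies in [lo, hi]
    while lo < hi:
        mid = (lo + hi) // 2
        if label_query(points[order[mid]]) == 1:
            hi = mid
        else:
            lo = mid + 1
    labels = [0] * len(points)
    for rank, idx in enumerate(order):
        labels[idx] = 1 if rank >= lo else 0
    return labels

# Hypothetical hidden halfspace c(x) = 1 iff w.x >= b
w, b = np.array([2.0, -1.0]), 0.5
pts = [np.array(p) for p in [(1, 1), (0, 2), (3, 0), (1, -1)]]
lbls = label_all(
    pts,
    lambda x: 1 if w @ x >= b else 0,          # label query
    lambda x1, x2: bool(w @ x1 < w @ x2),      # comparison query
)
```

Without comparisons, labeling n points in the worst case costs Θ(n) label queries; with them, label queries drop to O(log n), illustrating the exponential gap.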
3 Sep 2017 • Max Hopkins, Michael Mitzenmacher, Sebastian Wagner-Carena
JPEG is one of the most widely used image formats, but in some ways remains surprisingly unoptimized, perhaps because some natural optimizations would go outside the standard that defines JPEG.