Search Results for author: Haim Kaplan

Found 29 papers, 0 papers with code

Learning-Augmented Algorithms with Explicit Predictors

no code implementations · 12 Mar 2024 · Marek Elias, Haim Kaplan, Yishay Mansour, Shay Moran

Recent advances in algorithmic design show how to utilize predictions obtained by machine learning models from past and present data.

Scheduling

On Differentially Private Online Predictions

no code implementations · 27 Feb 2023 · Haim Kaplan, Yishay Mansour, Shay Moran, Kobbi Nissim, Uri Stemmer

In this work we introduce an interactive variant of joint differential privacy towards handling online processes in which existing privacy definitions seem too restrictive.

Concurrent Shuffle Differential Privacy Under Continual Observation

no code implementations · 29 Jan 2023 · Jay Tenenbaum, Haim Kaplan, Yishay Mansour, Uri Stemmer

…the counter problem) and show that the concurrent shuffle model allows for significantly improved error compared to a standard (single) shuffle model.


Monotone Learning

no code implementations · 10 Feb 2022 · Olivier Bousquet, Amit Daniely, Haim Kaplan, Yishay Mansour, Shay Moran, Uri Stemmer

Our transformation readily implies monotone learners in a variety of contexts: for example it extends Pestov's result to classification tasks with an arbitrary number of labels.

Binary Classification · Classification +1

FriendlyCore: Practical Differentially Private Aggregation

no code implementations · 19 Oct 2021 · Eliad Tsfadia, Edith Cohen, Haim Kaplan, Yishay Mansour, Uri Stemmer

Differentially private algorithms for common metric aggregation tasks, such as clustering or averaging, often have limited practicality due to their complexity or to the large number of data points that is required for accurate results.

Clustering

Differentially Private Approximate Quantiles

no code implementations · 11 Oct 2021 · Haim Kaplan, Shachar Schnapp, Uri Stemmer

In this work we study the problem of differentially private (DP) quantiles: given a dataset $X$ and quantiles $q_1, \ldots, q_m \in [0, 1]$, we want to output $m$ quantile estimates that are as close as possible to the true quantiles while preserving DP.
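To make the task concrete, here is a minimal sketch of one standard approach for a single DP quantile: the exponential mechanism over the gaps of the sorted data, scored by rank distance to the target rank. The function name and the clipping to a bounded range `[lo, hi]` are illustrative assumptions, not the algorithm from this paper.

```python
import math
import random

def dp_quantile(data, q, eps, lo, hi):
    """Exponential-mechanism sketch for a single DP quantile.

    Clips the data to [lo, hi], scores each gap between consecutive
    sorted points by how close its rank is to the target rank q*n,
    samples a gap with probability proportional to
    gap_length * exp(eps * score / 2), and returns a uniform point in it.
    """
    n = len(data)
    xs = sorted(min(max(v, lo), hi) for v in data)
    xs = [lo] + xs + [hi]
    target = q * n
    weights = []
    for i in range(len(xs) - 1):
        gap = xs[i + 1] - xs[i]
        score = -abs(i - target)  # utility: distance of rank i from target rank
        weights.append(gap * math.exp(eps * score / 2.0))
    # Sample a gap proportionally to its weight, then a uniform point in it.
    r = random.random() * sum(weights)
    for i, w in enumerate(weights):
        r -= w
        if r <= 0:
            return random.uniform(xs[i], xs[i + 1])
    return hi
```

With a large privacy budget the output concentrates near the true quantile; as `eps` shrinks, the distribution over gaps flattens and the answer becomes noisier.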

Differentially Private Multi-Armed Bandits in the Shuffle Model

no code implementations · NeurIPS 2021 · Jay Tenenbaum, Haim Kaplan, Yishay Mansour, Uri Stemmer

We give an $(\varepsilon,\delta)$-differentially private algorithm for the multi-armed bandit (MAB) problem in the shuffle model with a distribution-dependent regret of $O\left(\left(\sum_{a\in [k]:\Delta_a>0}\frac{\log T}{\Delta_a}\right)+\frac{k\sqrt{\log\frac{1}{\delta}}\log T}{\varepsilon}\right)$, and a distribution-independent regret of $O\left(\sqrt{kT\log T}+\frac{k\sqrt{\log\frac{1}{\delta}}\log T}{\varepsilon}\right)$, where $T$ is the number of rounds, $\Delta_a$ is the suboptimality gap of the arm $a$, and $k$ is the total number of arms.

Multi-Armed Bandits
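The two regret bounds above are easy to evaluate numerically. The helper below (an illustrative name, not from the paper) computes their leading terms with the big-O constants suppressed, so the outputs are scalings rather than exact regrets.

```python
import math

def shuffle_mab_regret_bounds(gaps, T, k, eps, delta):
    """Leading terms of the stated regret bounds (big-O constants dropped).

    gaps: suboptimality gaps Delta_a (zeros, i.e. optimal arms, are skipped)
    T: rounds, k: arms, (eps, delta): privacy parameters.
    """
    # Privacy overhead term shared by both bounds.
    private_term = k * math.sqrt(math.log(1.0 / delta)) * math.log(T) / eps
    dist_dependent = sum(math.log(T) / d for d in gaps if d > 0) + private_term
    dist_independent = math.sqrt(k * T * math.log(T)) + private_term
    return dist_dependent, dist_independent
```

Plugging in growing `T` shows the distribution-dependent bound scaling logarithmically while the distribution-independent one scales as roughly the square root of `T`.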

Online Markov Decision Processes with Aggregate Bandit Feedback

no code implementations · 31 Jan 2021 · Alon Cohen, Haim Kaplan, Tomer Koren, Yishay Mansour

We study a novel variant of online finite-horizon Markov Decision Processes with adversarially changing loss functions and initially unknown dynamics.

Separating Adaptive Streaming from Oblivious Streaming

no code implementations · 26 Jan 2021 · Haim Kaplan, Yishay Mansour, Kobbi Nissim, Uri Stemmer

We present a streaming problem for which every adversarially-robust streaming algorithm must use polynomial space, while there exists a classical (oblivious) streaming algorithm that uses only polylogarithmic space.

Data Structures and Algorithms

Locality Sensitive Hashing for Efficient Similar Polygon Retrieval

no code implementations · 12 Jan 2021 · Haim Kaplan, Jay Tenenbaum

Find the vertical translation of a function $ f $ that is closest in $ L_1 $ distance to a function $ g $.

Retrieval · Translation
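The snippet above states a clean subproblem: shift $f$ vertically to minimize its $L_1$ distance to $g$. For functions represented by samples on a common grid, the minimizer is a median of the pointwise differences, since the objective is a sum of absolute deviations in the shift. A sketch under that sampled-representation assumption (the paper works with general functions, which this does not capture):

```python
import statistics

def best_vertical_shift(f_vals, g_vals):
    """Return a shift c minimizing sum_i |f_i + c - g_i|.

    The objective is a sum of absolute deviations in c, so any median
    of the differences g_i - f_i is a minimizer.
    """
    return statistics.median(g - f for g, f in zip(g_vals, f_vals))
```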

The Sparse Vector Technique, Revisited

no code implementations · 2 Oct 2020 · Haim Kaplan, Yishay Mansour, Uri Stemmer

This simple algorithm privately tests whether the value of a given query on a database is close to what we expect it to be.
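That test is easy to sketch. Below is the classic AboveThreshold variant of the sparse vector technique (the textbook baseline, not necessarily the revisited algorithm of this paper): compare noisy query answers against a noisy threshold and stop after the first positive report. The noise scales assume 1-sensitive queries.

```python
import random

def above_threshold(queries, database, threshold, eps):
    """AboveThreshold sketch: report, for each query in the stream, whether
    its noisy answer exceeds a noisy threshold; halt after the first 'above'.
    """
    # Laplace(2/eps) noise on the threshold, sampled as a difference of
    # two exponentials with rate eps/2.
    noisy_t = threshold + random.expovariate(eps / 2) - random.expovariate(eps / 2)
    answers = []
    for q in queries:
        # Laplace(4/eps) noise on each query answer.
        nu = random.expovariate(eps / 4) - random.expovariate(eps / 4)
        if q(database) + nu >= noisy_t:
            answers.append(True)
            break  # the privacy budget covers only one positive answer
        answers.append(False)
    return answers
```

The key point, which the paper revisits, is that the budget is charged only for the (single) positive answer, not for the arbitrarily many negative ones.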

Private Learning of Halfspaces: Simplifying the Construction and Reducing the Sample Complexity

no code implementations · NeurIPS 2020 · Haim Kaplan, Yishay Mansour, Uri Stemmer, Eliad Tsfadia

We present a differentially private learner for halfspaces over a finite grid $G$ in $\mathbb{R}^d$ with sample complexity $\approx d^{2.5}\cdot 2^{\log^*|G|}$, which improves the state-of-the-art result of [Beimel et al., COLT 2019] by a $d^2$ factor.
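The $2^{\log^*|G|}$ factor uses the iterated logarithm $\log^*$, which counts how many times $\log_2$ must be applied before the value drops to at most $1$, and which therefore grows extraordinarily slowly. A small helper (illustrative, not from the paper) makes this concrete:

```python
import math

def log_star(n):
    """Iterated logarithm: how many times log2 must be applied to n
    before the value is at most 1."""
    count = 0
    while n > 1:
        n = math.log2(n)
        count += 1
    return count
```

Even for a grid with $2^{65536}$ points, $\log^*$ is only 5.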

Locality Sensitive Hashing for Set-Queries, Motivated by Group Recommendations

no code implementations · 15 Apr 2020 · Haim Kaplan, Jay Tenenbaum

For example, we can take $ s(p, x) $ to be the angular similarity between $ p $ and $ x $ (i.e., $1-{\angle (x, p)}/{\pi}$), and aggregate by arithmetic or geometric averaging, or taking the lowest similarity.

Recommendation Systems
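The similarity and aggregation rules from the snippet are straightforward to write down. A small sketch (names are illustrative) for vectors in $\mathbb{R}^d$:

```python
import math

def angular_similarity(p, x):
    """1 - angle(p, x)/pi, as in the snippet above."""
    dot = sum(a * b for a, b in zip(p, x))
    norm = math.sqrt(sum(a * a for a in p)) * math.sqrt(sum(b * b for b in x))
    cos = max(-1.0, min(1.0, dot / norm))  # clamp for float safety
    return 1.0 - math.acos(cos) / math.pi

def aggregate(p, points, mode="min"):
    """Aggregate p's similarity to a group of points by arithmetic mean,
    geometric mean, or the lowest similarity (the default)."""
    sims = [angular_similarity(p, x) for x in points]
    if mode == "arithmetic":
        return sum(sims) / len(sims)
    if mode == "geometric":
        return math.prod(sims) ** (1.0 / len(sims))
    return min(sims)
```

Orthogonal vectors get similarity 0.5 and identical directions get 1, so aggregated group scores stay in a comparable range.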

Adversarially Robust Streaming Algorithms via Differential Privacy

no code implementations · NeurIPS 2020 · Avinatan Hassidim, Haim Kaplan, Yishay Mansour, Yossi Matias, Uri Stemmer

A streaming algorithm is said to be adversarially robust if its accuracy guarantees are maintained even when the data stream is chosen maliciously, by an adaptive adversary.

Adversarial Robustness

How to Find a Point in the Convex Hull Privately

no code implementations · 30 Mar 2020 · Haim Kaplan, Micha Sharir, Uri Stemmer

We study the question of how to compute a point in the convex hull of an input set $S$ of $n$ points in ${\mathbb R}^d$ in a differentially private manner.

Position

Near-optimal Regret Bounds for Stochastic Shortest Path

no code implementations · ICML 2020 · Alon Cohen, Haim Kaplan, Yishay Mansour, Aviv Rosenberg

In this work we remove this dependence on the minimum cost---we give an algorithm that guarantees a regret bound of $\widetilde{O}(B_\star |S| \sqrt{|A| K})$, where $B_\star$ is an upper bound on the expected cost of the optimal policy, $S$ is the set of states, $A$ is the set of actions and $K$ is the number of episodes.

Reinforcement Learning (RL)

Privately Learning Thresholds: Closing the Exponential Gap

no code implementations · 22 Nov 2019 · Haim Kaplan, Katrina Ligett, Yishay Mansour, Moni Naor, Uri Stemmer

This problem has received much attention recently; unlike the non-private case, where the sample complexity is independent of the domain size and depends only on the desired accuracy and confidence, for private learning the sample complexity must depend on the size of the domain $X$ (even for approximate differential privacy).

Apprenticeship Learning via Frank-Wolfe

no code implementations · 5 Nov 2019 · Tom Zahavy, Alon Cohen, Haim Kaplan, Yishay Mansour

Specifically, we show that a variation of the FW method that is based on taking "away steps" achieves a linear rate of convergence when applied to AL and that a stochastic version of the FW algorithm can be used to avoid precise estimation of feature expectations.

Sample Complexity Bounds for Influence Maximization

no code implementations · 31 Jul 2019 · Gal Sadeh, Edith Cohen, Haim Kaplan

Our main result is a surprising upper bound of $O( s \tau \epsilon^{-2} \ln \frac{n}{\delta})$ for a broad class of models that includes IC and LT models and their mixtures, where $n$ is the number of nodes and $\tau$ is the number of diffusion steps.
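The stated bound is simple enough to evaluate directly. The helper below (an illustrative name, not from the paper) computes $c \cdot s\,\tau\,\epsilon^{-2} \ln\frac{n}{\delta}$, where `c` stands in for the unspecified big-O constant, so the value is a scaling rather than an exact sample count.

```python
import math

def influence_sample_bound(s, tau, eps, n, delta, c=1.0):
    """Leading term of the O(s * tau * eps^-2 * ln(n/delta)) sample bound.

    s: seed-set size, tau: diffusion steps, n: nodes,
    (eps, delta): accuracy/confidence, c: placeholder big-O constant.
    """
    return c * s * tau * eps ** -2 * math.log(n / delta)
```

Note the quadratic blow-up in $1/\epsilon$: halving the target error quadruples the bound, while growing the network only adds logarithmically.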

Thompson Sampling for Adversarial Bit Prediction

no code implementations · 21 Jun 2019 · Yuval Lewi, Haim Kaplan, Yishay Mansour

We also bound the regret of those sequences: the worst-case sequences have regret $O(\sqrt{T})$, and the best-case sequences have regret $O(1)$.

Thompson Sampling

Unknown mixing times in apprenticeship and reinforcement learning

no code implementations · 23 May 2019 · Tom Zahavy, Alon Cohen, Haim Kaplan, Yishay Mansour

We derive and analyze learning algorithms for apprenticeship learning, policy evaluation, and policy gradient for average reward criteria.

Reinforcement Learning (RL)

Learning to Screen

no code implementations · NeurIPS 2019 · Alon Cohen, Avinatan Hassidim, Haim Kaplan, Yishay Mansour, Shay Moran

(ii) In the second variant it is assumed that before the process starts, the algorithm has access to a training set of $n$ items drawn independently from the same unknown distribution (e.g.\ data of candidates from previous recruitment seasons).

Differentially Private Learning of Geometric Concepts

no code implementations · 13 Feb 2019 · Haim Kaplan, Yishay Mansour, Yossi Matias, Uri Stemmer

We present differentially private efficient algorithms for learning union of polygons in the plane (which are not necessarily convex).

PAC learning

Clustering Small Samples with Quality Guarantees: Adaptivity with One2all pps

no code implementations · 12 Jun 2017 · Edith Cohen, Shiri Chechik, Haim Kaplan

At the core of our design is the {\em one2all} construction of multi-objective probability-proportional-to-size (pps) samples: Given a set $M$ of centroids and $\alpha \geq 1$, one2all efficiently assigns probabilities to points so that the clustering cost of {\em each} $Q$ with cost $V(Q) \geq V(M)/\alpha$ can be estimated well from a sample of size $O(\alpha |M|\epsilon^{-2})$.

Clustering
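The one2all construction itself is specialized, but the underlying notion of pps inclusion probabilities is easy to sketch. The snippet below is generic probability-proportional-to-size sampling, not the paper's multi-objective construction: probabilities are proportional to size, capped at 1, with the threshold chosen so the expected sample size matches a target.

```python
def pps_probabilities(sizes, m):
    """Inclusion probabilities p_i = min(1, size_i / tau), with tau chosen
    by bisection so that sum(p_i) ~= m (the expected sample size)."""
    lo, hi = 0.0, max(sizes) * len(sizes)
    for _ in range(100):  # bisection on the threshold tau
        tau = (lo + hi) / 2
        total = sum(min(1.0, s / tau) for s in sizes)
        if total > m:      # tau too small: expected sample too large
            lo = tau
        else:
            hi = tau
    return [min(1.0, s / tau) for s in sizes]
```

Items larger than the threshold are included with probability 1, which is what keeps pps estimates well-concentrated when a few items dominate the total size.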

Average Distance Queries through Weighted Samples in Graphs and Metric Spaces: High Scalability with Tight Statistical Guarantees

no code implementations · 30 Mar 2015 · Shiri Chechik, Edith Cohen, Haim Kaplan

The estimate is based on a weighted sample of $O(\epsilon^{-2})$ pairs of points, which is computed using $O(n)$ distance computations.
