no code implementations • 12 Mar 2024 • Marek Elias, Haim Kaplan, Yishay Mansour, Shay Moran
Recent advances in algorithmic design show how to utilize predictions obtained by machine learning models from past and present data.
no code implementations • 27 Feb 2023 • Haim Kaplan, Yishay Mansour, Shay Moran, Kobbi Nissim, Uri Stemmer
In this work we introduce an interactive variant of joint differential privacy towards handling online processes in which existing privacy definitions seem too restrictive.
no code implementations • 29 Jan 2023 • Jay Tenenbaum, Haim Kaplan, Yishay Mansour, Uri Stemmer
We study the counter problem and show that the concurrent shuffle model allows for significantly improved error compared to a standard (single) shuffle model.
no code implementations • 8 Dec 2022 • Olivier Bousquet, Haim Kaplan, Aryeh Kontorovich, Yishay Mansour, Shay Moran, Menachem Sadigurschi, Uri Stemmer
We construct a universally Bayes consistent learning rule that satisfies differential privacy (DP).
no code implementations • 10 Feb 2022 • Olivier Bousquet, Amit Daniely, Haim Kaplan, Yishay Mansour, Shay Moran, Uri Stemmer
Our transformation readily implies monotone learners in a variety of contexts: for example, it extends Pestov's result to classification tasks with an arbitrary number of labels.
no code implementations • 29 Dec 2021 • Edith Cohen, Haim Kaplan, Yishay Mansour, Uri Stemmer, Eliad Tsfadia
Clustering is a fundamental problem in data analysis.
no code implementations • 19 Oct 2021 • Eliad Tsfadia, Edith Cohen, Haim Kaplan, Yishay Mansour, Uri Stemmer
Differentially private algorithms for common metric aggregation tasks, such as clustering or averaging, often have limited practicality due to their complexity or to the large number of data points that is required for accurate results.
no code implementations • 11 Oct 2021 • Haim Kaplan, Shachar Schnapp, Uri Stemmer
In this work we study the problem of differentially private (DP) quantiles, in which, given a dataset $X$ and quantiles $q_1, \ldots, q_m \in [0, 1]$, we want to output $m$ quantile estimates that are as close as possible to the true quantiles while preserving DP.
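As a concrete illustration, a single DP quantile can be released with the exponential mechanism over the intervals between sorted data points. This is one standard approach to the problem, sketched under assumed bounds $[lo, hi]$ on the data; the function name and details are illustrative, not necessarily the paper's algorithm.

```python
import numpy as np

def dp_quantile(x, q, eps, lo, hi, rng=None):
    """Sketch: single eps-DP quantile via the exponential mechanism.

    Assumes the data lies in [lo, hi] and (for the sampling step) that
    the points are distinct; `q` is the desired quantile in [0, 1].
    """
    rng = np.random.default_rng() if rng is None else rng
    xs = np.clip(np.sort(np.asarray(x, dtype=float)), lo, hi)
    n = xs.size
    pts = np.concatenate(([lo], xs, [hi]))     # endpoints of n + 1 intervals
    # Utility of any output in interval k: minus its rank error (sensitivity 1).
    utility = -np.abs(np.arange(n + 1) - q * n)
    lengths = pts[1:] - pts[:-1]
    # Weight = interval length * exp(eps * utility / 2), stabilized by the max.
    w = lengths * np.exp(eps * (utility - utility.max()) / 2.0)
    i = rng.choice(n + 1, p=w / w.sum())       # sample an interval
    return float(rng.uniform(pts[i], pts[i + 1]))  # uniform point inside it
```

With a large privacy budget the output concentrates near the empirical quantile; smaller budgets trade accuracy for privacy.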
no code implementations • NeurIPS 2021 • Jay Tenenbaum, Haim Kaplan, Yishay Mansour, Uri Stemmer
We give an $(\varepsilon,\delta)$-differentially private algorithm for the multi-armed bandit (MAB) problem in the shuffle model with a distribution-dependent regret of $O\left(\left(\sum_{a\in [k]:\Delta_a>0}\frac{\log T}{\Delta_a}\right)+\frac{k\sqrt{\log\frac{1}{\delta}}\log T}{\varepsilon}\right)$, and a distribution-independent regret of $O\left(\sqrt{kT\log T}+\frac{k\sqrt{\log\frac{1}{\delta}}\log T}{\varepsilon}\right)$, where $T$ is the number of rounds, $\Delta_a$ is the suboptimality gap of the arm $a$, and $k$ is the total number of arms.
no code implementations • 31 Jan 2021 • Alon Cohen, Haim Kaplan, Tomer Koren, Yishay Mansour
We study a novel variant of online finite-horizon Markov Decision Processes with adversarially changing loss functions and initially unknown dynamics.
no code implementations • 26 Jan 2021 • Haim Kaplan, Yishay Mansour, Kobbi Nissim, Uri Stemmer
We present a streaming problem for which every adversarially-robust streaming algorithm must use polynomial space, while there exists a classical (oblivious) streaming algorithm that uses only polylogarithmic space.
Data Structures and Algorithms
no code implementations • 12 Jan 2021 • Haim Kaplan, Jay Tenenbaum
Find the vertical translation of a function $f$ that is closest in $L_1$ distance to a function $g$.
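A discretized sketch of this objective: on shared sample points, the shift $t$ minimizing $\sum_i |(f(x_i) + t) - g(x_i)|$ is the median of the pointwise differences $g - f$, a standard fact about the $L_1$ loss. The paper works with functions rather than point samples, so this is an illustration of the objective, not its algorithm.

```python
import numpy as np

def best_vertical_shift(f_vals, g_vals):
    """L1-optimal vertical translation of f toward g on shared sample points.

    sum_i |(f(x_i) + t) - g(x_i)| is minimized by t = median(g - f).
    """
    return float(np.median(np.asarray(g_vals, dtype=float)
                           - np.asarray(f_vals, dtype=float)))
```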
no code implementations • 2 Oct 2020 • Haim Kaplan, Yishay Mansour, Uri Stemmer
This simple algorithm privately tests whether the value of a given query on a database is close to what we expect it to be.
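The basic DP primitive such a test can be built from is a Laplace-noised comparison; the sketch below is that primitive only, with hypothetical names and parameters, and is not claimed to be the paper's algorithm.

```python
import numpy as np

def noisy_close_test(query_val, expected, threshold, eps, sensitivity=1.0,
                     rng=None):
    """Sketch: eps-DP check that a query value is near its expectation.

    Adds Laplace(sensitivity / eps) noise to the query answer and reports
    whether the noisy answer lies within `threshold` of `expected`.
    """
    rng = np.random.default_rng() if rng is None else rng
    noisy = query_val + rng.laplace(scale=sensitivity / eps)
    return bool(abs(noisy - expected) <= threshold)
```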
no code implementations • NeurIPS 2020 • Haim Kaplan, Yishay Mansour, Uri Stemmer, Eliad Tsfadia
We present a differentially private learner for halfspaces over a finite grid $G$ in $\mathbb{R}^d$ with sample complexity $\approx d^{2.5}\cdot 2^{\log^*|G|}$, which improves the state-of-the-art result of [Beimel et al., COLT 2019] by a $d^2$ factor.
no code implementations • 15 Apr 2020 • Haim Kaplan, Jay Tenenbaum
For example, we can take $s(p, x)$ to be the angular similarity between $p$ and $x$ (i.e., $1-{\angle (x, p)}/{\pi}$), and aggregate by arithmetic or geometric averaging, or taking the lowest similarity.
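The angular similarity and the three aggregation rules mentioned can be sketched directly; function names here are illustrative, and the geometric mean assumes all similarities are positive.

```python
import numpy as np

def angular_similarity(p, x):
    # 1 - angle(p, x)/pi; clip the cosine to dodge rounding outside [-1, 1]
    cos = np.dot(p, x) / (np.linalg.norm(p) * np.linalg.norm(x))
    return 1.0 - float(np.arccos(np.clip(cos, -1.0, 1.0))) / np.pi

def aggregate_similarity(p, X, how="arithmetic"):
    s = np.array([angular_similarity(p, x) for x in X])
    if how == "arithmetic":
        return float(s.mean())
    if how == "geometric":          # assumes all similarities are positive
        return float(np.exp(np.log(s).mean()))
    return float(s.min())           # "lowest similarity" aggregation
```

For instance, orthogonal vectors get similarity $1 - (\pi/2)/\pi = 0.5$, and parallel vectors get similarity $1$.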
no code implementations • NeurIPS 2020 • Avinatan Hassidim, Haim Kaplan, Yishay Mansour, Yossi Matias, Uri Stemmer
A streaming algorithm is said to be adversarially robust if its accuracy guarantees are maintained even when the data stream is chosen maliciously, by an adaptive adversary.
no code implementations • 30 Mar 2020 • Haim Kaplan, Micha Sharir, Uri Stemmer
We study the question of how to compute a point in the convex hull of an input set $S$ of $n$ points in ${\mathbb R}^d$ in a differentially private manner.
no code implementations • ICML 2020 • Alon Cohen, Haim Kaplan, Yishay Mansour, Aviv Rosenberg
In this work we remove this dependence on the minimum cost: we give an algorithm that guarantees a regret bound of $\widetilde{O}(B_\star |S| \sqrt{|A| K})$, where $B_\star$ is an upper bound on the expected cost of the optimal policy, $S$ is the set of states, $A$ is the set of actions and $K$ is the number of episodes.
no code implementations • 22 Nov 2019 • Haim Kaplan, Katrina Ligett, Yishay Mansour, Moni Naor, Uri Stemmer
This problem has received much attention recently; unlike the non-private case, where the sample complexity is independent of the domain size and depends only on the desired accuracy and confidence, for private learning the sample complexity must depend on the size of the domain $X$ (even for approximate differential privacy).
no code implementations • 5 Nov 2019 • Tom Zahavy, Alon Cohen, Haim Kaplan, Yishay Mansour
Specifically, we show that a variation of the FW method that is based on taking "away steps" achieves a linear rate of convergence when applied to AL and that a stochastic version of the FW algorithm can be used to avoid precise estimation of feature expectations.
no code implementations • 31 Jul 2019 • Gal Sadeh, Edith Cohen, Haim Kaplan
Our main result is a surprising upper bound of $O( s \tau \epsilon^{-2} \ln \frac{n}{\delta})$ for a broad class of models that includes IC and LT models and their mixtures, where $n$ is the number of nodes and $\tau$ is the number of diffusion steps.
no code implementations • 21 Jun 2019 • Yuval Lewi, Haim Kaplan, Yishay Mansour
We also bound the regret of those sequences: the worst-case sequences have regret $O(\sqrt{T})$ and the best-case sequences have regret $O(1)$.
no code implementations • 23 May 2019 • Tom Zahavy, Alon Cohen, Haim Kaplan, Yishay Mansour
We derive and analyze learning algorithms for apprenticeship learning, policy evaluation, and policy gradient for average reward criteria.
no code implementations • 26 Feb 2019 • Tom Zahavy, Avinatan Hasidim, Haim Kaplan, Yishay Mansour
We consider a setting of hierarchical reinforcement learning in which the reward is a sum of components.
Hierarchical Reinforcement Learning • reinforcement-learning
no code implementations • NeurIPS 2019 • Alon Cohen, Avinatan Hassidim, Haim Kaplan, Yishay Mansour, Shay Moran
(ii) In the second variant it is assumed that before the process starts, the algorithm has access to a training set of $n$ items drawn independently from the same unknown distribution (e.g., data of candidates from previous recruitment seasons).
no code implementations • 13 Feb 2019 • Haim Kaplan, Yishay Mansour, Yossi Matias, Uri Stemmer
We present differentially private efficient algorithms for learning union of polygons in the plane (which are not necessarily convex).
no code implementations • 13 Mar 2018 • Tom Zahavy, Avinatan Hasidim, Haim Kaplan, Yishay Mansour
In this work, we provide theoretical guarantees for reward decomposition in deterministic MDPs.
Hierarchical Reinforcement Learning • reinforcement-learning
no code implementations • 12 Jun 2017 • Edith Cohen, Shiri Chechik, Haim Kaplan
At the core of our design is the {\em one2all} construction of multi-objective probability-proportional-to-size (pps) samples: Given a set $M$ of centroids and $\alpha \geq 1$, one2all efficiently assigns probabilities to points so that the clustering cost of {\em each} $Q$ with cost $V(Q) \geq V(M)/\alpha$ can be estimated well from a sample of size $O(\alpha |M|\epsilon^{-2})$.
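The sampling primitive underlying such constructions can be sketched as Poisson pps sampling with Horvitz-Thompson inverse-probability weights. This is the generic primitive only; the size function, the cap at probability 1, and the names are assumptions, not the paper's one2all construction.

```python
import numpy as np

def pps_poisson_sample(sizes, k, rng=None):
    """Poisson pps sample of expected size <= k.

    Include point i independently with probability
    p_i = min(1, k * size_i / total_size); return kept indices and the
    Horvitz-Thompson weights 1 / p_i.
    """
    rng = np.random.default_rng() if rng is None else rng
    sizes = np.asarray(sizes, dtype=float)
    p = np.minimum(1.0, k * sizes / sizes.sum())
    keep = rng.random(sizes.size) < p
    return np.flatnonzero(keep), 1.0 / p[keep]

def ht_estimate(values, idx, weights):
    # Unbiased Horvitz-Thompson estimate of sum(values)
    return float(np.sum(np.asarray(values, dtype=float)[idx] * weights))
```

When every inclusion probability is 1 the estimate is exact; in general it is unbiased, with variance controlled by how well the sizes track the values being summed.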
no code implementations • 30 Mar 2015 • Shiri Chechik, Edith Cohen, Haim Kaplan
The estimate is based on a weighted sample of $O(\epsilon^{-2})$ pairs of points, which is computed using $O(n)$ distance computations.