Search Results for author: Kobbi Nissim

Found 23 papers, 4 papers with code

On Differentially Private Online Predictions

no code implementations27 Feb 2023 Haim Kaplan, Yishay Mansour, Shay Moran, Kobbi Nissim, Uri Stemmer

In this work we introduce an interactive variant of joint differential privacy to handle online processes for which existing privacy definitions seem too restrictive.

The Sample Complexity of Distribution-Free Parity Learning in the Robust Shuffle Model

no code implementations29 Mar 2021 Kobbi Nissim, Chao Yan

We provide a lower bound on the sample complexity of distribution-free parity learning in the realizable case in the shuffle model of differential privacy.

Separating Adaptive Streaming from Oblivious Streaming

no code implementations26 Jan 2021 Haim Kaplan, Yishay Mansour, Kobbi Nissim, Uri Stemmer

We present a streaming problem for which every adversarially-robust streaming algorithm must use polynomial space, while there exists a classical (oblivious) streaming algorithm that uses only polylogarithmic space.

Data Structures and Algorithms

On the Round Complexity of the Shuffle Model

no code implementations28 Sep 2020 Amos Beimel, Iftach Haitner, Kobbi Nissim, Uri Stemmer

Combining this primitive with the two-round semi-honest protocol of Applebaum et al. [TCC 2018], we obtain that every randomized functionality can be computed in the shuffle model with an honest majority, in merely two rounds.

The power of synergy in differential privacy: Combining a small curator with local randomizers

no code implementations18 Dec 2019 Amos Beimel, Aleksandra Korolova, Kobbi Nissim, Or Sheffet, Uri Stemmer

Motivated by the desire to bridge the utility gap between local and trusted curator models of differential privacy for practical applications, we initiate the theoretical study of a hybrid model introduced by "Blender" [Avent et al.,\ USENIX Security '17], in which differentially private protocols of n agents that work in the local-model are assisted by a differentially private curator that has access to the data of m additional users.

Differentially Private Summation with Multi-Message Shuffling

1 code implementation20 Jun 2019 Borja Balle, James Bell, Adria Gascon, Kobbi Nissim

In recent work, Cheu et al. (Eurocrypt 2019) proposed a protocol for $n$-party real summation in the shuffle model of differential privacy with $O_{\epsilon, \delta}(1)$ error and $\Theta(\epsilon\sqrt{n})$ one-bit messages per party.
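The flavor of such shuffle-model summation protocols can be conveyed with a toy sketch (not the Cheu et al. protocol itself): each party applies a one-bit local randomizer that with some probability `gamma` emits a uniformly random "blanket" bit, the shuffler forwards all bits in random order with identities stripped, and the analyzer debiases the sum. All names and parameters here are illustrative.

```python
import random

def local_randomizer(x, gamma, rng):
    """One-bit local randomizer: with probability gamma emit a uniform
    noise bit, otherwise a Bernoulli(x) encoding of the value x in [0, 1]."""
    if rng.random() < gamma:
        return rng.randint(0, 1)
    return 1 if rng.random() < x else 0

def shuffle(messages, rng):
    # The shuffler forwards messages in random order, stripping identities.
    out = list(messages)
    rng.shuffle(out)
    return out

def estimate_sum(bits, n, gamma):
    # Debias: E[sum(bits)] = (1 - gamma) * true_sum + gamma * n / 2.
    return (sum(bits) - gamma * n / 2.0) / (1.0 - gamma)

rng = random.Random(0)
xs = [rng.random() for _ in range(20000)]
bits = shuffle([local_randomizer(x, 0.2, rng) for x in xs], rng)
est = estimate_sum(bits, len(xs), 0.2)
```

The estimator is unbiased, and the random blanket bits are what drive the privacy amplification that the shuffle model provides over the purely local model.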

The Privacy Blanket of the Shuffle Model

1 code implementation7 Mar 2019 Borja Balle, James Bell, Adria Gascon, Kobbi Nissim

Additionally, Erlingsson et al. (SODA 2019) provide a privacy amplification bound quantifying the level of curator differential privacy achieved by the shuffle model in terms of the local differential privacy of the randomizer used by each user.

Private Center Points and Learning of Halfspaces

no code implementations27 Feb 2019 Amos Beimel, Shay Moran, Kobbi Nissim, Uri Stemmer

The building block for this learner is a differentially private algorithm for locating an approximate center point of $m>\mathrm{poly}(d, 2^{\log^*|X|})$ points -- a high dimensional generalization of the median function.

The Limits of Post-Selection Generalization

no code implementations NeurIPS 2018 Kobbi Nissim, Adam Smith, Thomas Steinke, Uri Stemmer, Jonathan Ullman

While statistics and machine learning offer numerous methods for ensuring generalization, these methods often fail in the presence of adaptivity: the common practice in which the choice of analysis depends on previous interactions with the same dataset.

$\mathcal{E}\text{psolute}$: Efficiently Querying Databases While Providing Differential Privacy

1 code implementation5 Jun 2017 Georgios Kellaris, George Kollios, Kobbi Nissim, Adam O'Neill

In this work we present a model for a differentially private outsourced database system and a concrete construction, $\mathcal{E}\text{psolute}$, that provably conceals the aforementioned leakages while remaining efficient and scalable.

Cryptography and Security

Concentration Bounds for High Sensitivity Functions Through Differential Privacy

no code implementations6 Mar 2017 Kobbi Nissim, Uri Stemmer

We show that differential privacy can be used to prove concentration bounds for such functions in the non-adaptive setting.

Private Incremental Regression

no code implementations4 Jan 2017 Shiva Prasad Kasiviswanathan, Kobbi Nissim, Hongxia Jin

Our first contribution is a generic transformation of private batch ERM mechanisms into private incremental ERM mechanisms, based on a simple idea of invoking the private batch ERM procedure at some regular time intervals.
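The transformation can be sketched in a few lines, with a placeholder private batch mechanism (here a Laplace-noised mean of values in [0, 1]) standing in for a real private ERM solver; the class and parameter names are hypothetical, and a full analysis would also split the privacy budget across invocations via composition.

```python
import math
import random

def private_batch_mean(data, epsilon, rng):
    """Stand-in for a private batch ERM mechanism: the Laplace-noised mean
    of values in [0, 1] (the mean has sensitivity 1/len(data))."""
    true_mean = sum(data) / len(data)
    u = rng.random() - 0.5  # inverse-CDF Laplace sample
    noise = -(1.0 / (len(data) * epsilon)) * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))
    return true_mean + noise

class PrivateIncrementalMechanism:
    """Wrap a private batch mechanism into an incremental one by re-invoking
    it on all data received so far at regular time intervals."""
    def __init__(self, batch_mechanism, interval, epsilon, rng):
        self.batch, self.interval = batch_mechanism, interval
        self.epsilon, self.rng = epsilon, rng
        self.data, self.model = [], None

    def receive(self, x):
        self.data.append(x)
        if len(self.data) % self.interval == 0:
            # Each invocation spends privacy budget; composition bounds
            # the total cost over all checkpoints.
            self.model = self.batch(self.data, self.epsilon, self.rng)
        return self.model

rng = random.Random(1)
mech = PrivateIncrementalMechanism(private_batch_mean, interval=500, epsilon=1.0, rng=rng)
outputs = [mech.receive(rng.random()) for _ in range(2000)]
```

Between checkpoints the mechanism simply replays its last released model, so no additional privacy is consumed on those steps.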

Regression

PSI (Ψ): a Private data Sharing Interface

3 code implementations14 Sep 2016 Marco Gaboardi, James Honaker, Gary King, Jack Murtagh, Kobbi Nissim, Jonathan Ullman, Salil Vadhan

We provide an overview of PSI ("a Private data Sharing Interface"), a system we are developing to enable researchers in the social sciences and other fields to share and explore privacy-sensitive datasets with the strong privacy protections of differential privacy.

Cryptography and Security Computers and Society Methodology

Locating a Small Cluster Privately

no code implementations19 Apr 2016 Kobbi Nissim, Uri Stemmer, Salil Vadhan

We present a new algorithm for locating a small cluster of points with differential privacy [Dwork, McSherry, Nissim, and Smith, 2006].

Clustering

Adaptive Learning with Robust Generalization Guarantees

no code implementations24 Feb 2016 Rachel Cummings, Katrina Ligett, Kobbi Nissim, Aaron Roth, Zhiwei Steven Wu

We also show that perfect generalization is a strictly stronger guarantee than differential privacy, but that, nevertheless, many learning tasks can be carried out subject to the guarantees of perfect generalization.

Simultaneous Private Learning of Multiple Concepts

no code implementations27 Nov 2015 Mark Bun, Kobbi Nissim, Uri Stemmer

We investigate the direct-sum problem in the context of differentially private PAC learning: What is the sample complexity of solving $k$ learning tasks simultaneously under differential privacy, and how does this cost compare to that of solving $k$ learning tasks without privacy?

PAC learning

Algorithmic Stability for Adaptive Data Analysis

no code implementations8 Nov 2015 Raef Bassily, Kobbi Nissim, Adam Smith, Thomas Steinke, Uri Stemmer, Jonathan Ullman

Specifically, suppose there is an unknown distribution $\mathbf{P}$ and a set of $n$ independent samples $\mathbf{x}$ is drawn from $\mathbf{P}$.
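This setting can be made concrete with a small sketch of a stability-based mechanism for answering statistical queries $q : X \to [0, 1]$ on the sample with Laplace noise. This is the standard noise-addition approach rather than the paper's exact construction, and the parameters below are illustrative.

```python
import math
import random

def laplace(scale, rng):
    # Inverse-CDF sampling of a Laplace(0, scale) random variable.
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def answer_statistical_query(sample, q, epsilon, rng):
    """Answer q: X -> [0, 1] on the sample with Laplace noise of scale
    1/(epsilon * n); changing one sample point moves the empirical mean
    by at most 1/n, so this release is epsilon-differentially private."""
    n = len(sample)
    empirical = sum(q(x) for x in sample) / n
    return empirical + laplace(1.0 / (epsilon * n), rng)

rng = random.Random(2)
sample = [rng.random() for _ in range(5000)]  # stand-in for n draws from P
ans = answer_statistical_query(sample, lambda x: float(x > 0.5), epsilon=1.0, rng=rng)
```

The point of the stability connection is that because each answer is differentially private, it generalizes to $\mathbf{P}$ even when the analyst chooses the next query adaptively based on previous answers.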

Differentially Private Release and Learning of Threshold Functions

no code implementations28 Apr 2015 Mark Bun, Kobbi Nissim, Uri Stemmer, Salil Vadhan

Our sample complexity upper and lower bounds also apply to the tasks of learning distributions with respect to Kolmogorov distance and of properly PAC learning thresholds with differential privacy.

PAC learning

On the Generalization Properties of Differential Privacy

no code implementations22 Apr 2015 Kobbi Nissim, Uri Stemmer

Very recently, Bassily et al. presented an improved bound and showed that (a variant of) the private multiplicative weights algorithm can answer $k$ adaptively chosen statistical queries using sample complexity that grows logarithmically in $k$.

Private Learning and Sanitization: Pure vs. Approximate Differential Privacy

no code implementations10 Jul 2014 Amos Beimel, Kobbi Nissim, Uri Stemmer

We show that the sample complexity of these tasks under approximate differential privacy can be significantly lower than that under pure differential privacy.

Learning Privately with Labeled and Unlabeled Examples

no code implementations10 Jul 2014 Amos Beimel, Kobbi Nissim, Uri Stemmer

Kasiviswanathan et al. (FOCS 2008) gave a generic construction of private learners in which the sample complexity is (generally) higher than what is needed for non-private learners.

Active Learning

Characterizing the Sample Complexity of Private Learners

no code implementations10 Feb 2014 Amos Beimel, Kobbi Nissim, Uri Stemmer

Kasiviswanathan et al. gave a generic construction of private learners for (finite) concept classes, with sample complexity logarithmic in the size of the concept class.
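The idea behind this generic construction is the exponential mechanism over the finite concept class: score each concept by its empirical error on the labeled sample and output one with probability proportional to $\exp(-\epsilon \cdot \text{mistakes}/2)$. Below is a minimal sketch under that reading; the toy threshold class and all parameter choices are illustrative.

```python
import math
import random

def private_learn(concepts, data, epsilon, rng):
    """Exponential mechanism over a finite concept class: sample a concept
    with probability proportional to exp(-epsilon * mistakes / 2).
    Changing one labeled example changes each mistake count by at most 1."""
    scores = [sum(1 for x, y in data if c(x) != y) for c in concepts]
    weights = [math.exp(-epsilon * s / 2.0) for s in scores]
    r = rng.random() * sum(weights)
    acc = 0.0
    for c, w in zip(concepts, weights):
        acc += w
        if r <= acc:
            return c
    return concepts[-1]

# Toy run: learn a threshold on [0, 1] from a finite class of 101 thresholds.
rng = random.Random(3)
concepts = [(lambda t: (lambda x: x >= t))(i / 100.0) for i in range(101)]
data = [(x, x >= 0.37) for x in (rng.random() for _ in range(1000))]
h = private_learn(concepts, data, epsilon=1.0, rng=rng)
```

Concepts with many mistakes receive exponentially small weight, which is what yields the sample complexity logarithmic in the size of the concept class.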

PAC learning

What Can We Learn Privately?

no code implementations6 Mar 2008 Shiva Prasad Kasiviswanathan, Homin K. Lee, Kobbi Nissim, Sofya Raskhodnikova, Adam Smith

Therefore, almost anything learnable is learnable privately: specifically, if a concept class is learnable by a (non-private) algorithm with polynomial sample complexity and output size, then it can be learned privately using a polynomial number of samples.
