Search Results for author: Edith Cohen

Found 18 papers, 0 papers with code

Tricking the Hashing Trick: A Tight Lower Bound on the Robustness of CountSketch to Adaptive Inputs

no code implementations • 3 Jul 2022 • Edith Cohen, Jelani Nelson, Tamás Sarlós, Uri Stemmer

When inputs are adaptive, however, an adversarial input can be constructed after $O(\ell)$ queries with the classic estimator, and the best known robust estimator supports only $\tilde{O}(\ell^2)$ queries.

Dimensionality Reduction

On the Robustness of CountSketch to Adaptive Inputs

no code implementations • 28 Feb 2022 • Edith Cohen, Xin Lyu, Jelani Nelson, Tamás Sarlós, Moshe Shechner, Uri Stemmer

CountSketch is a popular dimensionality reduction technique that maps vectors to a lower dimension using randomized linear measurements.

Dimensionality Reduction
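
For readers unfamiliar with the data structure, the following is a minimal sketch of classic (non-robust) CountSketch with the median estimator; the class and method names are illustrative, not from the paper.

```python
import numpy as np

# Minimal CountSketch: d independent rows of b buckets, a random bucket map h
# and random signs s per coordinate; estimate(i) takes the median over rows.
class CountSketch:
    def __init__(self, d, b, n, seed=0):
        rng = np.random.default_rng(seed)
        self.d, self.b = d, b
        self.h = rng.integers(0, b, size=(d, n))      # bucket per (row, coord)
        self.s = rng.choice([-1, 1], size=(d, n))     # sign per (row, coord)
        self.table = np.zeros((d, b))

    def update(self, i, delta=1.0):
        # Apply update (i, delta) to the implicit n-dimensional vector.
        for r in range(self.d):
            self.table[r, self.h[r, i]] += self.s[r, i] * delta

    def estimate(self, i):
        # Classic (non-robust) median estimator for coordinate i.
        return float(np.median([self.s[r, i] * self.table[r, self.h[r, i]]
                                for r in range(self.d)]))

cs = CountSketch(d=7, b=256, n=10_000)
cs.update(42, 3.0)
print(cs.estimate(42))   # close to 3.0 with high probability
```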

FriendlyCore: Practical Differentially Private Aggregation

no code implementations • 19 Oct 2021 • Eliad Tsfadia, Edith Cohen, Haim Kaplan, Yishay Mansour, Uri Stemmer

Differentially private algorithms for common metric aggregation tasks, such as clustering or averaging, often have limited practicality due to their complexity or to the large number of data points required for accurate results.

A Framework for Adversarial Streaming via Differential Privacy and Difference Estimators

no code implementations • 30 Jul 2021 • Idan Attias, Edith Cohen, Moshe Shechner, Uri Stemmer

Classical streaming algorithms operate under the (not always reasonable) assumption that the input stream is fixed in advance.

Differentially Private Weighted Sampling

no code implementations • 25 Oct 2020 • Edith Cohen, Ofir Geri, Tamás Sarlós, Uri Stemmer

A weighted sample of keys by (a function of) frequency is a highly versatile summary that provides a sparse set of representative keys and supports approximate evaluations of query statistics.
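
As an illustration of the underlying (non-private) primitive, here is a minimal sketch of Poisson pps sampling by frequency with inverse-probability (Horvitz-Thompson) estimation; the paper's scheme additionally makes the sample and the estimates differentially private.

```python
import numpy as np

# Poisson pps sampling by frequency: key x with weight w_x is included
# independently with probability p_x = min(1, k * w_x / total); statistics
# sum_x f(w_x) are then estimated by inverse-probability weighting.
def pps_sample(freqs, k, seed=0):
    rng = np.random.default_rng(seed)
    total = sum(freqs.values())
    probs = {x: min(1.0, k * w / total) for x, w in freqs.items()}
    return {x: (w, probs[x]) for x, w in freqs.items() if rng.random() < probs[x]}

def estimate(sample, f):
    # Unbiased Horvitz-Thompson estimate of sum_x f(w_x).
    return sum(f(w) / p for w, p in sample.values())

freqs = {"a": 50, "b": 30, "c": 2, "d": 1}
s = pps_sample(freqs, k=2)
print(estimate(s, lambda w: w))   # estimates the total frequency, 83
```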

WOR and $p$'s: Sketches for $\ell_p$-Sampling Without Replacement

no code implementations • NeurIPS 2020 • Edith Cohen, Rasmus Pagh, David P. Woodruff

We design novel composable sketches for WOR $\ell_p$ sampling, weighted sampling of keys according to a power $p\in[0, 2]$ of their frequency (or for signed data, sum of updates).
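
A minimal sketch of the classic exponential-ranks (bottom-$k$) construction for WOR sampling by $w^p$, on which this line of work builds; the function names are illustrative, and the min-merge composition shown is exact for $p=1$, where weights add across parts.

```python
import heapq
import numpy as np

# Exponential ranks: key x with weight w_x gets rank r_x ~ Exp(w_x^p); the k
# keys of smallest rank form a without-replacement sample by w^p. Sketches of
# disjoint parts compose by taking the per-key minimum rank, since
# min(Exp(a), Exp(b)) ~ Exp(a + b).
def ranks(freqs, p, seed=0):
    rng = np.random.default_rng(seed)
    return {x: rng.exponential(1.0 / w**p) for x, w in freqs.items()}

def bottom_k(rank_map, k):
    return dict(heapq.nsmallest(k, rank_map.items(), key=lambda kv: kv[1]))

def merge(sk1, sk2, k):
    combined = dict(sk1)
    for x, r in sk2.items():
        combined[x] = min(r, combined.get(x, float("inf")))
    return bottom_k(combined, k)
```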

Graph Learning with Loss-Guided Training

no code implementations • 31 May 2020 • Eliav Buchnik, Edith Cohen

Classically, ML models trained with stochastic gradient descent (SGD) are designed to minimize the average loss per example and use a distribution of training examples that remains static in the course of training.

Graph Learning
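
A minimal sketch of the contrasting loss-guided selection idea, assuming per-example losses are available: examples are drawn with probability proportional to their current loss, with importance weights keeping the gradient estimate unbiased (illustrative, not the paper's exact scheme).

```python
import numpy as np

# Draw a microbatch with probability proportional to current per-example loss;
# the returned importance weights keep the gradient estimate unbiased.
def loss_guided_batch(losses, batch_size, rng):
    p = losses / losses.sum()
    idx = rng.choice(len(losses), size=batch_size, p=p)
    weights = 1.0 / (len(losses) * p[idx])
    return idx, weights

rng = np.random.default_rng(0)
idx, w = loss_guided_batch(np.array([0.1, 2.0, 0.5, 3.0]), batch_size=2, rng=rng)
```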

Sample Complexity Bounds for Influence Maximization

no code implementations • 31 Jul 2019 • Gal Sadeh, Edith Cohen, Haim Kaplan

Our main result is a surprising upper bound of $O( s \tau \epsilon^{-2} \ln \frac{n}{\delta})$ for a broad class of models that includes IC and LT models and their mixtures, where $n$ is the number of nodes and $\tau$ is the number of diffusion steps.
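
A quick plug-in of this bound, ignoring the hidden constant, for illustrative parameter values:

```python
import math

n, tau, s = 10**6, 10, 5            # nodes, diffusion steps, seed-set size
eps, delta = 0.1, 0.01
samples = s * tau * eps**-2 * math.log(n / delta)
print(f"{samples:,.0f}")            # about 92,000 simulated diffusions
```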

LSH Microbatches for Stochastic Gradients: Value in Rearrangement

no code implementations • ICLR 2019 • Eliav Buchnik, Edith Cohen, Avinatan Hassidim, Yossi Matias

We make a principled argument for the properties of our arrangements that accelerate training, and we present efficient algorithms to generate microbatches that respect the marginal distribution of training examples.
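
A minimal sketch of the bucketing step, assuming dense feature vectors and random-hyperplane LSH; forming microbatches from the buckets while preserving the marginal distribution of examples is the part the paper's algorithms handle.

```python
import numpy as np
from collections import defaultdict

# Random-hyperplane LSH: examples whose feature vectors fall on the same side
# of all n_bits hyperplanes share a bucket; microbatches are then drawn from
# buckets so that similar examples land in the same microbatch.
def lsh_buckets(X, n_bits=8, seed=0):
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((n_bits, X.shape[1]))
    codes = (X @ planes.T > 0).astype(int)
    buckets = defaultdict(list)
    for i, code in enumerate(codes):
        buckets[tuple(code)].append(i)
    return buckets
```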

Self-Similar Epochs: Value in Arrangement

no code implementations • ICLR 2019 • Eliav Buchnik, Edith Cohen, Avinatan Hassidim, Yossi Matias

Optimization of machine learning models is commonly performed through stochastic gradient updates on randomly ordered training examples.

Clustering Small Samples with Quality Guarantees: Adaptivity with One2all pps

no code implementations • 12 Jun 2017 • Edith Cohen, Shiri Chechik, Haim Kaplan

At the core of our design is the one2all construction of multi-objective probability-proportional-to-size (pps) samples: Given a set $M$ of centroids and $\alpha \geq 1$, one2all efficiently assigns probabilities to points so that the clustering cost of each $Q$ with cost $V(Q) \geq V(M)/\alpha$ can be estimated well from a sample of size $O(\alpha |M| \epsilon^{-2})$.
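
A simplified stand-in for the construction: pps probabilities proportional to each point's cost under $M$, with a Horvitz-Thompson estimate of the cost of a candidate $Q$ (the actual one2all probabilities differ so that every $Q$ with $V(Q) \geq V(M)/\alpha$ is covered).

```python
import numpy as np

def cost(points, centers):
    # Distance of each point to its nearest center.
    return np.min(np.linalg.norm(points[:, None] - centers[None], axis=2), axis=1)

def pps_cost_estimate(points, M, Q, k, seed=0):
    rng = np.random.default_rng(seed)
    c = cost(points, M)
    p = np.minimum(1.0, k * c / c.sum())       # pps inclusion probabilities
    take = rng.random(len(points)) < p
    # Horvitz-Thompson estimate of the clustering cost V(Q).
    return float(np.sum(cost(points[take], Q) / p[take]))
```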

Bootstrapped Graph Diffusions: Exposing the Power of Nonlinearity

no code implementations • 7 Mar 2017 • Eliav Buchnik, Edith Cohen

Classic methods capture the graph structure through some underlying diffusion process that propagates through the graph edges.
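
A minimal sketch of such a classic linear diffusion baseline (the paper's point is that bootstrapped, nonlinear variants outperform it):

```python
import numpy as np

# Linear diffusion for semi-supervised labeling: propagate seed labels over
# the row-normalized adjacency matrix for a few damped steps, then score each
# node by its accumulated label mass.
def diffuse(A, seed_labels, steps=10, alpha=0.85):
    P = A / np.maximum(A.sum(axis=1, keepdims=True), 1e-12)
    y = seed_labels.astype(float)       # (n, classes), one-hot rows for seeds
    scores = y.copy()
    for _ in range(steps):
        y = alpha * (P @ y)
        scores += y
    return scores.argmax(axis=1)        # predicted class per node
```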

Semi-Supervised Learning on Graphs through Reach and Distance Diffusion

no code implementations • 30 Mar 2016 • Edith Cohen

Inspired by the success of social influence as an alternative to spectral centrality measures such as PageRank, we explore SSL with our kernels and develop highly scalable algorithms for parameter setting, label learning, and sampling.

Average Distance Queries through Weighted Samples in Graphs and Metric Spaces: High Scalability with Tight Statistical Guarantees

no code implementations • 30 Mar 2015 • Shiri Chechik, Edith Cohen, Haim Kaplan

The estimate is based on a weighted sample of $O(\epsilon^{-2})$ pairs of points, which is computed using $O(n)$ distance computations.
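
A simplified sketch in this spirit: distances from one random anchor (the $O(n)$ distance computations) serve as size weights, pairs are drawn with probability proportional to the sum of their endpoints' weights, and the average is estimated by inverse-probability weighting. This is a stand-in for the paper's scheme, not its exact sampler.

```python
import numpy as np

def avg_distance(points, dist, k, seed=0):
    rng = np.random.default_rng(seed)
    n = len(points)
    a = rng.integers(n)
    w = np.array([dist(points[a], x) for x in points])   # anchor distances
    W = w.sum()
    est = 0.0
    for _ in range(k):
        i = rng.choice(n, p=w / W)         # one endpoint ~ size weights
        j = rng.integers(n)                # the other endpoint uniform
        if rng.random() < 0.5:
            i, j = j, i                    # symmetrize the ordered draw
        p_ij = (w[i] + w[j]) / (2 * W * n)
        # d(i, j) <= w[i] + w[j] by the triangle inequality, so each term
        # d / p_ij is at most 2 * W * n, which bounds the variance.
        est += dist(points[i], points[j]) / p_ij
    return est / (k * n * n)               # mean over all n^2 ordered pairs
```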

Sketch-based Influence Maximization and Computation: Scaling up with Guarantees

no code implementations • 26 Aug 2014 • Edith Cohen, Daniel Delling, Thomas Pajor, Renato F. Werneck

The gold standard for Influence Maximization is the greedy algorithm, which iteratively adds to the seed set a node maximizing the marginal gain in influence.

Data Structures and Algorithms • Social and Information Networks • G.2.2; H.2.8
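
A minimal sketch of that greedy baseline with Monte Carlo influence estimation under the independent-cascade model; the cost of these repeated simulations per candidate is what the paper's sketch-based oracles avoid.

```python
import random

def simulate_ic(graph, seeds, p=0.1):
    # One independent-cascade simulation; graph: node -> list of neighbors.
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and random.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active

def greedy_im(graph, k, sims=100):
    # Greedy: repeatedly add the node with the largest estimated marginal gain.
    seeds = set()
    for _ in range(k):
        def spread(s):
            return sum(len(simulate_ic(graph, s)) for _ in range(sims)) / sims
        best = max((v for v in graph if v not in seeds),
                   key=lambda v: spread(seeds | {v}))
        seeds.add(best)
    return seeds
```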

All-Distances Sketches, Revisited: HIP Estimators for Massive Graphs Analysis

no code implementations • 14 Jun 2013 • Edith Cohen

We present the Historic Inverse Probability (HIP) estimators which are applied to the ADS of a node to estimate a large natural class of statistics.

Data Structures and Algorithms • Social and Information Networks
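
A minimal sketch of the HIP idea for a single node, assuming neighbors arrive in increasing order of distance: an entry joins a bottom-$k$ sketch if its random rank beats the current $k$-th smallest rank (its inclusion probability at that moment), and it contributes the inverse of that probability to the neighborhood-size estimate. This is illustrative, not the paper's full ADS construction.

```python
import heapq
import random

def hip_neighborhood_sizes(nodes_by_distance, k, seed=0):
    random.seed(seed)
    ranks = []       # negated min-heap holding the k smallest ranks seen
    total = 0.0      # running HIP estimate
    estimates = []   # estimated neighborhood size after each node
    for v in nodes_by_distance:
        r = random.random()
        if len(ranks) < k:
            total += 1.0                  # inclusion probability is 1
            heapq.heappush(ranks, -r)
        elif r < -ranks[0]:
            total += 1.0 / (-ranks[0])    # p = k-th smallest rank so far
            heapq.heapreplace(ranks, -r)
        estimates.append(total)
    return estimates
```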
