Search Results for author: Sepideh Mahabadi

Found 11 papers, 1 paper with code

(Individual) Fairness for k-Clustering

no code implementations ICML 2020 Sepideh Mahabadi, Ali Vakilian

Intuitively, if a set of $k$ random points are chosen from $P$ as centers, every point $x\in P$ expects to have a center within radius $r(x)$.

Fairness
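In this line of work on individual fairness, the radius $r(x)$ is typically the smallest radius whose ball around $x$ contains at least $n/k$ points of $P$. The sketch below is illustrative only (helper names are hypothetical, and this is not the paper's algorithm): it computes these radii and checks whether a given set of centers is individually fair in that sense.

    import numpy as np

    def fairness_radii(P, k):
        # r(x): smallest radius whose ball around x contains at least ceil(n/k) points of P,
        # i.e. the distance from x to its ceil(n/k)-th nearest neighbor (counting x itself)
        n = len(P)
        m = int(np.ceil(n / k))
        D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)  # pairwise distances
        D.sort(axis=1)                       # row i: distances from P[i], ascending (0 first)
        return D[:, m - 1]

    def is_individually_fair(P, centers, k, alpha=1.0):
        # every x in P must have some center within alpha * r(x)
        r = fairness_radii(P, k)
        d = np.linalg.norm(P[:, None, :] - centers[None, :, :], axis=-1).min(axis=1)
        return bool(np.all(d <= alpha * r))

    # toy usage
    P = np.random.default_rng(0).random((100, 2))
    print(is_individually_fair(P, centers=P[:5], k=5, alpha=2.0))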

Adaptive Sketches for Robust Regression with Importance Sampling

no code implementations16 Jul 2022 Sepideh Mahabadi, David P. Woodruff, Samson Zhou

In this paper, we introduce an algorithm that approximately samples $T$ gradients of dimension $d$ from nearly the optimal importance sampling distribution for a robust regression problem over $n$ rows.

regression
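As a rough illustration of norm-proportional importance sampling of gradients (using a plain least-squares loss rather than the robust losses and sketching machinery treated in the paper; the function and setup here are assumptions, not the paper's method):

    import numpy as np

    def sample_gradients_by_norm(A, b, w, T, rng=None):
        # sample T row indices with probability proportional to per-row gradient norm,
        # then reweight so the sample average is an unbiased estimate of the mean gradient
        if rng is None:
            rng = np.random.default_rng(0)
        grads = (A @ w - b)[:, None] * A          # row i: gradient of 0.5 * (a_i.w - b_i)^2
        norms = np.linalg.norm(grads, axis=1)
        n = len(A)
        p = norms / norms.sum() if norms.sum() > 0 else np.full(n, 1.0 / n)
        idx = rng.choice(n, size=T, p=p)
        return grads[idx] / (n * p[idx])[:, None]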

Sampling a Near Neighbor in High Dimensions -- Who is the Fairest of Them All?

1 code implementation26 Jan 2021 Martin Aumüller, Sariel Har-Peled, Sepideh Mahabadi, Rasmus Pagh, Francesco Silvestri

Given a set of points $S$ and a radius parameter $r>0$, the $r$-near neighbor ($r$-NN) problem asks for a data structure that, given any query point $q$, returns a point $p$ within distance at most $r$ from $q$.

Fairness
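The fair variant studied here asks that every point within distance $r$ of the query be returned with (near-)equal probability. A brute-force reference sampler makes the target distribution concrete; the paper's contribution is achieving this with sublinear LSH-based data structures, so treat the sketch as illustrative only.

    import numpy as np

    def fair_near_neighbor(S, q, r, rng=None):
        # return a uniformly random point of S within distance r of q (None if none exist)
        if rng is None:
            rng = np.random.default_rng(0)
        candidates = np.flatnonzero(np.linalg.norm(S - q, axis=1) <= r)
        return None if candidates.size == 0 else S[rng.choice(candidates)]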

Adaptive Single-Pass Stochastic Gradient Descent in Input Sparsity Time

no code implementations1 Jan 2021 Sepideh Mahabadi, David Woodruff, Samson Zhou

Moreover, we show that our algorithm can be generalized to approximately sample Hessians and thus provides variance reduction for second-order methods as well.

Second-order methods, Stochastic Optimization

Streaming Complexity of SVMs

no code implementations7 Jul 2020 Alexandr Andoni, Collin Burns, Yi Li, Sepideh Mahabadi, David P. Woodruff

We show that, for both problems in dimensions $d=1, 2$, one can obtain streaming algorithms with space polynomially smaller than $\frac{1}{\lambda\epsilon}$, which is the complexity of SGD for strongly convex functions such as the bias-regularized SVM and is known to be tight in general, even for $d=1$.
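For context, the bias-regularized SVM is usually formulated (treat the exact form here as an assumption rather than a quote from the paper) as the hinge-loss objective with the bias folded into the regularizer, $F(w,b) = \frac{\lambda}{2}\big(\|w\|_2^2 + b^2\big) + \frac{1}{n}\sum_{i=1}^{n}\max\big(0,\, 1 - y_i(\langle w, x_i\rangle + b)\big)$, which is $\lambda$-strongly convex. SGD needs on the order of $\frac{1}{\lambda\epsilon}$ iterations to reach $\epsilon$ suboptimality on such objectives, which is where the $\frac{1}{\lambda\epsilon}$ benchmark above comes from.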

Non-Adaptive Adaptive Sampling on Turnstile Streams

no code implementations23 Apr 2020 Sepideh Mahabadi, Ilya Razenshteyn, David P. Woodruff, Samson Zhou

Adaptive sampling is a useful algorithmic tool for data summarization problems in the classical centralized setting, where the entire dataset is available to the single processor performing the computation.

Data Summarization
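In the centralized setting, adaptive sampling usually means repeatedly picking a row with probability proportional to its squared residual distance from the span of the rows picked so far. Below is a minimal numpy sketch of that centralized routine, not the paper's turnstile-stream emulation; the function name is illustrative.

    import numpy as np

    def adaptive_row_sampling(A, k, rng=None):
        # pick k rows, each with probability proportional to its squared distance
        # from the span of the rows already selected (classical centralized version)
        if rng is None:
            rng = np.random.default_rng(0)
        R = np.array(A, dtype=float)         # residual of every row w.r.t. current span
        selected = []
        for _ in range(k):
            norms = (R ** 2).sum(axis=1)
            total = norms.sum()
            if total == 0:                   # all rows already lie in the span
                break
            i = rng.choice(len(R), p=norms / total)
            selected.append(int(i))
            v = R[i] / np.linalg.norm(R[i])  # Gram-Schmidt step: remove this direction
            R = R - np.outer(R @ v, v)
        return selected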

Individual Fairness for $k$-Clustering

no code implementations17 Feb 2020 Sepideh Mahabadi, Ali Vakilian

Intuitively, if a set of $k$ random points are chosen from $P$ as centers, every point $x\in P$ expects to have a center within radius $r(x)$.

Fairness

Composable Core-sets for Determinant Maximization: A Simple Near-Optimal Algorithm

no code implementations6 Jul 2019 Piotr Indyk, Sepideh Mahabadi, Shayan Oveis Gharan, Alireza Rezaei

In this work, we first provide a theoretical approximation guarantee of $O(C^{k^2})$ for the Greedy algorithm in the context of composable core-sets. Further, we propose a Local Search based algorithm that, while still practical, achieves a nearly optimal approximation bound of $O(k)^{2k}$. Finally, we implement all three algorithms and show the effectiveness of our proposed algorithm on standard data sets.

Fairness, Point Processes
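The Greedy algorithm analyzed here can be read as repeatedly adding the point whose residual, orthogonal to the span of the points chosen so far, is largest; that squared residual norm is the multiplicative gain in the determinant of the Gram matrix. A minimal sketch under that reading (illustrative, not the paper's implementation); it mirrors the adaptive-sampling sketch above but takes an argmax instead of sampling.

    import numpy as np

    def greedy_determinant_maximization(V, k):
        # repeatedly add the row with the largest residual norm w.r.t. the span of the
        # rows chosen so far; its squared norm is the factor by which det of the Gram
        # matrix of the selected rows grows
        R = np.array(V, dtype=float)
        selected = []
        for _ in range(k):
            norms = (R ** 2).sum(axis=1)
            i = int(np.argmax(norms))
            if norms[i] == 0:
                break
            selected.append(i)
            v = R[i] / np.linalg.norm(R[i])
            R = R - np.outer(R @ v, v)
        return selected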

Near Neighbor: Who is the Fairest of Them All?

no code implementations NeurIPS 2019 Sariel Har-Peled, Sepideh Mahabadi

Namely, given a set of $n$ points $P$ and a parameter $r$, the goal is to preprocess the points such that, given a query point $q$, any point in the $r$-neighborhood of the query, i.e., $B(q, r)$, has the same probability of being reported as the near neighbor.

Composable Core-sets for Determinant Maximization Problems via Spectral Spanners

no code implementations31 Jul 2018 Piotr Indyk, Sepideh Mahabadi, Shayan Oveis Gharan, Alireza Rezaei

We show that for many objective functions one can use a spectral spanner, independent of the underlying functions, as a core-set and obtain almost optimal composable core-sets.
