no code implementations • ICML 2020 • Sepideh Mahabadi, Ali Vakilian
Intuitively, if a set of $k$ random points is chosen from $P$ as centers, every point $x\in P$ can expect to find a center within radius $r(x)$.
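A minimal sketch of this radius, under the common formalization in the individually fair clustering literature (an assumption here, not spelled out in the snippet above): $r(x)$ is the distance from $x$ to its $\lceil n/k\rceil$-th nearest neighbor in $P$, i.e. the smallest radius whose ball around $x$ contains $n/k$ points.

```python
import numpy as np

def fair_radius(P, k):
    """For each point x in P, return r(x): the distance to its
    ceil(n/k)-th nearest neighbor in P (counting x itself), i.e. the
    smallest radius whose ball around x holds n/k points of P."""
    n = len(P)
    m = int(np.ceil(n / k))  # number of points a "fair" ball must contain
    # pairwise distances; row i holds distances from point i
    D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
    D.sort(axis=1)           # ascending; column 0 is the distance to x itself
    return D[:, m - 1]
```

For $n$ evenly spaced points on a line with $k = n/2$, every ball must hold two points, so $r(x)$ is simply each point's distance to its nearest neighbor.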
no code implementations • 15 Mar 2024 • Ron Mosenzon, Ali Vakilian
In this paper, we study the natural local search algorithm for IP stable clustering.
no code implementations • 27 Feb 2024 • Adela Frances DePavia, Erasmo Tani, Ali Vakilian
Finally, we provide alternative simpler performance bounds on the algorithms of Banerjee et al. (2022) for the case of searching on a known graph, and establish new lower bounds for this setting.
no code implementations • 13 Feb 2024 • Lee Cohen, Saeed Sharifi-Malvajerdi, Kevin Stangl, Ali Vakilian, Juba Ziani
We initiate the study of partial information release by the learner in strategic classification.
no code implementations • NeurIPS 2023 • Anders Aamand, Justin Y. Chen, Huy Lê Nguyen, Sandeep Silwal, Ali Vakilian
In particular, their learning-augmented frequency estimation algorithm uses a learned heavy-hitter oracle which predicts which elements will appear many times in the stream.
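The heavy-hitter-oracle idea can be sketched in a few lines: predicted heavy elements get exact counters, while the remaining (light) elements share a small count-min table. The oracle below is just a user-supplied predicate, an assumption for illustration rather than the papers' learned model.

```python
import hashlib
from collections import defaultdict

class LearnedCountMin:
    """Minimal sketch of learning-augmented frequency estimation:
    items the oracle predicts to be heavy get exact counters; the
    rest share a count-min sketch (which only ever overestimates)."""

    def __init__(self, is_heavy, width=1024, depth=4):
        self.is_heavy = is_heavy          # the "oracle": item -> bool
        self.exact = defaultdict(int)     # exact counts for predicted heavies
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _hash(self, x, row):
        h = hashlib.blake2b(f"{row}:{x}".encode(), digest_size=8)
        return int.from_bytes(h.digest(), "big") % self.width

    def update(self, x):
        if self.is_heavy(x):
            self.exact[x] += 1
        else:
            for r in range(self.depth):
                self.table[r][self._hash(x, r)] += 1

    def estimate(self, x):
        if self.is_heavy(x):
            return self.exact[x]
        # standard count-min estimate: min over rows
        return min(self.table[r][self._hash(x, r)] for r in range(self.depth))
```

Routing heavy hitters out of the shared table removes the largest sources of hashing collisions, which is where a correct oracle buys its accuracy gain.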
no code implementations • 11 Jun 2023 • Yi Li, Honghao Lin, Simin Liu, Ali Vakilian, David P. Woodruff
We fix this issue and propose approaches for learning a sketching matrix for both low-rank approximation and Hessian approximation for second order optimization.
no code implementations • 11 Jun 2023 • Sèdjro S. Hotegni, Sepideh Mahabadi, Ali Vakilian
This paper studies the fair range clustering problem in which the data points are from different demographic groups and the goal is to pick $k$ centers with the minimum clustering cost such that each group is at least minimally represented in the centers set and no group dominates the centers set.
no code implementations • 31 Jan 2023 • Lee Cohen, Saeed Sharifi-Malvajerdi, Kevin Stangl, Ali Vakilian, Juba Ziani
We initiate the study of strategic behavior in screening processes with multiple classifiers.
1 code implementation • 7 Jul 2022 • Saba Ahmadi, Pranjal Awasthi, Samir Khuller, Matthäus Kleindessner, Jamie Morgenstern, Pattara Sukprasert, Ali Vakilian
In this paper, we propose a natural notion of individual preference (IP) stability for clustering, which asks that every data point, on average, is closer to the points in its own cluster than to the points in any other cluster.
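The definition can be checked directly: a clustering is IP stable if no point's average distance to its own cluster exceeds its average distance to some other cluster. A sketch of the stability check (not of the paper's algorithm for finding such clusterings):

```python
import numpy as np

def is_ip_stable(P, labels):
    """Check individual-preference (IP) stability: every point's average
    distance to its own cluster (excluding itself) must not exceed its
    average distance to any other cluster."""
    labels = np.asarray(labels)
    D = np.linalg.norm(P[:, None, :] - P[None, :, :], axis=-1)
    for i in range(len(P)):
        own = (labels == labels[i])
        own[i] = False                 # exclude the point itself
        if not own.any():              # singleton clusters are trivially stable
            continue
        own_avg = D[i, own].mean()
        for c in set(labels.tolist()) - {labels[i]}:
            if own_avg > D[i, labels == c].mean():
                return False           # point i would prefer cluster c
    return True
```

Two well-separated groups assigned to their own clusters pass the check; interleaving the assignment makes every point prefer the other cluster and the check fails.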
no code implementations • 14 Mar 2022 • Avrim Blum, Kevin Stangl, Ali Vakilian
Even if the firm is required to interview all of those who pass the final round, the tests themselves could have the property that qualified individuals from some groups pass more easily than qualified individuals from others.
no code implementations • 3 Feb 2022 • Zhen Dai, Yury Makarychev, Ali Vakilian
For this special case, we present an $O(\log k)$-approximation algorithm that runs in $(kf)^{O(\ell)}\log n + \mathrm{poly}(n)$ time.
no code implementations • 8 Nov 2021 • Eden Chlamtáč, Yury Makarychev, Ali Vakilian
We utilize convex programming techniques to approximate the $(p, q)$-Fair Clustering problem for different values of $p$ and $q$.
no code implementations • 26 Jun 2021 • Ali Vakilian, Mustafa Yalçıner
Moreover, for $p=1$ ($k$-median) and $p=\infty$ ($k$-center), we present improved cost-approximation factors of $7.081+\varepsilon$ and $3+\varepsilon$, respectively.
no code implementations • 3 Mar 2021 • Yury Makarychev, Ali Vakilian
In order to obtain our result, we introduce a strengthened LP relaxation and show that it has an integrality gap of $\Theta(\frac{\log \ell}{\log\log\ell})$ for a fixed $p$.
no code implementations • 1 Jan 2021 • Simin Liu, Tianrui Liu, Ali Vakilian, Yulin Wan, David Woodruff
In this work, we consider the problem of optimizing sketches to obtain low approximation error over a data distribution.
no code implementations • 20 Jul 2020 • Simin Liu, Tianrui Liu, Ali Vakilian, Yulin Wan, David P. Woodruff
Despite the growing body of work on this paradigm, a noticeable omission is that the locations of the non-zero entries of previous algorithms were fixed, and only their values were learned.
1 code implementation • 17 Feb 2020 • Sepideh Mahabadi, Ali Vakilian
Intuitively, if a set of $k$ random points is chosen from $P$ as centers, every point $x\in P$ can expect to find a center within radius $r(x)$.
no code implementations • NeurIPS 2019 • Piotr Indyk, Ali Vakilian, Yang Yuan
Our experiments show that, for multiple types of data sets, a learned sketch matrix can substantially reduce the approximation loss compared to a random matrix $S$, sometimes by one order of magnitude.
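The approximation loss being compared can be sketched concretely in the standard sketch-and-solve form: project $A$ onto the row space of $SA$, truncate to rank $k$, and measure the Frobenius error. This is a minimal sketch of the evaluation pipeline only; the learning of $S$ itself is not shown.

```python
import numpy as np

def sketch_lowrank_loss(A, S, k):
    """Frobenius error of the rank-k approximation of A obtained by
    restricting to the row space of the sketched matrix S @ A."""
    SA = S @ A
    Q, _ = np.linalg.qr(SA.T)            # orthonormal basis of rowspace(SA)
    P = A @ Q                            # coordinates of A's rows in that basis
    U, s, Vt = np.linalg.svd(P, full_matrices=False)
    s[k:] = 0                            # truncate to rank k
    A_k = (U * s) @ Vt @ Q.T             # lift back to the original space
    return np.linalg.norm(A - A_k)
```

If $A$ is exactly rank $k$, any sketch whose row space captures $A$'s row space drives this loss to zero, which is why a sketch tuned to the data distribution can beat a random $S$.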
no code implementations • 2 Jun 2019 • Piotr Indyk, Ali Vakilian, Tal Wagner, David Woodruff
Recent work by Bakshi and Woodruff (NeurIPS 2018) showed it is possible to compute a rank-$k$ approximation of a distance matrix in time $O((n+m)^{1+\gamma}) \cdot \mathrm{poly}(k, 1/\epsilon)$, where $\epsilon>0$ is an error parameter and $\gamma>0$ is an arbitrarily small constant.
no code implementations • ICLR 2019 • Chen-Yu Hsu, Piotr Indyk, Dina Katabi, Ali Vakilian
Estimating the frequencies of elements in a data stream is a fundamental task in data analysis and machine learning.
1 code implementation • 10 Feb 2019 • Arturs Backurs, Piotr Indyk, Krzysztof Onak, Baruch Schieber, Ali Vakilian, Tal Wagner
In the fair variant of $k$-median, the points are colored, and the goal is to minimize the same average distance objective while ensuring that all clusters have an "approximately equal" number of points of each color.
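For two colors, the "approximately equal" requirement is commonly quantified by the balance notion of Chierichetti et al.: the worst, over clusters, of the ratio between the minority and majority color counts. A minimal sketch of that measure (an illustration of the constraint, not the paper's algorithm):

```python
from collections import Counter

def cluster_balance(colors, labels):
    """Balance of a two-color clustering: for each cluster, the ratio of
    minority to majority color counts; the clustering's balance is the
    worst ratio over clusters. 1.0 means every cluster is exactly
    half-and-half; 0.0 means some cluster is monochromatic."""
    best = 1.0
    for c in set(labels):
        counts = Counter(col for col, lab in zip(colors, labels) if lab == c)
        a, b = counts.get(0, 0), counts.get(1, 0)
        if a == 0 or b == 0:
            return 0.0                   # a monochromatic cluster
        best = min(best, min(a, b) / max(a, b))
    return best
```

The fair $k$-median objective then asks for the minimum-cost clustering among those whose balance meets a given threshold.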