# Approximating Fair Clustering with Cascaded Norm Objectives

We utilize convex programming techniques to approximate the $(p, q)$-Fair Clustering problem for different values of $p$ and $q$.
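A $(p, q)$-cascaded norm objective takes the $\ell_q$ norm of the assignment distances within each group, then the $\ell_p$ norm across groups. A minimal illustrative sketch (the function name and data layout are my own, not from the paper):

```python
import numpy as np

def cascaded_norm_cost(dists, groups, p, q):
    """(p, q)-cascaded norm: l_q norm of distances within each group,
    then l_p norm across the per-group values.
    `dists[i]` is the distance of point i to its assigned center;
    `groups[i]` is the group index of point i."""
    group_ids = sorted(set(groups))
    inner = [np.linalg.norm([d for d, g in zip(dists, groups) if g == gid], ord=q)
             for gid in group_ids]
    return np.linalg.norm(inner, ord=p)
```

For example, with $p=1$ this sums the per-group $\ell_q$ costs, while $p=\infty$ recovers the socially fair (max over groups) objective.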

# Improved Approximation Algorithms for Individually Fair Clustering

no code implementations · 26 Jun 2021

We consider the $k$-clustering problem with $\ell_p$-norm cost, which includes $k$-median, $k$-means and $k$-center cost functions, under an individual notion of fairness proposed by Jung et al. [2020]: given a set of points $P$ of size $n$, a set of $k$ centers induces a fair clustering if for every point $v\in P$, $v$ can find a center among its $n/k$ closest neighbors.
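The fairness condition above can be checked directly: for each point $v$, compare its distance to the nearest center against the radius of the ball containing its $n/k$ closest points. A small sketch under the assumption of Euclidean distances (function name and conventions, such as counting a point among its own neighbors, are illustrative):

```python
import numpy as np

def is_individually_fair(points, centers):
    """Check the Jung et al. condition: every point must have a center
    no farther than its (n/k)-th closest point.
    `points`: (n, d) array; `centers`: (k, d) array."""
    n = len(points)
    m = n // len(centers)  # neighborhood size n/k
    for x in points:
        # radius of the smallest ball around x containing its n/k closest points
        r_x = np.sort(np.linalg.norm(points - x, axis=1))[m - 1]
        nearest_center = np.min(np.linalg.norm(centers - x, axis=1))
        if nearest_center > r_x:
            return False
    return True
```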

# Approximation Algorithms for Socially Fair Clustering

no code implementations · 3 Mar 2021

In order to obtain our result, we introduce a strengthened LP relaxation and show that it has an integrality gap of $\Theta(\frac{\log \ell}{\log\log\ell})$ for a fixed $p$.

# A framework for learned sparse sketches

In this work, we consider the problem of optimizing sketches to obtain low approximation error over a data distribution.

# Learning the Positions in CountSketch

Despite the growing body of work on this paradigm, a noticeable omission is that the locations of the non-zero entries of previous algorithms were fixed, and only their values were learned.
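For context, a CountSketch matrix has exactly one nonzero ($\pm 1$) entry per column; classically the row position of each nonzero is random, and it is exactly these positions that the paper proposes to learn. A minimal sketch of the baseline construction (the `positions` hook is my own illustrative framing):

```python
import numpy as np

def countsketch_matrix(m, n, positions=None, rng=None):
    """Build an m x n CountSketch matrix with one nonzero (+/-1) per column.
    By default the nonzero row in each column is uniformly random; passing
    `positions` fixes them, which is the degree of freedom one could learn."""
    if rng is None:
        rng = np.random.default_rng(0)
    if positions is None:
        positions = rng.integers(0, m, size=n)
    signs = rng.choice([-1.0, 1.0], size=n)
    S = np.zeros((m, n))
    S[positions, np.arange(n)] = signs
    return S
```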

# Individual Fairness for $k$-Clustering

no code implementations · 17 Feb 2020

Intuitively, if a set of $k$ points is chosen uniformly at random from $P$ as centers, every point $x\in P$ expects to have a center within radius $r(x)$.

# Learning-Based Low-Rank Approximations

Our experiments show that, for multiple types of data sets, a learned sketch matrix can substantially reduce the approximation loss compared to a random matrix $S$, sometimes by one order of magnitude.
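The sketch-and-solve pipeline this line refers to computes a rank-$k$ approximation of $A$ constrained to the row space of the sketch $SA$; learning replaces a random $S$ with an optimized one. A minimal illustrative version (assuming a dense $S$ and plain SVD, not the paper's exact algorithm):

```python
import numpy as np

def sketched_low_rank(A, S, k):
    """Rank-k approximation of A restricted to the row space of S @ A:
    project A onto that subspace, then truncate to rank k."""
    # Orthonormal basis for the row space of the sketch
    _, _, Vt = np.linalg.svd(S @ A, full_matrices=False)
    P = A @ Vt.T                 # coordinates of A's rows in that subspace
    U, s, Wt = np.linalg.svd(P, full_matrices=False)
    s[k:] = 0.0                  # keep only the top-k singular values
    return (U * s) @ Wt @ Vt     # map back to the original coordinates
```

If the row space of $A$ happens to lie inside that of $SA$ (e.g. $A$ has rank at most the sketch size), the approximation is exact.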

# Sample-Optimal Low-Rank Approximation of Distance Matrices

Recent work by Bakshi and Woodruff (NeurIPS 2018) showed it is possible to compute a rank-$k$ approximation of a distance matrix in time $O((n+m)^{1+\gamma}) \cdot \mathrm{poly}(k, 1/\epsilon)$, where $\epsilon>0$ is an error parameter and $\gamma>0$ is an arbitrarily small constant.

# Learning-Based Frequency Estimation Algorithms

Estimating the frequencies of elements in a data stream is a fundamental task in data analysis and machine learning.
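The classical baseline here is the Count-Min sketch, which the learning-based approach augments with a model that routes predicted heavy hitters to separate counters. A minimal sketch of the baseline only (hash choice and parameters are illustrative):

```python
import hashlib

class CountMinSketch:
    """Classical Count-Min sketch: depth x width counters, one hash per row.
    Estimates never underestimate the true frequency."""
    def __init__(self, width=1024, depth=4):
        self.width, self.depth = width, depth
        self.table = [[0] * width for _ in range(depth)]

    def _hash(self, item, row):
        h = hashlib.blake2b(f"{row}:{item}".encode(), digest_size=8)
        return int.from_bytes(h.digest(), "big") % self.width

    def add(self, item, count=1):
        for row in range(self.depth):
            self.table[row][self._hash(item, row)] += count

    def estimate(self, item):
        # min over rows bounds the overcount caused by collisions
        return min(self.table[row][self._hash(item, row)]
                   for row in range(self.depth))
```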

# Scalable Fair Clustering

In the fair variant of $k$-median, the points are colored, and the goal is to minimize the same average distance objective while ensuring that all clusters have an "approximately equal" number of points of each color.
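One common way to formalize "approximately equal" is to bound the ratio between the largest and smallest color counts in every cluster; this is an illustrative check, not necessarily the paper's exact definition:

```python
from collections import Counter

def is_approximately_balanced(clusters_colors, alpha=1.5):
    """Check that in every cluster, the counts of any two colors differ
    by at most a factor alpha. `clusters_colors` is a list of clusters,
    each a list of the colors of its points."""
    for colors in clusters_colors:
        counts = Counter(colors)
        if max(counts.values()) > alpha * min(counts.values()):
            return False
    return True
```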
