1 code implementation • 25 Aug 2023 • Sruthi Gorantla, Eshaan Bhansali, Amit Deshpande, Anand Louis
Previous works have proposed efficient algorithms to train stochastic ranking models that achieve fairness of exposure to the groups ex-ante (i.e., in expectation), which may not guarantee representation fairness to the groups ex-post, that is, once a ranking is actually realized from the stochastic ranking model.
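To make the ex-ante/ex-post gap concrete, here is a minimal Python sketch (not from the paper) that samples a single ranking from a Plackett-Luce stochastic ranking model and tallies top-$k$ group representation of that realized ranking; the scores, group labels, and function names are all illustrative assumptions.

```python
import random

def sample_pl_ranking(scores, rng):
    """Sample one ranking from a Plackett-Luce model: repeatedly pick the
    next item with probability proportional to its score."""
    items = list(range(len(scores)))
    ranking = []
    while items:
        weights = [scores[i] for i in items]
        pick = rng.choices(items, weights=weights, k=1)[0]
        items.remove(pick)
        ranking.append(pick)
    return ranking

def topk_group_counts(ranking, groups, k):
    """Count how many items of each group appear in the top-k prefix of
    one realized ranking (the ex-post quantity)."""
    counts = {}
    for item in ranking[:k]:
        g = groups[item]
        counts[g] = counts.get(g, 0) + 1
    return counts
```

Even if the model's expected exposure per group is fair, a single draw from `sample_pl_ranking` can still have a badly skewed top-$k$ prefix, which is exactly the ex-post failure the paper addresses.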
no code implementations • 21 Jun 2023 • Sruthi Gorantla, Anay Mehrotra, Amit Deshpande, Anand Louis
Fair ranking tasks, which ask to rank a set of items to maximize utility subject to satisfying group-fairness constraints, have gained significant interest in the Algorithmic Fairness, Information Retrieval, and Machine Learning literature.
no code implementations • 22 Aug 2022 • Sruthi Gorantla, Kishen N. Gowda, Amit Deshpande, Anand Louis
Center-based clustering (e.g., $k$-means, $k$-medians) and clustering using linear subspaces are two of the most popular techniques for partitioning real-world data into smaller clusters.
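For readers unfamiliar with center-based clustering, a minimal sketch of Lloyd's $k$-means heuristic in pure Python (standard textbook material, not the paper's algorithm):

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Lloyd's algorithm: alternate assigning points to their nearest
    center and recomputing each center as its cluster's mean."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign p to the center minimizing squared Euclidean distance
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2
                                      for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        # recompute centers; keep the old center if a cluster empties out
        centers = [
            tuple(sum(coord) / len(cl) for coord in zip(*cl)) if cl
            else centers[j]
            for j, cl in enumerate(clusters)
        ]
    return centers, clusters
```

The subspace-clustering setting mentioned alongside it replaces point centers with low-dimensional linear subspaces, but the assign-then-refit loop has the same shape.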
1 code implementation • 21 Aug 2022 • Atasi Panda, Anand Louis, Prajakta Nimbhorkar
Our first result is a polynomial-time algorithm that computes a distribution over `group-fair' matchings such that the individual fairness constraints are approximately satisfied and the expected size of a matching is close to OPT.
2 code implementations • 2 Mar 2022 • Sruthi Gorantla, Amit Deshpande, Anand Louis
Our second random walk-based algorithm samples ex-post group-fair rankings from a distribution $\delta$-close to $D$ in total variation distance and has expected running time $O^*(k^2\ell^2)$, when there is a sufficient gap between the given upper and lower bounds on the group-wise representation.
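The paper's random-walk sampler is what makes this efficient; purely to make the ex-post constraint concrete, here is a naive rejection-sampling baseline that draws uniform rankings until the top-$k$ prefix meets given group-wise lower and upper bounds. All names and the dict-based interface are illustrative assumptions, and this baseline can be exponentially slower than the paper's $O^*(k^2\ell^2)$ algorithm.

```python
import random

def rejection_sample_fair_ranking(items, groups, k, lower, upper,
                                  rng, max_tries=10000):
    """Naive baseline: shuffle uniformly and accept only rankings whose
    top-k prefix satisfies per-group lower/upper representation bounds."""
    for _ in range(max_tries):
        perm = items[:]
        rng.shuffle(perm)
        counts = {}
        for it in perm[:k]:
            counts[groups[it]] = counts.get(groups[it], 0) + 1
        if all(lower.get(g, 0) <= counts.get(g, 0) <= upper.get(g, k)
               for g in set(groups.values())):
            return perm
    raise RuntimeError("no feasible ranking found within max_tries")
```

When the feasible region is a tiny fraction of all permutations, almost every draw is rejected, which is why a direct sampler over the constrained set matters.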
2 code implementations • 24 Sep 2020 • Sruthi Gorantla, Amit Deshpande, Anand Louis
We give a fair ranking algorithm that takes any given ranking and outputs another ranking with simultaneous underranking and group fairness guarantees comparable to the lower bound we prove.
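The paper's algorithm and its underranking bound are not reproduced here; the following is a simple, hypothetical greedy re-ranker that enforces running per-group quotas, included only to illustrate what taking a given ranking and outputting a group-fair one looks like.

```python
import math

def fair_rerank(ranking, groups, shares):
    """Greedy re-ranking sketch: scan output positions; if some group has
    fallen below its running quota floor(share * prefix_length), emit that
    group's best remaining item, otherwise emit the best remaining item."""
    queues = {g: [x for x in ranking if groups[x] == g] for g in shares}
    rank_of = {x: i for i, x in enumerate(ranking)}
    out = []
    counts = {g: 0 for g in shares}
    for pos in range(1, len(ranking) + 1):
        needy = [g for g in shares
                 if queues[g] and counts[g] < math.floor(shares[g] * pos)]
        if needy:
            g = min(needy, key=lambda g: rank_of[queues[g][0]])
        else:
            g = min((g for g in shares if queues[g]),
                    key=lambda g: rank_of[queues[g][0]])
        item = queues[g].pop(0)
        out.append(item)
        counts[g] += 1
    return out
```

Note how items get pushed down (underranked) relative to the input whenever a quota fires; bounding that displacement while guaranteeing group fairness is precisely the trade-off the paper quantifies.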
no code implementations • 14 Jul 2020 • Karthik Abinav Sankararaman, Anand Louis, Navin Goyal
First, for a large and well-studied class of LSEMs, namely ``bow free'' models, we provide a sufficient condition on model parameters under which robust identifiability holds, thereby removing the restriction of paths required by prior work.
1 code implementation • NeurIPS 2019 • Naganand Yadati, Madhav Nimishakavi, Prateek Yadav, Vikram Nitin, Anand Louis, Partha Talukdar
In many real-world network datasets such as co-authorship, co-citation, email communication, etc., relationships are complex and go beyond pairwise.
no code implementations • NeurIPS Workshop Neuro_AI 2019 • Sruthi Gorantla, Anand Louis, Christos H. Papadimitriou, Santosh Vempala, Naganand Yadati
Artificial neural networks (ANNs) lack biological plausibility, chiefly because backpropagation requires a form of plasticity (precise changes to synaptic weights, informed by neural events that occur downstream in the neural circuit) that is profoundly incompatible with the current understanding of the animal brain.
no code implementations • 25 Sep 2019 • Naganand Yadati, Tingran Gao, Shahab Asoodeh, Partha Talukdar, Anand Louis
In this paper, we explore GNNs for graph-based SSL of histograms.
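As a point of contrast with the GNN approach the paper explores, the classical graph-based SSL baseline for histogram-valued labels is label propagation: unlabeled nodes repeatedly average their neighbours' histograms while seed nodes stay fixed. A minimal sketch, with the adjacency-list interface and function name as assumptions:

```python
def propagate_histograms(adj, seed_hists, n_bins, iters=50):
    """Classical label-propagation baseline (not the paper's GNN): each
    unlabeled node's histogram becomes the average of its neighbours'
    histograms; seed nodes keep their observed histograms fixed."""
    n = len(adj)
    # unlabeled nodes start from the uniform histogram
    hists = [seed_hists.get(v, [1.0 / n_bins] * n_bins) for v in range(n)]
    for _ in range(iters):
        new = []
        for v in range(n):
            if v in seed_hists:
                new.append(seed_hists[v])
            elif adj[v]:
                new.append([sum(hists[u][b] for u in adj[v]) / len(adj[v])
                            for b in range(n_bins)])
            else:
                new.append(hists[v])  # isolated node: nothing to average
        hists = new
    return hists
```

Averaging preserves the property that each label sums to one, so the propagated labels remain valid histograms throughout.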
no code implementations • 16 May 2019 • Karthik Abinav Sankararaman, Anand Louis, Navin Goyal
First, we prove that, under a sufficient condition, parameter recovery is stable for a certain sub-class of LSEMs that are ``bow-free'' (Brito and Pearl, 2002).
no code implementations • ICLR 2019 • Naganand Yadati, Vikram Nitin, Madhav Nimishakavi, Prateek Yadav, Anand Louis, Partha Talukdar
Additionally, there is a need to represent the direction from reactants to products.
1 code implementation • 7 Sep 2018 • Naganand Yadati, Madhav Nimishakavi, Prateek Yadav, Vikram Nitin, Anand Louis, Partha Talukdar
In many real-world network datasets such as co-authorship, co-citation, email communication, etc., relationships are complex and go beyond pairwise.
no code implementations • 28 Apr 2018 • Amit Deshpande, Anand Louis, Apoorv Vikram Singh
On the hardness side we show that for any $\alpha' > 1$, there exists an $\alpha \leq \alpha'$, $(\alpha >1)$, and an $\varepsilon_0 > 0$ such that minimizing the $k$-means objective over clusterings that satisfy $\alpha$-center proximity is NP-hard to approximate within a multiplicative $(1+\varepsilon_0)$ factor.
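The $\alpha$-center proximity condition used in this hardness result requires every point to be more than $\alpha$ times farther from every other center than from its own center; a small checker makes the definition concrete (illustrative code, not from the paper).

```python
import math

def satisfies_center_proximity(points, centers, assignment, alpha):
    """Check alpha-center proximity: for each point p assigned to center
    c_i, every other center c_j must satisfy d(p, c_j) > alpha * d(p, c_i)."""
    for p, i in zip(points, assignment):
        own = math.dist(p, centers[i])
        for j, c in enumerate(centers):
            if j != i and math.dist(p, c) <= alpha * own:
                return False
    return True
```

The hardness statement says that even under this strong structural promise on the input clusterings, approximating the $k$-means objective within $(1+\varepsilon_0)$ remains NP-hard for some $\alpha$ arbitrarily close to 1.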