no code implementations • 13 Mar 2024 • Shubham Sharma, Sanghamitra Dutta, Emanuele Albini, Freddy Lecue, Daniele Magazzeni, Manuela Veloso
In this paper, we introduce the problem of feature reselection, where features can be selected efficiently with respect to secondary model performance characteristics even after a feature selection process has already been carried out with respect to a primary objective.
no code implementations • 21 Jul 2023 • Faisal Hamman, Sanghamitra Dutta
This work presents an information-theoretic perspective on group fairness trade-offs in federated learning (FL) with respect to sensitive attributes, such as gender, race, etc.
1 code implementation • 19 May 2023 • Faisal Hamman, Erfaun Noorani, Saumitra Mishra, Daniele Magazzeni, Sanghamitra Dutta
There is emerging interest in generating robust counterfactual explanations that remain valid even if the model is updated or changed slightly.
no code implementations • 2 Feb 2023 • Akshaj Kumar Veldanda, Ivan Brugere, Sanghamitra Dutta, Alan Mishler, Siddharth Garg
Recent work has sought to train fair models without access to sensitive attributes in the training data.
no code implementations • 3 Nov 2022 • Faisal Hamman, Jiahao Chen, Sanghamitra Dutta
In this paper, we first demonstrate that simply querying for fairness metrics, such as statistical parity and equalized odds, can leak the protected attributes of individuals to the model developers.
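For reference, the two metrics named above are group-level statistics of the model's predictions. The sketch below is a minimal numpy illustration (the function names, variable names, and toy audit setup are my own, not the paper's): each metric is a deterministic function of the hidden protected attribute, which is what makes repeated metric queries informative about it.

```python
import numpy as np

def statistical_parity_difference(y_pred, group):
    """P(Y_hat = 1 | group = 1) - P(Y_hat = 1 | group = 0)."""
    return y_pred[group == 1].mean() - y_pred[group == 0].mean()

def equalized_odds_difference(y_true, y_pred, group):
    """Largest gap, over y in {0, 1}, between the groups' rates P(Y_hat = 1 | Y = y)."""
    gaps = []
    for y in (0, 1):
        rate_1 = y_pred[(y_true == y) & (group == 1)].mean()
        rate_0 = y_pred[(y_true == y) & (group == 0)].mean()
        gaps.append(abs(rate_1 - rate_0))
    return max(gaps)

# Toy audit query: the developer submits only y_pred and receives the metric value,
# yet that value depends directly on the (hidden) protected attribute `group`.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)    # protected attribute, hidden from the developer
y_true = rng.integers(0, 2, size=1000)
y_pred = rng.integers(0, 2, size=1000)   # model predictions being audited
print(statistical_parity_difference(y_pred, group))
print(equalized_odds_difference(y_true, y_pred, group))
```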
no code implementations • 6 Jul 2022 • Sanghamitra Dutta, Jason Long, Saumitra Mishra, Cecilia Tilli, Daniele Magazzeni
In this work, we propose a novel strategy -- that we call RobX -- to generate robust counterfactuals for tree-based ensembles, e.g., XGBoost.
no code implementations • 29 Jun 2022 • Akshaj Kumar Veldanda, Ivan Brugere, Jiahao Chen, Sanghamitra Dutta, Alan Mishler, Siddharth Garg
We further show that MinDiff optimization is very sensitive to the choice of batch size in the under-parameterized regime.
no code implementations • 16 Jun 2022 • Sanghamitra Dutta, Praveen Venkatesh, Pulkit Grover
If we have access to the decision-making model, one potential approach (inspired by intervention-based approaches in the explainability literature) is to vary each individual feature (while keeping the others fixed) and use the resulting change in disparity to quantify its contribution.
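A minimal sketch of that intervention idea, under my own simplifying assumptions (a toy linear-threshold model, statistical parity as the disparity measure, and a random permutation as the way of "varying" a feature; none of these choices are taken from the paper):

```python
import numpy as np

# Toy decision model over features [income, zip_risk, age_bucket].
def model(X):
    return (0.6 * X[:, 0] - 0.8 * X[:, 1] + 0.1 * X[:, 2] > 0.5).astype(int)

def disparity(y_pred, group):
    # Statistical parity gap, used here as the disparity measure.
    return abs(y_pred[group == 1].mean() - y_pred[group == 0].mean())

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, size=n)
X = np.column_stack([
    rng.normal(1.0 + 0.5 * group, 1.0, n),   # income, mildly correlated with group
    rng.normal(0.5 + 0.7 * group, 1.0, n),   # zip_risk, strongly correlated with group
    rng.normal(0.0, 1.0, n),                 # age_bucket, independent of group
])

baseline = disparity(model(X), group)
for j, name in enumerate(["income", "zip_risk", "age_bucket"]):
    X_perturbed = X.copy()
    # Intervene on feature j alone, keeping the other features fixed.
    X_perturbed[:, j] = rng.permutation(X[:, j])
    drop = baseline - disparity(model(X_perturbed), group)
    print(f"{name}: disparity reduction when intervened on = {drop:.3f}")
```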
1 code implementation • NeurIPS 2021 • Praveen Venkatesh, Sanghamitra Dutta, Neil Mehta, Pulkit Grover
Motivated by neuroscientific and clinical applications, we empirically examine whether observational measures of information flow can suggest interventions.
no code implementations • 30 Oct 2021 • Saumitra Mishra, Sanghamitra Dutta, Jason Long, Daniele Magazzeni
There exist several methods that aim to address the crucial task of understanding the behaviour of AI/ML models.
no code implementations • NAACL (TextGraphs) 2021 • Sanghamitra Dutta, Liang Ma, Tanay Kumar Saha, Di Lu, Joel Tetreault, Alejandro Jaimes
Recent works show that the graph structure of sentences, generated from dependency parsers, has potential for improving event detection.
no code implementations • 14 Jun 2020 • Sanghamitra Dutta, Praveen Venkatesh, Piotr Mardziel, Anupam Datta, Pulkit Grover
While quantifying disparity is essential, the needs of an occupation may sometimes require the use of certain critical features, so that any disparity that can be explained by those features might need to be exempted.
no code implementations • 23 Mar 2020 • Sanghamitra Dutta, Jianyu Wang, Gauri Joshi
Distributed Stochastic Gradient Descent (SGD), when run in a synchronous manner, suffers from delays in runtime as it waits for the slowest workers (stragglers).
no code implementations • ICML 2020 • Sanghamitra Dutta, Dennis Wei, Hazar Yueksel, Pin-Yu Chen, Sijia Liu, Kush R. Varshney
Moreover, the same classifier exhibits no trade-off with respect to ideal distributions, while exhibiting a trade-off when accuracy is measured with respect to the given (possibly biased) dataset.
no code implementations • 4 Mar 2019 • Sanghamitra Dutta, Ziqian Bai, Tze Meng Low, Pulkit Grover
This work proposes the first strategy to make distributed training of neural networks resilient to computing errors, a problem that has remained unsolved despite being first posed in 1956 by von Neumann.
no code implementations • 27 Nov 2018 • Sanghamitra Dutta, Ziqian Bai, Haewon Jeong, Tze Meng Low, Pulkit Grover
First, we propose a novel coded matrix multiplication technique called Generalized PolyDot codes that advances on existing methods for coded matrix multiplication under storage and communication constraints.
no code implementations • 3 Mar 2018 • Sanghamitra Dutta, Gauri Joshi, Soumyadip Ghosh, Parijat Dube, Priya Nagpurkar
Distributed Stochastic Gradient Descent (SGD), when run in a synchronous manner, suffers from delays in waiting for the slowest learners (stragglers).
3 code implementations • 31 Jan 2018 • Sanghamitra Dutta, Mohammad Fahim, Farzin Haddadpour, Haewon Jeong, Viveck Cadambe, Pulkit Grover
We provide novel coded computation strategies for distributed matrix-matrix products that outperform the recent "Polynomial code" constructions in recovery threshold, i.e., the required number of successful workers.
Information Theory • Distributed, Parallel, and Cluster Computing
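To make the notion of recovery threshold concrete, here is a minimal numpy sketch of the baseline Polynomial-code construction that this paper improves on (illustrative parameters of my own choosing, not the Generalized PolyDot construction): A is split into 2 row blocks and B into 2 column blocks, 5 workers each evaluate the encoded product at one point, and any 4 outputs suffice to interpolate the full product.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((3, 4))
A0, A1 = A[:2], A[2:]          # row blocks of A
B0, B1 = B[:, :2], B[:, 2:]    # column blocks of B

n_workers, threshold = 5, 4
xs = np.arange(1, n_workers + 1, dtype=float)

# Worker k computes (A0 + A1*x_k) @ (B0 + B1*x_k^2), a degree-3 matrix polynomial
# whose coefficients are exactly the four block products A_i @ B_j.
outputs = [(A0 + A1 * x) @ (B0 + B1 * x**2) for x in xs]

# Fusion step: interpolate from any `threshold` = 4 outputs (worker 2 straggles here).
idx = [0, 1, 3, 4]
V = np.vander(xs[idx], N=threshold, increasing=True)        # rows: 1, x, x^2, x^3
stacked = np.stack([outputs[k] for k in idx])                # shape (4, 2, 2)
coeffs = np.einsum('ij,jkl->ikl', np.linalg.inv(V), stacked)
C = np.block([[coeffs[0], coeffs[2]],                        # A0@B0, A0@B1
              [coeffs[1], coeffs[3]]])                       # A1@B0, A1@B1
assert np.allclose(C, A @ B)
```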
no code implementations • NeurIPS 2016 • Sanghamitra Dutta, Viveck Cadambe, Pulkit Grover
The fusion node can exploit this redundancy by completing the computation using outputs from only a subset of the processors, ignoring the stragglers.
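As a toy illustration of that redundancy (my own minimal construction, not the paper's coding scheme): three processors hold coded row blocks of A, and the fusion node reconstructs A @ x from any two of their outputs, so the slowest processor can be ignored.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 6))
x = rng.standard_normal(6)

A1, A2 = A[:2], A[2:]              # split A into two row blocks
coded_blocks = [A1, A2, A1 + A2]   # the third block is a parity of the first two

# Each processor computes its coded block times x (in parallel in a real system).
worker_outputs = [B @ x for B in coded_blocks]

# Suppose processor 0 is the straggler: the fusion node recovers A1 @ x from the
# parity output and processor 1's output, without waiting for processor 0.
y2 = worker_outputs[1]
y1 = worker_outputs[2] - y2        # (A1 + A2) @ x - A2 @ x = A1 @ x
recovered = np.concatenate([y1, y2])

assert np.allclose(recovered, A @ x)
```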