1 code implementation • NeurIPS 2023 • Zhiqun Zuo, Mohammad Mahdi Khalili, Xueru Zhang
It was shown in \cite{kusner2017counterfactual} that a sufficient condition for satisfying CF is to \textbf{not} use features that are descendants of sensitive attributes in the causal graph.
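The sufficient condition above can be sketched in a few lines. This is an illustrative example only (the toy causal graph, node names, and helper functions are assumptions, not from the paper): given a causal DAG, keep only the features that are not descendants of the sensitive attribute.

```python
from collections import deque

# Toy causal DAG as an adjacency list (edges are illustrative assumptions):
# sensitive attribute A -> X1 -> X2; X3 has no causal path from A.
dag = {"A": ["X1"], "X1": ["X2"], "X2": [], "U": ["X3"], "X3": []}

def descendants(graph, node):
    """All nodes reachable from `node` via directed edges (BFS)."""
    seen, queue = set(), deque(graph[node])
    while queue:
        v = queue.popleft()
        if v not in seen:
            seen.add(v)
            queue.extend(graph[v])
    return seen

def cf_safe_features(graph, sensitive, features):
    """Features that are NOT descendants of the sensitive attribute."""
    blocked = descendants(graph, sensitive)
    return [f for f in features if f not in blocked]

safe = cf_safe_features(dag, "A", ["X1", "X2", "X3"])
# X1 and X2 are descendants of A, so only X3 survives.
```

Dropping X1 and X2 satisfies the sufficient condition, at the cost of discarding any predictive signal they carry.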
1 code implementation • 7 Nov 2023 • Mohammad Mahdi Khalili, Xueru Zhang, Mahed Abroshan
Imposing EL on the learning process leads to a non-convex optimization problem even if the loss function is convex, and existing fair learning algorithms cannot be readily adapted to find a fair predictor under the EL constraint.
no code implementations • 10 Oct 2023 • Tongxin Yin, Xueru Zhang, Mohammad Mahdi Khalili, Mingyan Liu
Federated learning (FL) is a distributed learning paradigm that allows multiple decentralized clients to collaboratively learn a common model without sharing local data.
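The FL paradigm can be sketched with a minimal FedAvg-style loop (this is a generic illustration, not the paper's algorithm; the linear model, learning rate, and client data are assumptions): each client runs a few local gradient steps on its private data, and the server only averages the resulting weights.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, steps=5):
    """A few local least-squares gradient steps; raw data never leaves the client."""
    w = w.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Two clients with private data drawn around the same true weights.
w_true = np.array([1.0, -2.0])
clients = []
for _ in range(2):
    X = rng.normal(size=(50, 2))
    y = X @ w_true + 0.01 * rng.normal(size=50)
    clients.append((X, y))

w_global = np.zeros(2)
for _ in range(20):                      # communication rounds
    local = [local_update(w_global, X, y) for X, y in clients]
    w_global = np.mean(local, axis=0)    # server-side aggregation
```

Only model weights cross the network in each round, which is the defining property of the FL setup described above.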
no code implementations • 9 Feb 2023 • Mahed Abroshan, Saumitra Mishra, Mohammad Mahdi Khalili
This composition can be represented in the form of a tree.
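As a rough illustration of the tree representation (the operations, class design, and example function here are assumptions, not the paper's construction): a composed function such as f(x) = max(x0, x1) + exp(x2) can be stored as an expression tree whose internal nodes are simple functions and whose leaves are input features.

```python
import math

class Node:
    """One node of a composition tree: a simple op over child subtrees."""
    def __init__(self, op, children=None, index=None):
        self.op, self.children, self.index = op, children or [], index

    def evaluate(self, x):
        if self.op == "feature":            # leaf: read an input feature
            return x[self.index]
        vals = [c.evaluate(x) for c in self.children]
        if self.op == "add":
            return sum(vals)
        if self.op == "max":
            return max(vals)
        if self.op == "exp":
            return math.exp(vals[0])
        raise ValueError(f"unknown op: {self.op}")

# Tree for f(x) = max(x0, x1) + exp(x2)
tree = Node("add", [
    Node("max", [Node("feature", index=0), Node("feature", index=1)]),
    Node("exp", [Node("feature", index=2)]),
])
value = tree.evaluate([1.0, 3.0, 0.0])  # max(1, 3) + exp(0) = 4.0
```

Each subtree is itself an interpretable function, which is what makes the tree form attractive for composing simple pieces.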
no code implementations • 24 Feb 2022 • Gholamali Aminian, Mahed Abroshan, Mohammad Mahdi Khalili, Laura Toni, Miguel R. D. Rodrigues
A common assumption in semi-supervised learning is that the labeled, unlabeled, and test data are drawn from the same distribution.
1 code implementation • NeurIPS 2021 • Mohammad Mahdi Khalili, Xueru Zhang, Mahed Abroshan
This observation implies that the fairness notions used in classification problems are not suitable for a selection problem where the applicants compete for a limited number of positions.
no code implementations • 29 Sep 2021 • Mahed Abroshan, Saumitra Mishra, Mohammad Mahdi Khalili
One approach for interpreting black-box machine learning models is to find a global approximation of the model using simple interpretable functions, which is called a metamodel (a model of the model).
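A minimal sketch of the metamodel idea (generic illustration, not the paper's method; the black box, sampling scheme, and linear surrogate are assumptions): query the opaque model on sampled inputs, then fit a simple interpretable function to its predictions.

```python
import numpy as np

rng = np.random.default_rng(1)

def black_box(X):
    """Stand-in for an opaque model; in practice only predictions are available."""
    return 3.0 * X[:, 0] - 1.0 * X[:, 1] + 0.05 * np.sin(5 * X[:, 0])

# Sample the input space and query the black box.
X = rng.uniform(-1, 1, size=(500, 2))
y = black_box(X)

# Fit an interpretable global surrogate (here: linear model with intercept).
A = np.hstack([X, np.ones((len(X), 1))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
```

The fitted coefficients approximately recover the dominant linear behavior of the black box, and the surrogate can then be inspected directly.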
no code implementations • 29 Sep 2021 • Mohammad Mahdi Khalili, Xueru Zhang, Mahed Abroshan, Iman Vakilinia
In general, finding a fair predictor leads to a constrained optimization problem, and depending on the fairness notion, it may be non-convex.
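One common way to handle such a constrained problem is a penalty method, sketched below (the toy data, group-gap constraint, and step size are assumptions, not the paper's formulation; in this linear toy case the penalized objective happens to stay convex, though fairness constraints generally break convexity):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
g = rng.integers(0, 2, size=n)                  # binary group membership
X = np.c_[rng.normal(size=n), g.astype(float)]  # one group-correlated feature
y = X[:, 0] + 0.5 * g + 0.1 * rng.normal(size=n)

lam, lr = 10.0, 0.02
w = np.zeros(2)
for _ in range(3000):
    pred = X @ w
    # Fairness violation: gap between group-wise mean predictions.
    gap = pred[g == 1].mean() - pred[g == 0].mean()
    grad_loss = 2 * X.T @ (pred - y) / n
    grad_gap = X[g == 1].mean(axis=0) - X[g == 0].mean(axis=0)
    # Gradient step on loss + lam * gap^2.
    w -= lr * (grad_loss + 2 * lam * gap * grad_gap)
```

Raising `lam` drives the group gap toward zero at some cost in accuracy, which is the tradeoff the constrained formulation makes explicit.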
no code implementations • 26 Jan 2021 • Iman Vakilinia, Peyman Faizian, Mohammad Mahdi Khalili
Our proposed mechanism \textit{RewardRating} is inspired by the stock market model in which users can invest in their ratings for services and receive a reward based on future ratings.
Computer Science and Game Theory
no code implementations • 7 Dec 2020 • Mohammad Mahdi Khalili, Xueru Zhang, Mahed Abroshan, Somayeh Sojoudi
In this work, we study the possibility of using a differentially private exponential mechanism as a post-processing step to improve both fairness and privacy of supervised learning models.
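The exponential mechanism at the core of that idea can be sketched as follows (a generic illustration of the mechanism itself, not the paper's post-processing step; the candidate scores and epsilon are assumptions): each candidate output is sampled with probability proportional to exp(eps * score / (2 * sensitivity)).

```python
import numpy as np

rng = np.random.default_rng(0)

def exponential_mechanism(scores, eps, sensitivity=1.0):
    """Sample one candidate index with probability ∝ exp(eps·score / (2Δ))."""
    logits = eps * np.asarray(scores, dtype=float) / (2 * sensitivity)
    logits -= logits.max()               # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    return rng.choice(len(scores), p=probs)

# Example: three candidate outputs scored by (negative) utility loss;
# candidate 1 has the best utility and is chosen most often.
scores = [-0.30, -0.05, -0.20]
choice = exponential_mechanism(scores, eps=8.0)
```

Higher-scoring candidates are exponentially more likely, but every candidate retains nonzero probability, which is what yields the differential privacy guarantee.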
no code implementations • 8 Oct 2019 • Xueru Zhang, Mohammad Mahdi Khalili, Mingyan Liu
It can be shown that the privacy-accuracy tradeoff can be improved significantly compared with that of conventional ADMM.
no code implementations • NeurIPS 2019 • Xueru Zhang, Mohammad Mahdi Khalili, Cem Tekin, Mingyan Liu
Machine Learning (ML) models trained on data from multiple demographic groups can inherit representation disparity (Hashimoto et al., 2018) that may exist in the data: the model may be less favorable to groups that contribute less to the training process. This, in turn, can degrade population retention in these groups over time and exacerbate representation disparity in the long run.
no code implementations • 7 Oct 2018 • Xueru Zhang, Mohammad Mahdi Khalili, Mingyan Liu
Alternating direction method of multiplier (ADMM) is a powerful method to solve decentralized convex optimization problems.
no code implementations • ICML 2018 • Xueru Zhang, Mohammad Mahdi Khalili, Mingyan Liu
Alternating direction method of multiplier (ADMM) is a popular method used to design distributed versions of a machine learning algorithm, whereby local computations are performed on local data with the output exchanged among neighbors in an iterative fashion.
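The iterate-and-exchange pattern can be sketched with textbook consensus ADMM on a toy problem (illustrative only, not the paper's privacy-preserving variant; the local objectives and rho are assumptions): N agents minimize sum_i (x - a_i)^2 / 2 subject to all local copies agreeing with a global variable z, whose optimum is the average of the a_i.

```python
import numpy as np

a = np.array([1.0, 2.0, 6.0])   # private local data, one scalar per agent
rho = 1.0                       # ADMM penalty parameter
x = np.zeros_like(a)            # local primal variables
u = np.zeros_like(a)            # scaled dual variables
z = 0.0                        # global consensus variable

for _ in range(50):
    # Local x-update: argmin_x (x - a_i)^2/2 + (rho/2)(x - z + u_i)^2,
    # computed in closed form on each agent's own data.
    x = (a + rho * (z - u)) / (1 + rho)
    # Global z-update: only the quantities x_i + u_i are exchanged.
    z = np.mean(x + u)
    # Dual update.
    u = u + x - z
```

Only `x_i + u_i` crosses the network each round, matching the local-computation / neighbor-exchange structure described in the abstract.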