Search Results for author: Mohammad Mahdi Khalili

Found 14 papers, 3 papers with code

Counterfactually Fair Representation

1 code implementation NeurIPS 2023 Zhiqun Zuo, Mohammad Mahdi Khalili, Xueru Zhang

It was shown by Kusner et al. (2017) that a sufficient condition for satisfying counterfactual fairness (CF) is to not use features that are descendants of sensitive attributes in the causal graph.

Counterfactual Fairness

Loss Balancing for Fair Supervised Learning

1 code implementation 7 Nov 2023 Mohammad Mahdi Khalili, Xueru Zhang, Mahed Abroshan

Imposing the equalized loss (EL) notion on the learning process leads to a non-convex optimization problem even if the loss function is convex, and existing fair learning algorithms cannot be readily adapted to find a fair predictor under the EL constraint.

Face Recognition, Fairness

Federated Learning with Reduced Information Leakage and Computation

no code implementations 10 Oct 2023 Tongxin Yin, Xueru Zhang, Mohammad Mahdi Khalili, Mingyan Liu

Federated learning (FL) is a distributed learning paradigm that allows multiple decentralized clients to collaboratively learn a common model without sharing local data.

Federated Learning, Privacy Preserving
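The collaborative paradigm described in the abstract can be sketched with a generic FedAvg-style loop: the server broadcasts a model, each client takes a few gradient steps on its own data, and the server averages the returned models. This is an illustration of the FL setup, not the reduced-leakage method proposed in the paper; the least-squares objective, learning rate, and synthetic client data are all invented for the example.

```python
import numpy as np

def local_update(w, X, y, lr=0.1, epochs=5):
    """A client's local gradient steps on its own least-squares data."""
    w = w.copy()
    for _ in range(epochs):
        w -= lr * (X.T @ (X @ w - y) / len(y))
    return w

def fed_avg(client_data, rounds=50, dim=2):
    """Server loop: broadcast the model, collect updates, average them.

    Raw (X, y) never leaves a client; only model vectors are exchanged.
    """
    w = np.zeros(dim)
    sizes = np.array([len(y) for _, y in client_data], dtype=float)
    for _ in range(rounds):
        updates = [local_update(w, X, y) for X, y in client_data]
        w = np.average(updates, axis=0, weights=sizes)  # size-weighted mean
    return w

# Three clients whose local data share one underlying linear model.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w))

w = fed_avg(clients)
```

With noiseless data from a shared linear model, the averaged model recovers the common weights even though no client ever shares its samples.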

An Information-theoretical Approach to Semi-supervised Learning under Covariate-shift

no code implementations 24 Feb 2022 Gholamali Aminian, Mahed Abroshan, Mohammad Mahdi Khalili, Laura Toni, Miguel R. D. Rodrigues

A common assumption in semi-supervised learning is that the labeled, unlabeled, and test data are drawn from the same distribution.

Fair Sequential Selection Using Supervised Learning Models

1 code implementation NeurIPS 2021 Mohammad Mahdi Khalili, Xueru Zhang, Mahed Abroshan

This observation implies that the fairness notions used in classification problems are not suitable for a selection problem where the applicants compete for a limited number of positions.

Fairness

Interpreting Black-boxes Using Primitive Parameterized Functions

no code implementations 29 Sep 2021 Mahed Abroshan, Saumitra Mishra, Mohammad Mahdi Khalili

One approach for interpreting black-box machine learning models is to find a global approximation of the model using simple interpretable functions, which is called a metamodel (a model of the model).

Feature Importance, Symbolic Regression

Non-convex Optimization for Learning a Fair Predictor under Equalized Loss Fairness Constraint

no code implementations 29 Sep 2021 Mohammad Mahdi Khalili, Xueru Zhang, Mahed Abroshan, Iman Vakilinia

In general, finding a fair predictor leads to a constrained optimization problem, and depending on the fairness notion, it may be non-convex.

Face Recognition, Fairness

RewardRating: A Mechanism Design Approach to Improve Rating Systems

no code implementations 26 Jan 2021 Iman Vakilinia, Peyman Faizian, Mohammad Mahdi Khalili

Our proposed mechanism RewardRating is inspired by the stock market model in which users can invest in their ratings for services and receive a reward based on future ratings.

Computer Science and Game Theory

Improving Fairness and Privacy in Selection Problems

no code implementations 7 Dec 2020 Mohammad Mahdi Khalili, Xueru Zhang, Mahed Abroshan, Somayeh Sojoudi

In this work, we study the possibility of using a differentially private exponential mechanism as a post-processing step to improve both fairness and privacy of supervised learning models.

Decision Making, Fairness
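The post-processing idea in the abstract builds on the standard (McSherry–Talwar) exponential mechanism, which selects an item with probability proportional to the exponential of its score. The sketch below is the generic mechanism applied to picking one applicant from model-predicted scores, not the paper's exact construction; the scores, epsilon, and sensitivity values are illustrative.

```python
import numpy as np

def exponential_mechanism(scores, epsilon, sensitivity=1.0, rng=None):
    """Select one index with probability proportional to
    exp(epsilon * score / (2 * sensitivity)).

    This selection rule satisfies epsilon-differential privacy.
    """
    rng = rng or np.random.default_rng()
    scores = np.asarray(scores, dtype=float)
    # Subtract the max for numerical stability; the distribution is unchanged.
    logits = epsilon * (scores - scores.max()) / (2.0 * sensitivity)
    probs = np.exp(logits)
    probs /= probs.sum()
    return rng.choice(len(scores), p=probs)

rng = np.random.default_rng(42)
scores = [0.2, 0.9, 0.5]  # hypothetical model-predicted qualification scores
picks = [exponential_mechanism(scores, epsilon=5.0, rng=rng) for _ in range(2000)]
```

Higher epsilon concentrates the selection on the top-scoring applicant (less privacy, more utility); lower epsilon flattens the distribution toward a uniform random pick.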

Recycled ADMM: Improving the Privacy and Accuracy of Distributed Algorithms

no code implementations 8 Oct 2019 Xueru Zhang, Mohammad Mahdi Khalili, Mingyan Liu

It can be shown that the privacy-accuracy tradeoff can be improved significantly compared with conventional ADMM.

Group Retention when Using Machine Learning in Sequential Decision Making: the Interplay between User Dynamics and Fairness

no code implementations NeurIPS 2019 Xueru Zhang, Mohammad Mahdi Khalili, Cem Tekin, Mingyan Liu

Machine Learning (ML) models trained on data from multiple demographic groups can inherit representation disparity (Hashimoto et al., 2018) that may exist in the data: the model may be less favorable to groups contributing less to the training process; this in turn can degrade population retention in these groups over time, and exacerbate representation disparity in the long run.

Decision Making, Fairness

Recycled ADMM: Improve Privacy and Accuracy with Less Computation in Distributed Algorithms

no code implementations 7 Oct 2018 Xueru Zhang, Mohammad Mahdi Khalili, Mingyan Liu

Alternating direction method of multipliers (ADMM) is a powerful method for solving decentralized convex optimization problems.

Improving the Privacy and Accuracy of ADMM-Based Distributed Algorithms

no code implementations ICML 2018 Xueru Zhang, Mohammad Mahdi Khalili, Mingyan Liu

Alternating direction method of multipliers (ADMM) is a popular method used to design distributed versions of a machine learning algorithm, whereby local computations are performed on local data with the outputs exchanged among neighbors in an iterative fashion.
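The distributed setup this line of work builds on can be made concrete with the textbook global-consensus form of ADMM: each node solves a small local subproblem, an averaging step merges the results, and dual variables drive the local models to agree. This is a plain, non-private sketch for a shared least-squares model, not the privacy-preserving or recycled variants studied in these papers; the data and penalty parameter are invented for the example.

```python
import numpy as np

def consensus_admm(data, rho=1.0, iters=100, dim=2):
    """Global-consensus ADMM for distributed least squares (textbook sketch)."""
    n = len(data)
    x = np.zeros((n, dim))   # local primal variables, one per node
    z = np.zeros(dim)        # global consensus variable
    u = np.zeros((n, dim))   # scaled dual variables
    for _ in range(iters):
        for i, (A, b) in enumerate(data):
            # Closed-form local update for f_i(x) = 0.5 * ||A x - b||^2
            x[i] = np.linalg.solve(A.T @ A + rho * np.eye(dim),
                                   A.T @ b + rho * (z - u[i]))
        z = (x + u).mean(axis=0)  # averaging step done at the aggregator
        u += x - z                # dual update enforcing x_i = z
    return z

# Four nodes whose local data share one underlying linear model.
rng = np.random.default_rng(1)
true_w = np.array([1.5, -0.5])
data = []
for _ in range(4):
    A = rng.normal(size=(40, 2))
    data.append((A, A @ true_w))

w = consensus_admm(data)
```

In the private variants, noise is added to the exchanged iterates; the recycled scheme then reuses earlier computation to spend less of the privacy budget per iteration.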
