1 code implementation • 7 Nov 2023 • Mohammad Mahdi Khalili, Xueru Zhang, Mahed Abroshan
Imposing EL on the learning process leads to a non-convex optimization problem even when the loss function is convex, and existing fair learning algorithms cannot be readily adapted to find a fair predictor under the EL constraint.
1 code implementation • 22 Mar 2023 • Alireza Abdollahpourrostam, Mahed Abroshan, Seyed-Mohsen Moosavi-Dezfooli
Our proposed attacks are also suitable for evaluating the robustness of large models and can be used to perform adversarial training (AT) to achieve state-of-the-art robustness to minimal l2 adversarial perturbations.
no code implementations • 2 Mar 2023 • Mahed Abroshan, Michael Burkhart, Oscar Giles, Sam Greenbury, Zoe Kourtzi, Jack Roberts, Mihaela van der Schaar, Jannetta S Steyn, Alan Wilson, May Yong
Machine learning techniques are effective for building predictive models because they identify patterns in large datasets.
no code implementations • 9 Feb 2023 • Mahed Abroshan, Saumitra Mishra, Mohammad Mahdi Khalili
This composition can be represented in the form of a tree.
no code implementations • 24 Feb 2022 • Gholamali Aminian, Mahed Abroshan, Mohammad Mahdi Khalili, Laura Toni, Miguel R. D. Rodrigues
A common assumption in semi-supervised learning is that the labeled, unlabeled, and test data are drawn from the same distribution.
1 code implementation • NeurIPS 2021 • Mohammad Mahdi Khalili, Xueru Zhang, Mahed Abroshan
This observation implies that the fairness notions used in classification problems are not suitable for a selection problem where the applicants compete for a limited number of positions.
no code implementations • 29 Sep 2021 • Mohammad Mahdi Khalili, Xueru Zhang, Mahed Abroshan, Iman Vakilinia
In general, finding a fair predictor leads to a constrained optimization problem, and depending on the fairness notion, it may be non-convex.
no code implementations • 29 Sep 2021 • Mahed Abroshan, Saumitra Mishra, Mohammad Mahdi Khalili
One approach for interpreting black-box machine learning models is to find a global approximation of the model using simple interpretable functions, which is called a metamodel (a model of the model).
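The metamodel idea can be illustrated with a minimal sketch: train an interpretable surrogate on the black box's own predictions and measure how faithfully it mimics them. The random-forest black box, shallow-tree surrogate, and synthetic data below are illustrative assumptions, not the paper's actual method or metamodel class.

```python
# Hedged sketch: approximating a black-box model with an interpretable
# surrogate ("metamodel"). The decision-tree surrogate is an illustrative
# choice, not the paper's metamodel family.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Fit a shallow tree to the black box's *predictions*, not the true labels:
# the surrogate models the model, not the data.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the metamodel agrees with the black box.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"fidelity: {fidelity:.2f}")
```

A high fidelity score means the simple tree can stand in for the black box as a global explanation; a low score signals that a richer interpretable function class is needed.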
no code implementations • 8 Sep 2021 • Mahed Abroshan, Kai Hou Yip, Cem Tekin, Mihaela van der Schaar
Secondly, such datasets are usually imperfect, and are additionally marred by missing values among the feature attributes.
no code implementations • 7 Dec 2020 • Mohammad Mahdi Khalili, Xueru Zhang, Mahed Abroshan, Somayeh Sojoudi
In this work, we study the possibility of using a differentially private exponential mechanism as a post-processing step to improve both fairness and privacy of supervised learning models.
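For context, the exponential mechanism selects an output with probability proportional to the exponential of its utility, which is what makes it usable as a privacy-preserving post-processing step. Below is a minimal, generic sketch of the mechanism itself; the candidate set, toy utility function, and sensitivity value are illustrative assumptions and do not reflect the paper's specific construction.

```python
# Hedged sketch of the differentially private exponential mechanism.
# Selecting a decision threshold with a toy utility is an illustrative
# setup, not the paper's post-processing procedure.
import numpy as np

def exponential_mechanism(candidates, utility, sensitivity, epsilon, rng):
    """Sample one candidate with probability proportional to
    exp(epsilon * utility / (2 * sensitivity))."""
    scores = np.array([utility(c) for c in candidates], dtype=float)
    logits = epsilon * scores / (2.0 * sensitivity)
    logits -= logits.max()          # shift for numerical stability
    probs = np.exp(logits)
    probs /= probs.sum()
    return candidates[rng.choice(len(candidates), p=probs)]

rng = np.random.default_rng(0)
candidates = [0.2, 0.4, 0.6, 0.8]
# Toy utility: prefer thresholds near 0.6; sensitivity assumed to be 1.
utility = lambda t: -10.0 * abs(t - 0.6)
chosen = exponential_mechanism(candidates, utility,
                               sensitivity=1.0, epsilon=2.0, rng=rng)
print(chosen)
```

Larger epsilon concentrates the sampling on high-utility candidates (less privacy, more accuracy); smaller epsilon flattens the distribution toward uniform.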
1 code implementation • 18 May 2017 • Mahed Abroshan, Ramji Venkataramanan, Albert Guillen i Fabregas
Consider two remote nodes, each having a binary sequence.
Information Theory