1 code implementation • 21 Oct 2024 • Mehdi Yazdani-Jahromi, Ali Khodabandeh Yalabadi, Amirarsalan Rajabi, Aida Tayebi, Ivan Garibay, Ozlem Ozmen Garibay
The persistent challenge of bias in machine learning models necessitates robust solutions to ensure parity and equal treatment across diverse groups, particularly in classification tasks.
no code implementations • 18 Sep 2022 • Amirarsalan Rajabi, Mehdi Yazdani-Jahromi, Ozlem Ozmen Garibay, Gita Sukthankar
In this study, we present a fast and effective model that de-biases an image dataset through reconstruction while minimizing the statistical dependence between the intended variables.
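The core idea of penalizing statistical dependence can be illustrated with a toy objective. This is not the paper's implementation; it is a minimal sketch, assuming a reconstruction loss plus a squared Pearson-correlation penalty between a one-dimensional learned representation `z` and a protected attribute `s` (function names are hypothetical).

```python
import numpy as np

def correlation_penalty(z, s):
    """Squared Pearson correlation between a 1-D representation z and a
    protected attribute s -- zero when they are linearly uncorrelated."""
    z = (z - z.mean()) / (z.std() + 1e-8)
    s = (s - s.mean()) / (s.std() + 1e-8)
    return float(np.mean(z * s) ** 2)

def debias_objective(x, x_recon, z, s, lam=1.0):
    """Toy de-biasing objective: reconstruction error plus a weighted
    dependence penalty between the representation and the attribute."""
    recon = float(np.mean((x - x_recon) ** 2))
    return recon + lam * correlation_penalty(z, s)
```

Minimizing such an objective trades reconstruction fidelity against (linear) independence from the protected attribute; richer dependence measures would capture nonlinear relationships as well.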
1 code implementation • 15 Mar 2022 • Mehdi Yazdani-Jahromi, Amirarsalan Rajabi, Ali Khodabandeh Yalabadi, Aida Tayebi, Ozlem Ozmen Garibay
There is abundant evidence that these models can contain or even amplify the bias present in the data on which they are trained, owing to their objective functions and learning algorithms. Researchers have addressed this issue from several directions, such as transforming the data to be statistically independent of protected attributes, or adversarial training that restricts the ability of a competitor network to recover the protected attribute, thereby enforcing parity.
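The parity these methods target can be quantified with a simple group-fairness metric. As a hedged illustration (not drawn from the paper itself), the sketch below computes the statistical parity difference: the gap in positive-prediction rates between the two groups of a binary protected attribute.

```python
import numpy as np

def statistical_parity_difference(y_pred, s):
    """Gap in positive-prediction rates between the groups s == 1 and
    s == 0; zero indicates demographic parity for binary predictions."""
    y_pred = np.asarray(y_pred, dtype=float)
    s = np.asarray(s)
    return float(y_pred[s == 1].mean() - y_pred[s == 0].mean())
```

Pre-processing and adversarial approaches alike can be evaluated by checking how close this difference is to zero on held-out data.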
1 code implementation • 2 Sep 2021 • Amirarsalan Rajabi, Ozlem Ozmen Garibay
In the unconstrained case, i.e. when the model is trained only in the first phase and is meant solely to generate accurate data following the same joint probability distribution as the real data, the results show that the model outperforms state-of-the-art GANs proposed in the literature for producing synthetic tabular data.
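Whether synthetic tabular data follows the real data's distribution is typically checked by comparing summary statistics. The sketch below is a minimal, assumed fidelity check (not the paper's evaluation protocol): it compares per-column means and standard deviations of real and synthetic tables.

```python
import numpy as np

def marginal_fidelity_gap(real, synth):
    """Crude fidelity score for synthetic tabular data: mean absolute
    difference between per-column means and standard deviations.
    Zero means the first two moments of every column match exactly."""
    real = np.asarray(real, dtype=float)
    synth = np.asarray(synth, dtype=float)
    d_mean = np.abs(real.mean(axis=0) - synth.mean(axis=0)).mean()
    d_std = np.abs(real.std(axis=0) - synth.std(axis=0)).mean()
    return float(d_mean + d_std)
```

Published comparisons of tabular GANs usually go further (per-column distribution distances, pairwise correlations, downstream-model accuracy), but a moment-matching gap like this is a common first sanity check.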
1 code implementation • 19 Aug 2020 • Ece Çiğdem Mutlu, Toktam A. Oghaz, Jasser Jasser, Ege Tütüncüler, Amirarsalan Rajabi, Aida Tayebi, Ozlem Ozmen, Ivan Garibay
We expect this dataset to be useful for many research purposes, including stance detection, the evolution and dynamics of opinions regarding this outbreak, and changes in opinions in response to exogenous shocks such as policy decisions and events.