1 code implementation • 26 Jan 2025 • Ali Khodabandeh Yalabadi, Mehdi Yazdani-Jahromi, Ozlem Ozmen Garibay
To address alignment challenges, we introduce a method that relocates the center of mass of generated ligands to their docking poses, enabling accurate sub-component extraction.
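As a rough illustration of the center-of-mass relocation described above, the sketch below translates a generated ligand's coordinates so that its center of mass coincides with that of its docking pose; the function names, the NumPy setup, and the optional atomic masses are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def center_of_mass(coords, masses=None):
    """Mass-weighted center of an (N, 3) coordinate array; geometric center if no masses."""
    if masses is None:
        return coords.mean(axis=0)
    return (coords * masses[:, None]).sum(axis=0) / masses.sum()

def relocate_to_pose(ligand_coords, pose_coords, masses=None):
    """Translate a generated ligand so its center of mass matches the docking pose
    (illustrative sketch only, not the paper's code)."""
    shift = center_of_mass(pose_coords, masses) - center_of_mass(ligand_coords, masses)
    return ligand_coords + shift

# Toy example: align a 3-atom ligand to a pose shifted by a known offset
ligand = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
pose = ligand + np.array([5.0, -2.0, 3.0])
aligned = relocate_to_pose(ligand, pose)  # aligned center of mass equals pose's
```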
2 code implementations • 21 Oct 2024 • Mehdi Yazdani-Jahromi, Ali Khodabandeh Yalabadi, Amirarsalan Rajabi, Aida Tayebi, Ivan Garibay, Ozlem Ozmen Garibay
The persistent challenge of bias in machine learning models necessitates robust solutions to ensure parity and equal treatment across diverse groups, particularly in classification tasks.
no code implementations • 16 Oct 2024 • Mehdi Yazdani-Jahromi, Mangal Prakash, Tommaso Mansi, Artem Moskalev, Rui Liao
Messenger RNA (mRNA) plays a crucial role in protein synthesis, with its codon structure directly impacting biological properties.
1 code implementation • 4 Nov 2023 • Ali Khodabandeh Yalabadi, Mehdi Yazdani-Jahromi, Niloofar Yousefi, Aida Tayebi, Sina Abdidizaji, Ozlem Ozmen Garibay
Drug-Target Interaction (DTI) prediction is vital for drug discovery, yet challenges persist in achieving model interpretability and optimizing performance.
no code implementations • 18 Sep 2022 • Amirarsalan Rajabi, Mehdi Yazdani-Jahromi, Ozlem Ozmen Garibay, Gita Sukthankar
In this study, we present a fast and effective model that de-biases an image dataset through reconstruction while minimizing the statistical dependence between the intended variables.
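The sketch below illustrates one generic way to pair a reconstruction objective with a statistical-dependence penalty, using a biased HSIC estimate as the dependence measure; the kernel choice, the `debiasing_loss` helper, and the weight `lam` are assumptions for illustration and not the paper's specific formulation.

```python
import torch
import torch.nn.functional as F

def rbf_kernel(x, sigma=1.0):
    # Pairwise RBF kernel matrix over a batch of vectors shaped (n, d)
    d2 = torch.cdist(x, x) ** 2
    return torch.exp(-d2 / (2 * sigma ** 2))

def hsic(x, y, sigma=1.0):
    """Biased HSIC estimate of the statistical dependence between x and y."""
    n = x.shape[0]
    K, L = rbf_kernel(x, sigma), rbf_kernel(y, sigma)
    H = torch.eye(n, device=x.device) - torch.ones(n, n, device=x.device) / n
    return torch.trace(K @ H @ L @ H) / (n - 1) ** 2

def debiasing_loss(recon, target, latent, protected, lam=1.0):
    # Reconstruction error plus a penalty on dependence between the latent
    # representation and the protected attribute; `protected` shaped (n, 1).
    return F.mse_loss(recon, target) + lam * hsic(latent, protected)
```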
1 code implementation • Briefings in Bioinformatics 2022 • Mehdi Yazdani-Jahromi, Niloofar Yousefi, Aida Tayebi, Elayaraja Kolanthai, Craig J Neal, Sudipta Seal, Ozlem Ozmen Garibay
In this study, we introduce an interpretable graph-based deep learning prediction model, AttentionSiteDTI, which utilizes protein binding sites along with a self-attention mechanism to address the problem of drug–target interaction prediction.
Ranked #1 on Drug Discovery on BindingDB
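As a loose illustration of attending over protein binding sites, the sketch below applies self-attention across a drug embedding and a set of binding-site embeddings before a binary interaction head; the module name, dimensions, and mean pooling are assumptions and do not reproduce the AttentionSiteDTI architecture.

```python
import torch
import torch.nn as nn

class SiteAttentionSketch(nn.Module):
    """Self-attention over drug and binding-site embeddings (illustrative only)."""

    def __init__(self, dim=128, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classifier = nn.Sequential(nn.Linear(dim, 1), nn.Sigmoid())

    def forward(self, drug_emb, site_embs):
        # drug_emb: (batch, 1, dim) graph-level drug embedding
        # site_embs: (batch, n_sites, dim) candidate binding-site embeddings
        tokens = torch.cat([drug_emb, site_embs], dim=1)
        attended, _ = self.attn(tokens, tokens, tokens)  # self-attention across tokens
        pooled = attended.mean(dim=1)                    # simple mean pooling
        return self.classifier(pooled)                   # interaction probability

# Toy usage with random precomputed embeddings
model = SiteAttentionSketch()
prob = model(torch.randn(2, 1, 128), torch.randn(2, 6, 128))  # shape (2, 1)
```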
1 code implementation • 15 Mar 2022 • Mehdi Yazdani-Jahromi, Amirarsalan Rajabi, Ali Khodabandeh Yalabadi, Aida Tayebi, Ozlem Ozmen Garibay
There is abundant evidence suggesting that these models can contain or even amplify the bias present in the data on which they are trained, a consequence of their objective functions and learning algorithms. Researchers have addressed this issue from several directions, for example by transforming the data to be statistically independent of sensitive attributes, or by adversarial training that restricts the capabilities of a competing predictor so as to maximize parity, among other approaches.