Search Results for author: Niloofar Yousefi

Found 13 papers, 6 papers with code

RoCoFT: Efficient Finetuning of Large Language Models with Row-Column Updates

1 code implementation · 14 Oct 2024 · Md Kowsher, Tara Esmaeilbeig, Chun-Nam Yu, Mojtaba Soltanalian, Niloofar Yousefi

We propose RoCoFT, a parameter-efficient fine-tuning method for large-scale language models (LMs) based on updating only a few rows and columns of the weight matrices in transformers.

parameter-efficient fine-tuning
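The row-and-column update idea can be illustrated with a small NumPy sketch. This is a hypothetical illustration only, not the authors' implementation: a gradient step is applied to a chosen subset of rows and columns of a weight matrix, and every other entry stays frozen.

```python
import numpy as np

def row_column_update(W, grad, rows, cols, lr=0.01):
    """Apply a gradient step only to selected rows and columns of W.

    Hypothetical sketch of the row-column update idea: entries outside
    the chosen rows and columns remain frozen. Entries at the
    intersection of a selected row and column receive both updates.
    """
    W_new = W.copy()
    W_new[rows, :] -= lr * grad[rows, :]   # update selected rows
    W_new[:, cols] -= lr * grad[:, cols]   # update selected columns
    return W_new

# Toy example: a 4x4 weight matrix, updating row 0 and column 2.
W = np.ones((4, 4))
grad = np.ones((4, 4))
W_upd = row_column_update(W, grad, rows=[0], cols=[2])
# Entries outside row 0 and column 2 are unchanged.
```

The appeal of such schemes is that the number of trainable parameters scales with the number of selected rows and columns rather than with the full weight matrix.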

FragXsiteDTI: Revealing Responsible Segments in Drug-Target Interaction with Transformer-Driven Interpretation

1 code implementation · 4 Nov 2023 · Ali Khodabandeh Yalabadi, Mehdi Yazdani-Jahromi, Niloofar Yousefi, Aida Tayebi, Sina Abdidizaji, Ozlem Ozmen Garibay

Drug-Target Interaction (DTI) prediction is vital for drug discovery, yet challenges persist in achieving model interpretability and optimizing performance.

Benchmarking · Drug Discovery

AttentionSiteDTI: an interpretable graph-based model for drug-target interaction prediction using NLP sentence-level relation classification

1 code implementation · Briefings in Bioinformatics 2022 · Mehdi Yazdani-Jahromi, Niloofar Yousefi, Aida Tayebi, Elayaraja Kolanthai, Craig J Neal, Sudipta Seal, Ozlem Ozmen Garibay

In this study, we introduce an interpretable graph-based deep learning prediction model, AttentionSiteDTI, which utilizes protein binding sites along with a self-attention mechanism to address the problem of drug–target interaction prediction.

Drug Discovery · Relation Classification +2
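The self-attention mechanism mentioned in the abstract is, in its standard form, scaled dot-product attention over a set of embeddings. A minimal NumPy sketch follows; it is illustrative only (the projection matrices and the framing of the rows as binding-site embeddings are assumptions, not the AttentionSiteDTI implementation):

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a set of embeddings X.

    Illustrative sketch: Wq, Wk, Wv are hypothetical projection
    matrices; rows of X could hold, e.g., protein binding-site
    embeddings, so each output row mixes information from all sites.
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])           # pairwise similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ V                                # weighted combination

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))                 # 5 "sites", 8-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)
# out has the same shape as X: (5, 8)
```

Interpretability in attention-based models typically comes from inspecting the softmax weights, which indicate how strongly each element attends to the others.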

A Comprehensive Survey on Machine Learning Techniques and User Authentication Approaches for Credit Card Fraud Detection

no code implementations · 2 Dec 2019 · Niloofar Yousefi, Marie Alaghband, Ivan Garibay

Hence, the challenge of developing fraud detection techniques that are accurate and efficient has substantially intensified and, as a consequence, credit card fraud detection has lately become a very active area of research.

BIG-bench Machine Learning · Fraud Detection +1

DeepFork: Supervised Prediction of Information Diffusion in GitHub

1 code implementation · 17 Oct 2019 · Ramya Akula, Niloofar Yousefi, Ivan Garibay

To understand human influence, information spread, and the evolution of transmitted information among assorted users in GitHub, we developed a deep neural network model, DeepFork: a supervised machine learning approach that aims to predict information diffusion in complex social networks, considering node as well as topological features.

BIG-bench Machine Learning · Link Prediction

Appointment scheduling model in healthcare using clustering algorithms

no code implementations · 3 May 2019 · Niloofar Yousefi, Farhad Hasankhani, Mahsa Kiani, Nooshin Yousefi

Determining the priority of outpatients and allocating the capacity based on the priority classes are important concepts that have to be considered in the scheduling of outpatients.

Clustering · Scheduling

Multi-Task Learning Using Neighborhood Kernels

no code implementations · 11 Jul 2017 · Niloofar Yousefi, Cong Li, Mansooreh Mollaghasemi, Georgios Anagnostopoulos, Michael Georgiopoulos

As shown by our empirical results, our algorithm consistently outperforms traditional kernel learning algorithms such as the uniform combination solution and convex combinations of base kernels, as well as some kernel alignment-based models, which have been proven to give promising results in the past.

Multi-Task Learning

Local Rademacher Complexity-based Learning Guarantees for Multi-Task Learning

no code implementations · 18 Feb 2016 · Niloofar Yousefi, Yunwen Lei, Marius Kloft, Mansooreh Mollaghasemi, Georgios Anagnostopoulos

We show a Talagrand-type concentration inequality for Multi-Task Learning (MTL), using which we establish sharp excess risk bounds for MTL in terms of distribution- and data-dependent versions of the Local Rademacher Complexity (LRC).

Multi-Task Learning
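For context, the local Rademacher complexity that such excess risk bounds are stated in terms of is standardly defined, for a function class $\mathcal{F}$ on a sample of size $n$ and a radius $r > 0$, as follows. This is the general textbook definition, not the paper's specific multi-task variant:

```latex
\mathcal{R}(\mathcal{F}; r)
  = \mathbb{E}\!\left[\,
      \sup_{f \in \mathcal{F}:\; \mathbb{E}[f^2] \le r}
      \frac{1}{n} \sum_{i=1}^{n} \sigma_i \, f(x_i)
    \right],
```

where the $\sigma_i$ are i.i.d. Rademacher variables taking values in $\{-1, +1\}$ with equal probability. Restricting the supremum to the ball $\mathbb{E}[f^2] \le r$ is what makes the complexity "local" and is the source of the sharper (often fast-rate) bounds compared to the global Rademacher complexity.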

Multi-Task Learning with Group-Specific Feature Space Sharing

1 code implementation · 13 Aug 2015 · Niloofar Yousefi, Michael Georgiopoulos, Georgios C. Anagnostopoulos

When faced with learning a set of inter-related tasks from a limited amount of usable data, learning each task independently may lead to poor generalization performance.

Binary Classification · Multi-Task Learning
