1 code implementation • 15 Oct 2024 • Md Kowsher, Md. Shohanur Islam Sobuj, Nusrat Jahan Prottasha, E. Alejandro Alanis, Ozlem Ozmen Garibay, Niloofar Yousefi
Time series forecasting remains a challenging task, particularly in the context of complex multiscale temporal patterns.
1 code implementation • 14 Oct 2024 • Md Kowsher, Tara Esmaeilbeig, Chun-Nam Yu, Mojtaba Soltanalian, Niloofar Yousefi
We propose RoCoFT, a parameter-efficient fine-tuning method for large-scale language models (LMs) based on updating only a few rows and columns of the weight matrices in transformers.
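The row-and-column update idea can be sketched in a few lines: only a handful of selected rows and columns of a weight matrix receive gradient updates, while every other entry stays frozen. The function name, the mask construction, and the toy gradient below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def row_col_step(W, grad, rows, cols, lr=0.1):
    """Apply a gradient step only to the chosen rows and columns of W.

    Everything outside the selected rows/columns is left frozen,
    in the spirit of the row-and-column fine-tuning described above.
    """
    mask = np.zeros_like(W, dtype=bool)
    mask[rows, :] = True   # trainable rows
    mask[:, cols] = True   # trainable columns
    return W - lr * grad * mask

rng = np.random.default_rng(0)
W = rng.standard_normal((6, 6))
G = np.ones_like(W)        # toy gradient for illustration
W_new = row_col_step(W, G, rows=[0], cols=[3])

# Entries outside the selected row and column are untouched.
assert np.allclose(W_new[1, 1], W[1, 1])
```

Because the update touches only `rows + cols` slices rather than the full matrix, the number of trainable parameters scales with the number of selected rows and columns instead of the matrix size.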
no code implementations • 11 Oct 2024 • Nusrat Jahan Prottasha, Asif Mahmud, Md. Shohanur Islam Sobuj, Prakash Bhat, Md Kowsher, Niloofar Yousefi, Ozlem Ozmen Garibay
This method involves using a fixed LLM to understand and process the semantic content of the prompt through zero-shot capabilities.
1 code implementation • 4 Nov 2023 • Ali Khodabandeh Yalabadi, Mehdi Yazdani-Jahromi, Niloofar Yousefi, Aida Tayebi, Sina Abdidizaji, Ozlem Ozmen Garibay
Drug-Target Interaction (DTI) prediction is vital for drug discovery, yet challenges persist in achieving model interpretability and optimizing performance.
1 code implementation • Briefings in Bioinformatics 2022 • Mehdi Yazdani-Jahromi, Niloofar Yousefi, Aida Tayebi, Elayaraja Kolanthai, Craig J Neal, Sudipta Seal, Ozlem Ozmen Garibay
In this study, we introduce an interpretable graph-based deep learning prediction model, AttentionSiteDTI, which utilizes protein binding sites along with a self-attention mechanism to address the problem of drug–target interaction prediction.
Ranked #1 on Drug Discovery on BindingDB
no code implementations • 14 Apr 2020 • Toktam A. Oghaz, Ece C. Mutlu, Jasser Jasser, Niloofar Yousefi, Ivan Garibay
Our topic model identifies the recurrence of topics over time at varying time resolutions.
no code implementations • 3 Mar 2020 • Marie Alaghband, Niloofar Yousefi, Ivan Garibay
Facial expressions are important parts of both gesture and sign language recognition systems.
no code implementations • 2 Dec 2019 • Niloofar Yousefi, Marie Alaghband, Ivan Garibay
Hence, the challenge of developing fraud detection techniques that are both accurate and efficient has intensified substantially and, as a consequence, credit card fraud detection has lately become a very active area of research.
1 code implementation • 17 Oct 2019 • Ramya Akula, Niloofar Yousefi, Ivan Garibay
To understand human influence, information spread, and the evolution of transmitted information among assorted users on GitHub, we developed DeepFork, a deep neural network model: a supervised machine learning approach that predicts information diffusion in complex social networks, taking into account node-level as well as topological features.
no code implementations • 3 May 2019 • Niloofar Yousefi, Farhad Hasankhani, Mahsa Kiani, Nooshin Yousefi
Determining the priority of outpatients and allocating capacity based on priority classes are important considerations in outpatient scheduling.
no code implementations • 11 Jul 2017 • Niloofar Yousefi, Cong Li, Mansooreh Mollaghasemi, Georgios Anagnostopoulos, Michael Georgiopoulos
As shown by our empirical results, our algorithm consistently outperforms traditional kernel learning algorithms such as the uniform combination solution, convex combinations of base kernels, and kernel alignment-based models, which have been shown to give promising results in the past.
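The convex-combination baseline mentioned above can be sketched as follows: several base kernels are mixed with nonnegative weights that sum to one, which keeps the combined kernel symmetric and positive semidefinite. The RBF bandwidths and weights below are toy choices for illustration, not values from the paper.

```python
import numpy as np

def rbf_kernel(X, gamma):
    """Gram matrix of the RBF kernel k(x, y) = exp(-gamma * ||x - y||^2)."""
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

def combined_kernel(X, gammas, weights):
    """Convex combination of base RBF kernels (weights normalized to sum to 1)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * rbf_kernel(X, g) for wi, g in zip(w, gammas))

X = np.random.default_rng(1).standard_normal((5, 3))
K = combined_kernel(X, gammas=[0.1, 1.0, 10.0], weights=[1, 1, 1])

# A convex combination of valid kernels is itself a valid kernel:
# symmetric, PSD, and (for RBF bases) with unit diagonal.
assert np.allclose(np.diag(K), 1.0)
```

In the uniform-combination baseline the weights are fixed and equal, as here; the kernel learning algorithms compared in the paper instead optimize the weights against the data.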
no code implementations • 18 Feb 2016 • Niloofar Yousefi, Yunwen Lei, Marius Kloft, Mansooreh Mollaghasemi, Georgios Anagnostopoulos
We show a Talagrand-type concentration inequality for Multi-Task Learning (MTL), using which we establish sharp excess risk bounds for MTL in terms of distribution- and data-dependent versions of the Local Rademacher Complexity (LRC).
1 code implementation • 13 Aug 2015 • Niloofar Yousefi, Michael Georgiopoulos, Georgios C. Anagnostopoulos
When faced with learning a set of inter-related tasks from a limited amount of usable data, learning each task independently may lead to poor generalization performance.