no code implementations • 25 Feb 2025 • Md Kowsher, Nusrat Jahan Prottasha, Prakash Bhat, Chun-Nam Yu, Mojtaba Soltanalian, Ivan Garibay, Ozlem Garibay, Chen Chen, Niloofar Yousefi
This paper argues that generating output tokens is more effective than using pooled representations for prediction tasks because token-level generation retains more mutual information.
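To make the contrast concrete, here is a minimal sketch (not the authors' implementation; all names and dimensions are illustrative) of a pooled classification head versus a token-level generative head over the same hidden states:

```python
import torch
import torch.nn as nn

hidden, vocab, n_classes, seq_len = 768, 32000, 4, 16
states = torch.randn(1, seq_len, hidden)  # hidden states from some backbone

# Pooled prediction: mean-pool all token states into one vector, then classify.
# The pooling step compresses away token-level detail (the mutual-information
# argument in the paper).
pool_head = nn.Linear(hidden, n_classes)
pooled_logits = pool_head(states.mean(dim=1))       # (1, n_classes)

# Token-level generation: score every position against the full vocabulary and
# read the prediction off generated label tokens instead of a pooled vector.
lm_head = nn.Linear(hidden, vocab)
token_logits = lm_head(states)                      # (1, seq_len, vocab)
next_token = token_logits[:, -1].argmax(dim=-1)     # greedy label token
```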
no code implementations • 16 Feb 2025 • Shahriar Kabir Nahin, Rabindra Nath Nandi, Sagor Sarker, Quazi Sarwar Muhtaseem, Md Kowsher, Apu Chandraw Shill, Md Ibrahim, Mehadi Hasan Menon, Tareq Al Muntasir, Firoj Alam
Benchmark datasets for evaluating LLMs on Bangla have been lacking.
no code implementations • 15 Feb 2025 • Nusrat Jahan Prottasha, Md Kowsher, Hafijur Raman, Israt Jahan Anny, Prakash Bhat, Ivan Garibay, Ozlem Garibay
In this paper, we present two high-quality open-source user profile datasets: one for profile construction and another for profile updating.
no code implementations • 30 Nov 2024 • Md Kowsher, Nusrat Jahan Prottasha, Chun-Nam Yu
The success of self-attention lies in its ability to capture long-range dependencies and enrich contextual understanding, but it is limited by its quadratic computational complexity and by difficulty handling sequential data with inherent directionality.
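The quadratic cost and the lack of built-in directionality both come from the all-pairs score matrix of standard scaled dot-product attention, sketched below (this is the baseline mechanism being critiqued, not the paper's proposed alternative):

```python
import math
import torch

n, d = 512, 64                      # sequence length, head dimension
q = torch.randn(n, d)
k = torch.randn(n, d)
v = torch.randn(n, d)

# The (n x n) score matrix is what makes self-attention O(n^2) in sequence
# length; it is also order-agnostic, so directionality requires extra
# machinery such as causal masks or positional encodings.
scores = q @ k.T / math.sqrt(d)     # (n, n) pairwise scores
out = torch.softmax(scores, dim=-1) @ v
```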
1 code implementation • 15 Oct 2024 • Md Kowsher, Md. Shohanur Islam Sobuj, Nusrat Jahan Prottasha, E. Alejandro Alanis, Ozlem Ozmen Garibay, Niloofar Yousefi
Time series forecasting remains a challenging task, particularly in the context of complex multiscale temporal patterns.
1 code implementation • 14 Oct 2024 • Md Kowsher, Tara Esmaeilbeig, Chun-Nam Yu, Mojtaba Soltanalian, Niloofar Yousefi
We propose RoCoFT, a parameter-efficient fine-tuning method for large-scale language models (LMs) based on updating only a few rows and columns of the weight matrices in transformers.
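A minimal sketch of the row-and-column idea, assuming PyTorch and a gradient mask; the row/column choices below are illustrative placeholders, and RoCoFT's actual selection and training procedure is described in the paper:

```python
import torch
import torch.nn as nn

layer = nn.Linear(768, 768)          # one weight matrix from a transformer block

# Only a few rows and columns of the weight matrix stay trainable; every other
# entry is frozen by zeroing its gradient after backward().
rows, cols = [0, 1, 2], [10, 11]     # illustrative choices, not the paper's
mask = torch.zeros_like(layer.weight)
mask[rows, :] = 1.0
mask[:, cols] = 1.0

def keep_rows_cols(grad: torch.Tensor) -> torch.Tensor:
    return grad * mask               # gradient survives only on chosen rows/cols

layer.weight.register_hook(keep_rows_cols)
```

Masking gradients this way leaves the forward pass untouched while shrinking the effective number of trainable parameters to the selected rows and columns.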
no code implementations • 11 Oct 2024 • Nusrat Jahan Prottasha, Asif Mahmud, Md. Shohanur Islam Sobuj, Prakash Bhat, Md Kowsher, Niloofar Yousefi, Ozlem Ozmen Garibay
The method uses a frozen LLM to interpret the semantic content of the prompt through its zero-shot capabilities.
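A hedged sketch of the frozen-LLM step, assuming the Hugging Face transformers stack; the backbone name and the mean-pooling are placeholders rather than the paper's exact setup:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Frozen LLM: no parameter updates; it only encodes the prompt's semantics.
name = "bert-base-uncased"           # placeholder backbone, not the paper's
tok = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)
model.eval()
for p in model.parameters():
    p.requires_grad_(False)

prompt = "Classify the sentiment of the following review:"
with torch.no_grad():
    inputs = tok(prompt, return_tensors="pt")
    semantics = model(**inputs).last_hidden_state.mean(dim=1)  # prompt embedding
```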
1 code implementation • 17 Sep 2024 • Md Kowsher, Nusrat Jahan Prottasha, Prakash Bhat
The rapid advancements in Large Language Models (LLMs) have revolutionized natural language processing (NLP) and related fields.
no code implementations • 14 Feb 2024 • Md Kowsher, Abdul Rafae Khan, Jia Xu
We introduce Group Reservoir Transformer to predict long-term events more accurately and robustly by overcoming two challenges in chaotic systems: (1) extensive historical sequences and (2) sensitivity to initial conditions.
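A minimal sketch of the reservoir part in echo-state form; averaging a group of differently seeded reservoirs is one plausible way to blunt sensitivity to initial conditions, as the entry suggests. All names and sizes below are illustrative, not the paper's architecture:

```python
import torch

def reservoir_states(x: torch.Tensor, size: int = 128,
                     leak: float = 0.5, seed: int = 0) -> torch.Tensor:
    # Echo-state reservoir: fixed random weights, never trained.
    g = torch.Generator().manual_seed(seed)
    w_in = torch.randn(x.shape[-1], size, generator=g) * 0.1
    w_rec = torch.randn(size, size, generator=g) * 0.1
    h = torch.zeros(size)
    states = []
    for t in range(x.shape[0]):
        h = (1 - leak) * h + leak * torch.tanh(x[t] @ w_in + h @ w_rec)
        states.append(h)
    return torch.stack(states)

x = torch.randn(100, 8)              # a window of a chaotic series, 8 features
# A "group" of differently seeded reservoirs averages out any single
# reservoir's sensitivity to its random initialization.
group = torch.stack([reservoir_states(x, seed=s) for s in range(4)]).mean(0)
```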
no code implementations • 4 Nov 2022 • Nusrat Jahan Prottasha, Saydul Akbar Murad, Abu Jafar Md Muzahid, Masud Rana, Md Kowsher, Apurba Adhikary, Sujit Biswas, Anupam Kumar Bairagi
The algorithm is notable for learning from competitive situations, where the competition arises from the effects of autonomous features.