1 code implementation • 12 Feb 2024 • David Selby, Kai Spriestersbach, Yuichiro Iwashita, Dennis Bappert, Archana Warrier, Sumantrak Mukherjee, Muhammad Nabeel Asim, Koichi Kise, Sebastian Vollmer
Large language models (LLMs) have been extensively studied for their ability to generate convincing natural language sequences; however, their utility for quantitative information retrieval is less well understood.
no code implementations • 29 Oct 2023 • Pervaiz Iqbal Khan, Muhammad Nabeel Asim, Andreas Dengel, Sheraz Ahmed
Motivated by the need for a language model capable of extracting useful patterns from social media text, the key goal of this paper is to train language models in such a way that they learn to derive generalized patterns.
no code implementations • 11 Mar 2020 • Faiza Memood, Muhammad Usman Ghani, Muhammad Ali Ibrahim, Rehab Shehzadi, Muhammad Nabeel Asim
In order to accelerate the performance of various Natural Language Processing tasks for Roman Urdu, this paper provides, for the first time, three neural word embeddings prepared using the most widely used approaches, namely Word2vec, FastText, and GloVe.
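As a hedged illustration (not the authors' actual pipeline), embeddings in the Word2vec/GloVe family are derived from word co-occurrence statistics over a corpus. The minimal pure-Python sketch below builds count-based word vectors from windowed co-occurrences, which is the raw input that GloVe-style methods then factorize into dense embeddings; the corpus and window size are illustrative assumptions.

```python
from collections import Counter, defaultdict

def cooccurrence_embeddings(sentences, window=2):
    """Build simple count-based word vectors: each word is represented by
    its co-occurrence counts with every vocabulary word. These are the
    GloVe-style input statistics, before any factorization into dense vectors."""
    vocab = sorted({w for s in sentences for w in s})
    index = {w: i for i, w in enumerate(vocab)}
    counts = defaultdict(Counter)
    for sent in sentences:
        for i, w in enumerate(sent):
            # Count neighbors within +/- `window` positions of w
            for j in range(max(0, i - window), min(len(sent), i + window + 1)):
                if i != j:
                    counts[w][sent[j]] += 1
    # One vector per word, one dimension per vocabulary entry
    vectors = {w: [counts[w][v] for v in vocab] for w in vocab}
    return vectors, index

# Toy corpus (hypothetical, for illustration only)
sentences = [["roman", "urdu", "text"], ["urdu", "text", "classification"]]
vectors, index = cooccurrence_embeddings(sentences)
```

Real Word2vec and FastText training would instead learn low-dimensional dense vectors (e.g. via skip-gram with negative sampling), but the co-occurrence window above is the shared starting point.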
no code implementations • 3 Mar 2020 • Muhammad Nabeel Asim, Muhammad Usman Ghani, Muhammad Ali Ibrahim, Sheraz Ahmad, Waqar Mahmood, Andreas Dengel
Second, it investigates the performance impact on traditional machine-learning-based Urdu text document classification methodologies of embedding 10 filter-based feature selection algorithms that have been widely used for other languages.
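To make the idea of filter-based feature selection concrete (a generic sketch, not the paper's specific 10 algorithms), one of the most common filter criteria for text classification is the chi-squared statistic, which scores each term by how strongly its presence is associated with a class label:

```python
def chi_squared_score(n11, n10, n01, n00):
    """Chi-squared statistic for a 2x2 term/class contingency table:
    n11 = docs in class containing the term, n10 = docs in class without it,
    n01 = docs outside class containing it,  n00 = docs outside class without it.
    Higher scores indicate stronger term/class association."""
    n = n11 + n10 + n01 + n00
    num = n * (n11 * n00 - n10 * n01) ** 2
    den = (n11 + n01) * (n10 + n00) * (n11 + n10) * (n01 + n00)
    return num / den if den else 0.0

def select_top_k(term_tables, k):
    """Rank terms by chi-squared score and keep the k highest-scoring ones.
    `term_tables` maps term -> (n11, n10, n01, n00)."""
    scored = sorted(term_tables,
                    key=lambda t: chi_squared_score(*term_tables[t]),
                    reverse=True)
    return scored[:k]
```

A filter method like this scores features independently of any classifier, which is what distinguishes it from wrapper or embedded selection approaches.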
1 code implementation • 12 Sep 2019 • Muhammad Nabeel Asim, Muhammad Usman Ghani Khan, Muhammad Imran Malik, Andreas Dengel, Sheraz Ahmed
Evaluation results reveal that the proposed methodology outperforms state-of-the-art (traditional) machine learning and deep learning text document classification methodologies by a significant margin of 7.7% on the 20 Newsgroups dataset and 6.6% on the BBC News dataset.