Search Results for author: Md Montasir Bin Shams

Found 2 papers, 0 papers with code

Intriguing Differences Between Zero-Shot and Systematic Evaluations of Vision-Language Transformer Models

no code implementations · 13 Feb 2024 · Shaeke Salman, Md Montasir Bin Shams, Xiuwen Liu, Lingjiong Zhu

Transformer-based models have dominated natural language processing and other areas in the last few years due to their superior (zero-shot) performance on benchmark datasets.

Language Modelling · Zero-Shot Learning
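
This entry concerns zero-shot evaluation of vision-language transformers. Since no code implementation is listed, the following is only a minimal illustrative sketch of the standard zero-shot classification setup using an off-the-shelf CLIP checkpoint from the Hugging Face transformers library; the checkpoint name, image URL, and candidate labels are assumptions of the sketch, not the authors' experimental setup.

```python
# Illustrative zero-shot classification with a vision-language transformer (CLIP).
# Not the paper's code; checkpoint, image URL, and labels are assumptions for this sketch.
from PIL import Image
import requests
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Any image works; this COCO sample URL is just a convenient example.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Zero-shot: candidate classes are given as text prompts, with no task-specific training.
labels = ["a photo of a cat", "a photo of a dog", "a photo of a car"]
inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=-1)  # image-text similarity -> probabilities
for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.3f}")
```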

Intriguing Equivalence Structures of the Embedding Space of Vision Transformers

no code implementations · 28 Jan 2024 · Shaeke Salman, Md Montasir Bin Shams, Xiuwen Liu

Pre-trained large foundation models play a central role in the recent surge of artificial intelligence, resulting in fine-tuned models with remarkable abilities when measured on benchmark datasets, standard exams, and applications.
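
This entry studies equivalence structures in the embedding space of vision transformers. Again no code implementation is listed, so the sketch below only illustrates how such embeddings are commonly obtained with a pretrained ViT from the Hugging Face transformers library and compared by cosine similarity; the checkpoint, the [CLS]-token pooling, and the sample URLs are assumptions of the sketch, not the authors' procedure.

```python
# Illustrative extraction of ViT embeddings and a cosine-similarity comparison.
# Not the paper's code; checkpoint and pooling choice are assumptions for this sketch.
from PIL import Image
import requests
import torch
import torch.nn.functional as F
from transformers import ViTImageProcessor, ViTModel

processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
model = ViTModel.from_pretrained("google/vit-base-patch16-224-in21k")

def embed(url: str) -> torch.Tensor:
    """Return the [CLS]-token embedding of the image at `url`."""
    image = Image.open(requests.get(url, stream=True).raw).convert("RGB")
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    return outputs.last_hidden_state[:, 0]  # [CLS] token used as the image embedding

# Two arbitrary sample images; any URLs would do.
e1 = embed("http://images.cocodataset.org/val2017/000000039769.jpg")
e2 = embed("http://images.cocodataset.org/val2017/000000000139.jpg")
print("cosine similarity:", F.cosine_similarity(e1, e2).item())
```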
