no code implementations • 13 Feb 2024 • Shaeke Salman, Md Montasir Bin Shams, Xiuwen Liu, Lingjiong Zhu
Over the last few years, Transformer-based models have dominated natural language processing and other areas, owing to their superior (zero-shot) performance on benchmark datasets.
no code implementations • 28 Jan 2024 • Shaeke Salman, Md Montasir Bin Shams, Xiuwen Liu
Pre-trained large foundation models play a central role in the recent surge of artificial intelligence; fine-tuning them yields models with remarkable abilities as measured on benchmark datasets, standard exams, and applications.