15 Jul 2022 • Prerona Tarannum, Firoj Alam, Md. Arid Hasan, Sheak Rashed Haider Noori
In further experiments, our evaluation shows that the transformer models (BERT-m and XLM-RoBERTa-base) outperform SVM and RF for Dutch and English, whereas a different pattern is observed for Spanish.