Experiments on three publicly available datasets demonstrate the effectiveness of the proposed approach compared with state-of-the-art models.
We describe our submission to the CogALex-VI shared task on the identification of multilingual paradigmatic relations, building on XLM-RoBERTa (XLM-R), a robustly optimized multilingual BERT model.
To this end, we propose three distinct models to identify hope speech in English, Tamil, and Malayalam.
The task provides datasets in three languages, Tamil, Malayalam, and Kannada, each code-mixed with English, and participants are asked to implement a separate model for each language.
Our model achieves high classification accuracy on this dataset and outperforms the previous multilingual text classification model, highlighting the language independence of McM.