Search Results for author: Thenmozhi D.

Found 8 papers, 0 papers with code

ssn_diBERTsity@LT-EDI-EACL2021: Hope Speech Detection on multilingual YouTube comments via transformer based approach

no code implementations EACL (LTEDI) 2021 Arunima S, Akshay Ramakrishnan, Avantika Balaji, Thenmozhi D., Senthil Kumar B

In recent times there has been abundant research on classifying abusive and offensive texts, focusing on negative comments, but only minimal research using a positive-reinforcement approach.

Hope Speech Detection

SSN_NLP at SemEval-2020 Task 12: Offense Target Identification in Social Media Using Traditional and Deep Machine Learning Approaches

no code implementations SEMEVAL 2020 Thenmozhi D., Nandhinee P. R., Arunima S., Amlan Sengupta

Offensive language identification (OLI) in user-generated text is the automatic detection of any profanity, insult, obscenity, racism or vulgarity addressed towards an individual or a group.

Hate Speech Detection, Language Identification

SSN_NLP at SemEval-2020 Task 7: Detecting Funniness Level Using Traditional Learning with Sentence Embeddings

no code implementations SEMEVAL 2020 Kayalvizhi S, Thenmozhi D., Aravindan Chandrabose

For subtask 2, the Universal Sentence Encoder classifier achieves the highest accuracy on the development set, while a Multi-Layer Perceptron applied to Universal Sentence Encoder embeddings performs best on the test set.

Sentence Embeddings
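The embeddings-plus-MLP pipeline described in the snippet above can be sketched roughly as follows. This is not the authors' code: tiny synthetic clusters stand in for Universal Sentence Encoder embeddings, and the classifier is a minimal one-hidden-layer MLP trained with plain gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_toy_embeddings(n_per_class=20, dim=8):
    """Two well-separated clusters standing in for USE sentence embeddings."""
    a = rng.normal(loc=-1.0, scale=0.3, size=(n_per_class, dim))
    b = rng.normal(loc=+1.0, scale=0.3, size=(n_per_class, dim))
    X = np.vstack([a, b])
    y = np.array([0] * n_per_class + [1] * n_per_class)
    return X, y

def train_mlp(X, y, hidden=16, lr=0.5, epochs=300):
    """One-hidden-layer MLP with sigmoid output, trained by gradient descent
    on binary cross-entropy loss."""
    n, d = X.shape
    W1 = rng.normal(scale=0.1, size=(d, hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(scale=0.1, size=hidden)
    b2 = 0.0
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)                  # hidden activations
        p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # predicted P(class = 1)
        g = (p - y) / n                           # gradient of BCE wrt logits
        W2 -= lr * (h.T @ g)
        b2 -= lr * g.sum()
        gh = np.outer(g, W2) * (1 - h ** 2)       # backprop through tanh
        W1 -= lr * (X.T @ gh)
        b1 -= lr * gh.sum(axis=0)
    return W1, b1, W2, b2

def predict(params, X):
    W1, b1, W2, b2 = params
    h = np.tanh(X @ W1 + b1)
    return (h @ W2 + b2 > 0).astype(int)

X, y = make_toy_embeddings()
params = train_mlp(X, y)
acc = (predict(params, X) == y).mean()
```

In practice the stand-in vectors would be replaced by real sentence embeddings; the classifier side of the pipeline is unchanged.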

Sarcasm Identification and Detection in Conversion Context using BERT

no code implementations WS 2020 Kalaivani A., Thenmozhi D.

With the immense growth of social media, sarcasm analysis helps to prevent insults, hurt and humour from affecting someone.

Sarcasm Detection

SSN_NLP at SemEval-2019 Task 3: Contextual Emotion Identification from Textual Conversation using Seq2Seq Deep Neural Network

no code implementations SEMEVAL 2019 Senthil Kumar B., Thenmozhi D., Aravindan Chandrabose, Srinethe Sharavanan

We have evaluated our approach on the EmoContext@SemEval2019 dataset and obtained micro-averaged F1 scores of 0.595 and 0.6568 on the pre-evaluation dataset and the final evaluation test set, respectively.
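The micro-averaged F1 reported above pools true positives, false positives and false negatives across all classes before computing a single F1 score. A minimal sketch (the labels and class set below are illustrative, not the EmoContext data):

```python
def micro_f1(y_true, y_pred, classes=None):
    """Micro-averaged F1 over the given classes (default: all observed)."""
    if classes is None:
        classes = set(y_true) | set(y_pred)
    tp = fp = fn = 0
    for c in classes:
        tp += sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp += sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn += sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

y_true = ["happy", "sad", "angry", "others", "happy", "others"]
y_pred = ["happy", "sad", "others", "others", "sad", "happy"]
# Restricting scoring to the emotion classes (EmoContext excludes "others"):
score = micro_f1(y_true, y_pred, classes={"happy", "sad", "angry"})
```

Restricting the class set matters: a model that predicts "others" everywhere scores zero, since it collects no true positives on the emotion classes.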
