no code implementations • COLING 2022 • Dushyant Singh Chauhan, Gopendra Vikram Singh, Aseem Arora, Asif Ekbal, Pushpak Bhattacharyya
We design a multitask framework wherein we first propose a Context Transformer to capture deep contextual relationships among the input utterances.
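The core of a context transformer is self-attention over the sequence of utterance vectors, so that each utterance representation is informed by the others. The toy below is only a sketch of that pattern, not the paper's architecture: the single attention head, random projection matrices, and all dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def self_attention(U, d_k=16):
    """Single-head self-attention over utterance vectors U (n_utt x d).
    Projections are random and purely illustrative."""
    d = U.shape[1]
    Wq = rng.normal(scale=0.1, size=(d, d_k))  # query projection (assumed)
    Wk = rng.normal(scale=0.1, size=(d, d_k))  # key projection (assumed)
    Wv = rng.normal(scale=0.1, size=(d, d_k))  # value projection (assumed)
    Q, K, V = U @ Wq, U @ Wk, U @ Wv
    A = softmax(Q @ K.T / np.sqrt(d_k))  # each utterance attends to all others
    return A @ V, A                      # context-aware representations, weights

U = rng.normal(size=(5, 32))   # 5 utterances, 32-dim features (assumed)
ctx, attn = self_attention(U)
```

Each row of `attn` is a distribution over the conversation, which is what lets the model weigh earlier utterances as context for the current one.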
1 code implementation • 3 Aug 2021 • Dushyant Singh Chauhan, Gopendra Vikram Singh, Navonil Majumder, Amir Zadeh, Asif Ekbal, Pushpak Bhattacharyya, Louis-Philippe Morency, Soujanya Poria
We propose several strong multimodal baselines and show the importance of contextual and multimodal information for humor recognition in conversations.
no code implementations • Asian Chapter of the Association for Computational Linguistics 2020 • Dushyant Singh Chauhan, Dhanush S R, Asif Ekbal, Pushpak Bhattacharyya
The main motivation of iTRM is to learn the relationships between the tasks and thereby capture how they help each other.
no code implementations • ACL 2020 • Dushyant Singh Chauhan, Dhanush S R, Asif Ekbal, Pushpak Bhattacharyya
In this paper, we hypothesize that sarcasm is closely related to sentiment and emotion, and thereby propose a multi-task deep learning framework to solve all these three problems simultaneously in a multi-modal conversational scenario.
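The multi-task idea described here is typically realized as a shared encoder feeding task-specific classifier heads, one each for sarcasm, sentiment, and emotion. The following NumPy toy illustrates that pattern only; the layer sizes, label counts, and the single shared projection are assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class MultiTaskSketch:
    """Shared encoder with three task heads (sarcasm, sentiment, emotion).
    All dimensions and label counts are illustrative assumptions."""
    def __init__(self, d_in=64, d_shared=32,
                 n_sarcasm=2, n_sentiment=3, n_emotion=6):
        self.W_shared = rng.normal(scale=0.1, size=(d_in, d_shared))
        self.heads = {
            "sarcasm":   rng.normal(scale=0.1, size=(d_shared, n_sarcasm)),
            "sentiment": rng.normal(scale=0.1, size=(d_shared, n_sentiment)),
            "emotion":   rng.normal(scale=0.1, size=(d_shared, n_emotion)),
        }

    def forward(self, x):
        h = np.tanh(x @ self.W_shared)  # representation shared by all tasks
        return {task: softmax(h @ W) for task, W in self.heads.items()}

# A batch of 4 fused multimodal utterance vectors (dimension assumed)
x = rng.normal(size=(4, 64))
probs = MultiTaskSketch().forward(x)
```

Because the gradient from every head flows through `W_shared` during training, the shared representation is shaped by all three tasks at once, which is the mechanism by which sentiment and emotion supervision can aid sarcasm detection.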
no code implementations • IJCNLP 2019 • Dushyant Singh Chauhan, Md. Shad Akhtar, Asif Ekbal, Pushpak Bhattacharyya
In this paper, we introduce a recurrent neural network-based approach for multi-modal sentiment and emotion analysis.
no code implementations • NAACL 2019 • Md. Shad Akhtar, Dushyant Singh Chauhan, Deepanway Ghosal, Soujanya Poria, Asif Ekbal, Pushpak Bhattacharyya
In this paper, we present a deep multi-task learning framework that jointly performs sentiment and emotion analysis.