no code implementations • EACL (AdaptNLP) 2021 • Surabhi Kumari, Nikhil Jaiswal, Mayur Patidar, Manasi Patwardhan, Shirish Karande, Puneet Agarwal, Lovekesh Vig
In comparison, in this work, we observe that a simpler filtering approach based on a domain classifier, applied only to the pseudo-training data, can consistently perform better, providing performance gains of 1.40, 1.82 and 0.76 in terms of BLEU score for Medical, Law and IT in one direction, and 1.28, 1.60 and 1.60 in the other direction, in the low-resource scenario over competitive baselines.
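A minimal sketch of this kind of domain-classifier filtering, assuming TF-IDF features and a logistic-regression classifier (the abstract does not specify these choices; names such as `filter_pseudo_data` and `keep_ratio` are illustrative, not the paper's implementation):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

def filter_pseudo_data(in_domain, out_domain, pseudo_pairs, keep_ratio=0.5):
    """Keep the pseudo-parallel pairs whose target side scores highest
    under a binary in-domain vs. out-of-domain classifier."""
    # Train the domain classifier on TF-IDF features of monolingual text.
    vec = TfidfVectorizer(max_features=50000, ngram_range=(1, 2))
    X = vec.fit_transform(in_domain + out_domain)
    y = [1] * len(in_domain) + [0] * len(out_domain)
    clf = LogisticRegression(max_iter=1000).fit(X, y)

    # Score each pseudo pair by the classifier's in-domain probability.
    targets = [tgt for _, tgt in pseudo_pairs]
    scores = clf.predict_proba(vec.transform(targets))[:, 1]

    # Retain only the top-scoring fraction of the pseudo-training data.
    ranked = sorted(zip(scores, pseudo_pairs), key=lambda s: s[0], reverse=True)
    k = int(len(ranked) * keep_ratio)
    return [pair for _, pair in ranked[:k]]
```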
no code implementations • ICON 2021 • Saumajit Saha, Kanika Kalra, Manasi Patwardhan, Shirish Karande
We consider the task of automatically classifying the persuasion strategy employed by an utterance in a dialog.
no code implementations • AACL (WAT) 2020 • Nikhil Jaiswal, Mayur Patidar, Surabhi Kumari, Manasi Patwardhan, Shirish Karande, Puneet Agarwal, Lovekesh Vig
This is then followed by fine-tuning on the domain-specific corpus.
no code implementations • 7 Mar 2024 • Harshit Nigam, Manasi Patwardhan, Lovekesh Vig, Gautam Shroff
To aid with research ideation, we propose `Acceleron', a research accelerator for different phases of the research life cycle, specially designed to aid the ideation process.
no code implementations • 1 Aug 2023 • Aseem Arora, Shabbirhussain Bhaisaheb, Harshit Nigam, Manasi Patwardhan, Lovekesh Vig, Gautam Shroff
Cross-domain and cross-compositional generalization of Text-to-SQL semantic parsing is a challenging task.
no code implementations • 26 Apr 2023 • Krishnam Hasija, Shrishti Pradhan, Manasi Patwardhan, Raveendra Kumar Medicherla, Lovekesh Vig, Ravindra Naik
We further fine-tune UnixCoder, the best-performing model for zero-shot cross-programming language code search, for the Code Cloning task with the SBT IRs of C code-pairs, available in the CodeNet dataset.
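A hedged sketch of scoring a C code pair for clone likelihood with UnixCoder embeddings, assuming the publicly available `microsoft/unixcoder-base` checkpoint with mean-pooled hidden states and cosine similarity; the paper fine-tunes on SBT IRs of code pairs, which this illustration omits:

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Publicly released UnixCoder checkpoint; the paper's fine-tuned
# weights and SBT IR preprocessing are not reproduced here.
tokenizer = AutoTokenizer.from_pretrained("microsoft/unixcoder-base")
model = AutoModel.from_pretrained("microsoft/unixcoder-base")
model.eval()

def embed(code: str) -> torch.Tensor:
    """Mean-pool the last hidden states into a single code embedding."""
    inputs = tokenizer(code, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0)

def clone_score(code_a: str, code_b: str) -> float:
    """Cosine similarity between two snippets' embeddings; higher
    scores suggest a more likely clone pair."""
    a, b = embed(code_a), embed(code_b)
    return torch.nn.functional.cosine_similarity(a, b, dim=0).item()
```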
no code implementations • 16 Mar 2023 • Ankita Sontakke, Kanika Kalra, Manasi Patwardhan, Lovekesh Vig, Raveendra Kumar Medicherla, Ravindra Naik, Shrishti Pradhan
In this paper, we focus on transferring the knowledge acquired by the code-to-pseudocode neural model trained on a high resource PL (C++) using parallel code-pseudocode data.
no code implementations • ACL 2020 • Kanika Kalra, Bhargav Kurma, Silpa Vadakkeeveetil Sreelatha, Manasi Patwardhan, Shirish Karande
This achieves an accuracy of 89.69%, which is an improvement of 4.7%.
no code implementations • WS 2019 • Mayur Patidar, Surabhi Kumari, Manasi Patwardhan, Shirish Karande, Puneet Agarwal, Lovekesh Vig, Gautam Shroff
We observe that the proposed approach provides consistent gains in the performance of BERT for multiple benchmark datasets (e.g., a 1.0% gain on MLDocs and a 1.2% gain on XNLI over translate-train with BERT), while requiring a single model for multiple languages.