Search Results for author: Manasi Patwardhan

Found 9 papers, 0 papers with code

Domain Adaptation for NMT via Filtered Iterative Back-Translation

no code implementations EACL (AdaptNLP) 2021 Surabhi Kumari, Nikhil Jaiswal, Mayur Patidar, Manasi Patwardhan, Shirish Karande, Puneet Agarwal, Lovekesh Vig

In comparison, in this work, we observe that a simpler filtering approach based on a domain classifier, applied only to the pseudo-training data, can consistently perform better, providing performance gains of 1.40, 1.82 and 0.76 in terms of BLEU score for Medical, Law and IT in one direction, and 1.28, 1.60 and 1.60 in the other direction, in a low-resource scenario over competitive baselines.
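In broad strokes, the filtering the excerpt describes amounts to scoring back-translated pseudo-parallel pairs with a domain classifier and keeping only the in-domain ones. A minimal sketch of that idea, assuming a hypothetical `score_in_domain` classifier and a threshold chosen on held-out data (both placeholders, not details from the paper):

```python
# Sketch: filter back-translated pseudo-parallel data with a domain classifier.
# `score_in_domain` and the 0.5 threshold are hypothetical placeholders.
from typing import Callable, List, Tuple

def filter_pseudo_parallel(
    pairs: List[Tuple[str, str]],              # (pseudo source, target) pairs
    score_in_domain: Callable[[str], float],   # sentence -> P(in-domain)
    threshold: float = 0.5,
) -> List[Tuple[str, str]]:
    """Keep only pseudo-training pairs whose target side looks in-domain."""
    return [(src, tgt) for src, tgt in pairs if score_in_domain(tgt) >= threshold]

# The surviving pairs would then be mixed with genuine parallel data when
# fine-tuning the NMT model, as in standard iterative back-translation.
```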

Domain Adaptation Machine Translation +2

Performance of BERT on Persuasion for Good

no code implementations ICON 2021 Saumajit Saha, Kanika Kalra, Manasi Patwardhan, Shirish Karande

We consider the task of automatically classifying the persuasion strategy employed by an utterance in a dialog.
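As a rough illustration of the setup, utterance-level persuasion-strategy classification with a BERT encoder can be sketched with Hugging Face Transformers; the checkpoint and `num_labels=10` below are placeholders for illustration, not the label taxonomy used in the paper:

```python
# Sketch: classify the persuasion strategy of a single utterance with BERT.
# Checkpoint and label count are assumptions; a fine-tuned head is required
# for meaningful predictions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=10  # placeholder strategy count
)

utterance = "Your donation could provide a week of meals for a child in need."
inputs = tokenizer(utterance, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
predicted_strategy_id = int(logits.argmax(dim=-1))
print(predicted_strategy_id)  # index into the (fine-tuned) strategy label set
```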

Acceleron: A Tool to Accelerate Research Ideation

no code implementations7 Mar 2024 Harshit Nigam, Manasi Patwardhan, Lovekesh Vig, Gautam Shroff

To aid with research ideation, we propose 'Acceleron', a research accelerator for different phases of the research life cycle that is specially designed to support the ideation process.

Neuro-symbolic Zero-Shot Code Cloning with Cross-Language Intermediate Representation

no code implementations26 Apr 2023 Krishnam Hasija, Shrishti Pradhan, Manasi Patwardhan, Raveendra Kumar Medicherla, Lovekesh Vig, Ravindra Naik

We further fine-tune UnixCoder, the best-performing model for zero-shot cross-programming language code search, for the Code Cloning task with the SBT IRs of C code-pairs, available in the CodeNet dataset.
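One way to picture zero-shot code cloning with an embedding model such as UnixCoder is to embed two fragments and compare them with cosine similarity. The sketch below uses the public `microsoft/unixcoder-base` checkpoint with simple mean pooling over raw source text; this is a simplification, since the paper works with SBT intermediate representations rather than raw code:

```python
# Sketch: score two code fragments for clone similarity with UniXcoder
# embeddings. Mean pooling and raw-source inputs are simplifying assumptions.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("microsoft/unixcoder-base")
model = AutoModel.from_pretrained("microsoft/unixcoder-base")

def embed(code: str) -> torch.Tensor:
    inputs = tokenizer(code, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0)            # mean-pooled embedding

a = embed("int add(int a, int b) { return a + b; }")
b = embed("def add(a, b):\n    return a + b")
similarity = torch.nn.functional.cosine_similarity(a, b, dim=0)
print(float(similarity))  # higher score -> more likely clones
```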

C++ code Code Search

Knowledge Transfer for Pseudo-code Generation from Low Resource Programming Language

no code implementations16 Mar 2023 Ankita Sontakke, Kanika Kalra, Manasi Patwardhan, Lovekesh Vig, Raveendra Kumar Medicherla, Ravindra Naik, Shrishti Pradhan

In this paper, we focus on transferring the knowledge acquired by the code-to-pseudocode neural model trained on a high resource PL (C++) using parallel code-pseudocode data.

Code Generation Transfer Learning +1

From Monolingual to Multilingual FAQ Assistant using Multilingual Co-training

no code implementations WS 2019 Mayur Patidar, Surabhi Kumari, Manasi Patwardhan, Shirish Karande, Puneet Agarwal, Lovekesh Vig, Gautam Shroff

We observe that the proposed approach provides consistent gains in the performance of BERT for multiple benchmark datasets (e.g. a 1.0% gain on MLDocs, and a 1.2% gain on XNLI over translate-train with BERT), while requiring a single model for multiple languages.

Cross-Lingual Transfer
