no code implementations • WMT (EMNLP) 2021 • Jyotsana Khatri, Rudra Murthy, Pushpak Bhattacharyya
This paper describes our submission for the shared task on Unsupervised MT and Very Low Resource Supervised MT at WMT 2021.
no code implementations • ACL (WAT) 2021 • Jyotsana Khatri, Nikhil Saini, Pushpak Bhattacharyya
Multilingual Neural Machine Translation has achieved remarkable performance by training a single translation model for multiple languages.
1 code implementation • LREC 2022 • Rudra Murthy, Pallab Bhattacharjee, Rahul Sharnagat, Jyotsana Khatri, Diptesh Kanojia, Pushpak Bhattacharyya
We use different language models to perform the sequence labelling task for NER, and demonstrate the efficacy of our data through a comparative evaluation against models trained on another available Hindi NER dataset.
Ranked #1 on Named Entity Recognition (NER) on HiNER-original
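Sequence labelling for NER is typically framed as per-token BIO tagging followed by span decoding. As a minimal, self-contained sketch (not the paper's actual pipeline), the post-processing step that turns BIO tags into entity spans looks like this; the example tokens and tag names are illustrative only.

```python
def bio_to_spans(tokens, tags):
    """Convert parallel token/BIO-tag lists into (entity_type, text) spans."""
    spans, current, ctype = [], [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            # A "B-" tag starts a new entity; flush any span in progress.
            if current:
                spans.append((ctype, " ".join(current)))
            current, ctype = [tok], tag[2:]
        elif tag.startswith("I-") and current and tag[2:] == ctype:
            # Continuation of the current entity.
            current.append(tok)
        else:
            # "O" tag or an inconsistent "I-" tag ends the current entity.
            if current:
                spans.append((ctype, " ".join(current)))
            current, ctype = [], None
    if current:
        spans.append((ctype, " ".join(current)))
    return spans

tokens = ["Narendra", "Modi", "visited", "Mumbai"]
tags = ["B-PER", "I-PER", "O", "B-LOC"]
print(bio_to_spans(tokens, tags))  # [('PER', 'Narendra Modi'), ('LOC', 'Mumbai')]
```

Comparative evaluations like the one described above score these decoded spans (not raw tags) against gold annotations, usually with span-level precision, recall, and F1.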
no code implementations • COLING 2020 • Jyotsana Khatri, Pushpak Bhattacharyya
Our approach gives more weight to good pseudo parallel sentence pairs in the back-translation phase.
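Weighting pseudo-parallel pairs during back-translation can be sketched as scoring each synthetic pair and scaling its contribution to the training loss. The length-ratio heuristic below is a hypothetical stand-in for the paper's actual quality score, chosen only to make the sketch runnable.

```python
def pair_weight(src_len, tgt_len, ratio_limit=2.0):
    """Down-weight pairs whose length ratio suggests a noisy back-translation.

    Illustrative heuristic: pairs within the ratio limit get full weight 1.0;
    beyond it, the weight decays with the ratio.
    """
    ratio = max(src_len, tgt_len) / max(1, min(src_len, tgt_len))
    return 1.0 if ratio <= ratio_limit else ratio_limit / ratio

def weighted_loss(per_sentence_losses, weights):
    """Weighted average of per-sentence losses: good pairs count more."""
    total_w = sum(weights)
    return sum(l * w for l, w in zip(per_sentence_losses, weights)) / total_w

# The second pair's extreme length ratio marks it as a likely bad pair.
weights = [pair_weight(10, 11), pair_weight(10, 40)]
print(weighted_loss([2.0, 5.0], weights))  # 3.0
```

The effect is that sentences judged to be good pseudo-parallel pairs dominate the gradient signal, while suspect pairs still contribute, just less.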
no code implementations • WS 2020 • Nikhil Saini, Jyotsana Khatri, Preethi Jyothi, Pushpak Bhattacharyya
We also make use of additional fluent text in the target language to help generate fluent translations.
no code implementations • WS 2019 • Jyotsana Khatri, Pushpak Bhattacharyya
This paper describes our submission to the Shared Task on Similar Language Translation at the Fourth Conference on Machine Translation (WMT 2019).