1 code implementation • 19 May 2023 • Mayank Mishra, Prince Kumar, Riyaz Bhat, Rudra Murthy V, Danish Contractor, Srikanth Tamilselvam
Prompting with natural language instructions has recently emerged as a popular method of harnessing the capabilities of large language models.
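As a rough illustration of instruction prompting (the model, prompt, and task below are illustrative choices, not the paper's setup), one can query an off-the-shelf instruction-tuned model:

```python
# Minimal sketch of prompting with a natural-language instruction.
# Model choice and prompt are illustrative; the paper's setup may differ.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-small")

instruction = (
    "Classify the sentiment of the following review as positive or negative.\n"
    "Review: The battery lasts all day and the screen is gorgeous.\n"
    "Sentiment:"
)
inputs = tokenizer(instruction, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=5)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```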
no code implementations • 2 Mar 2023 • Tamali Banerjee, Rudra Murthy V, Pushpak Bhattacharyya
We aim to investigate whether UNMT approaches with self-supervised pre-training are robust to word-order divergence between language pairs.
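A minimal sketch of how word-order divergence can be simulated for such a robustness probe; the random-permutation scheme is an assumption here, not the paper's evaluation protocol:

```python
# Toy probe for word-order divergence: permute word order on one side of a
# corpus, then compare how much translation quality degrades.
import random

def permute_word_order(sentence: str, seed: int = 0) -> str:
    """Return the sentence with its tokens randomly reordered."""
    tokens = sentence.split()
    rng = random.Random(seed)
    rng.shuffle(tokens)
    return " ".join(tokens)

corpus = ["the cat sat on the mat", "unsupervised NMT needs no parallel data"]
divergent = [permute_word_order(s, seed=i) for i, s in enumerate(corpus)]
for orig, perm in zip(corpus, divergent):
    print(f"{orig}  ->  {perm}")
```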
no code implementations • 3 Jan 2023 • Rudra Murthy V, Riyaz Bhat, Chulaka Gunasekara, Siva Sankalp Patel, Hui Wan, Tejas Indulal Dhamecha, Danish Contractor, Marina Danilevsky
In this paper, we explore the task of modeling semi-structured object sequences; in particular, we focus on developing a structure-aware input representation for such sequences.
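A toy sketch of one way to linearize a semi-structured object sequence into a structure-aware text input; the [OBJ]/[KEY]/[VAL] markers are hypothetical, not the paper's scheme:

```python
# Hypothetical linearization of a semi-structured object sequence into a
# structure-aware text representation.
from typing import Dict, List

def linearize(objects: List[Dict[str, str]]) -> str:
    parts = []
    for obj in objects:
        fields = " ".join(f"[KEY] {k} [VAL] {v}" for k, v in obj.items())
        parts.append(f"[OBJ] {fields}")
    return " ".join(parts)

events = [
    {"action": "click", "target": "checkout_button"},
    {"action": "view", "target": "payment_page"},
]
print(linearize(events))
# [OBJ] [KEY] action [VAL] click [KEY] target [VAL] checkout_button [OBJ] ...
```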
1 code implementation • 20 Dec 2022 • Arnav Mhaske, Harshit Kedia, Sumanth Doddapaneni, Mitesh M. Khapra, Pratyush Kumar, Rudra Murthy V, Anoop Kunchukuttan
The dataset contains more than 400k sentences annotated with a total of at least 100k entities from three standard entity categories (Person, Location, and Organization) for 9 out of the 11 languages.
1 code implementation • EMNLP 2021 • Tejas Indulal Dhamecha, Rudra Murthy V, Samarth Bharadwaj, Karthik Sankaranarayanan, Pushpak Bhattacharyya
We hypothesize and validate that multilingual fine-tuning of pre-trained language models can yield better performance on downstream NLP applications, compared to models fine-tuned on individual languages.
Tasks: Multiple Choice Question Answering (MCQA), Natural Language Inference, +3
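A minimal sketch of the multilingual fine-tuning setup, assuming a Hugging Face workflow; the model, toy data, and hyperparameters are placeholders, not the paper's configuration:

```python
# Pool training data from several languages and fine-tune one pretrained
# multilingual encoder on the union, instead of one model per language.
from datasets import Dataset, concatenate_datasets
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-multilingual-cased", num_labels=2)

hi = Dataset.from_dict({"text": ["यह अच्छा है", "यह बुरा है"], "label": [1, 0]})
en = Dataset.from_dict({"text": ["this is good", "this is bad"], "label": [1, 0]})
pooled = concatenate_datasets([hi, en]).map(
    lambda ex: tokenizer(ex["text"], truncation=True, padding="max_length",
                         max_length=32))

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=pooled,
)
trainer.train()
```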
no code implementations • MTSummit 2021 • Tamali Banerjee, Rudra Murthy V, Pushpak Bhattacharyya
In this paper, we show that initializing the embedding layer of UNMT models with cross-lingual embeddings yields significant BLEU improvements over existing approaches that initialize embeddings randomly.
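A small sketch of the embedding-initialization idea: copy pretrained cross-lingual vectors into the model's embedding layer before training. The vocabulary and vectors below are placeholders:

```python
# Initialize a model's embedding layer with pretrained cross-lingual
# embeddings instead of random vectors.
import torch
import torch.nn as nn

vocab = ["the", "cat", "chat", "le"]            # shared source/target vocab
emb_dim = 4
# Placeholder rows standing in for cross-lingual embeddings (e.g. MUSE/vecmap).
pretrained = {w: torch.randn(emb_dim) for w in vocab}

embedding = nn.Embedding(len(vocab), emb_dim)
with torch.no_grad():
    for idx, word in enumerate(vocab):
        embedding.weight[idx] = pretrained[word]
# The UNMT encoder/decoder would then be built on top of `embedding`;
# its weights can be kept frozen or fine-tuned during training.
```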
no code implementations • MTSummit 2021 • Tamali Banerjee, Rudra Murthy V, Pushpak Bhattacharyya
We hypothesise that the reason behind the 'scrambled translation problem' is the shuffling noise introduced into every input sentence as a denoising strategy.
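A sketch of this shuffling noise, following the common UNMT recipe in which each token may move at most k positions from its original spot; the window size k=3 is an assumption:

```python
# 'Shuffling noise' for denoising pretraining: perturb each position by a
# random offset in [0, k], then sort tokens by the perturbed positions.
import random

def shuffle_noise(tokens, k=3, seed=0):
    rng = random.Random(seed)
    keys = [i + rng.uniform(0, k) for i in range(len(tokens))]
    return [tok for _, tok in sorted(zip(keys, tokens), key=lambda p: p[0])]

print(shuffle_noise("the quick brown fox jumps over the lazy dog".split()))
```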
no code implementations • NAACL 2019 • Rudra Murthy V, Anoop Kunchukuttan, Pushpak Bhattacharyya
To bridge this divergence, we propose to pre-order the assisting-language sentence to match the word order of the source language and then train the parent model.
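A hypothetical alignment-based pre-ordering sketch: reorder the assisting-language tokens by the positions of the source-language words they align to. The toy alignment below is hand-made, and the paper's pre-ordering method may differ:

```python
# Reorder assisting-language tokens to follow the source language's word order.
def preorder(assisting_tokens, alignment):
    """alignment[i] = position in the source sentence that token i maps to."""
    order = sorted(range(len(assisting_tokens)), key=lambda i: alignment[i])
    return [assisting_tokens[i] for i in order]

# English (SVO) reordered toward Hindi-like SOV word order.
tokens = ["John", "ate", "an", "apple"]
alignment = [0, 3, 1, 2]   # "ate" aligns to the sentence-final Hindi verb
print(preorder(tokens, alignment))   # ['John', 'an', 'apple', 'ate']
```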
no code implementations • 1 Jul 2016 • Rudra Murthy V, Mitesh Khapra, Pushpak Bhattacharyya
In this paper, we propose a neural network-based model that shares the decoder as well as word- and character-level parameters between two languages, thereby allowing a resource-fortunate language to aid a resource-deprived language.
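A schematic sketch of such parameter sharing (the architecture details are assumptions, not the paper's exact model): batches from both languages flow through the same character encoder and output decoder, so supervision in a high-resource language also updates the parameters used by the low-resource one:

```python
# Sequence labeller with a character encoder and output decoder shared
# across two languages.
import torch
import torch.nn as nn

class SharedTagger(nn.Module):
    def __init__(self, char_vocab, char_dim, hidden, n_tags):
        super().__init__()
        self.char_emb = nn.Embedding(char_vocab, char_dim)    # shared
        self.encoder = nn.LSTM(char_dim, hidden, batch_first=True,
                               bidirectional=True)            # shared
        self.decoder = nn.Linear(2 * hidden, n_tags)          # shared

    def forward(self, char_ids):                 # (batch, seq_len)
        h, _ = self.encoder(self.char_emb(char_ids))
        return self.decoder(h)                   # (batch, seq_len, n_tags)

model = SharedTagger(char_vocab=100, char_dim=16, hidden=32, n_tags=9)
# Batches from either language pass through the same parameters:
logits = model(torch.randint(0, 100, (2, 12)))
print(logits.shape)   # torch.Size([2, 12, 9])
```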