no code implementations • EMNLP 2020 • Aakriti Budhraja, Madhura Pande, Preksha Nema, Pratyush Kumar, Mitesh M. Khapra
Given the success of Transformer-based models, two directions of study have emerged: interpreting the role of individual attention heads and down-sizing the models for efficiency.
no code implementations • 11 Feb 2025 • Alan Saji, Jaavid Aktar Husain, Thanmay Jayakumar, Raj Dabre, Anoop Kunchukuttan, Mitesh M. Khapra, Ratish Puduppully
For non-Latin script languages, we investigate the role of romanization - the representation of non-Latin scripts using Latin characters - as a bridge in multilingual processing.
1 code implementation • 13 Jan 2025 • Oikantik Nath, Hanani Bathina, Mohammed Safi Ur Rahman Khan, Mitesh M. Khapra
To address this gap, we introduce FERMAT, a benchmark designed to assess the ability of VLMs to detect, localize and correct errors in handwritten mathematical content.
1 code implementation • 28 Nov 2024 • Sanjay Suryanarayanan, Haiyue Song, Mohammed Safi Ur Rahman Khan, Anoop Kunchukuttan, Mitesh M. Khapra, Raj Dabre
To address the challenge of aligning documents using sentence and chunk-level alignments, we propose a novel scoring method, Document Alignment Coefficient (DAC).
no code implementations • 19 Nov 2024 • Praveen Srinivasa Varadhan, Amogh Gulati, Ashwin Sankar, Srija Anand, Anirudh Gupta, Anirudh Mukherjee, Shiva Kumar Marepally, Ankur Bhatia, Saloni Jaju, Suvrat Bhooshan, Mitesh M. Khapra
The MUSHRA test is a promising alternative for evaluating multiple TTS systems simultaneously, but in this work we show that its reliance on matching human reference speech unduly penalises the scores of modern TTS systems that can exceed human speech quality.
no code implementations • 23 Oct 2024 • Srija Anand, Praveen Srinivasa Varadhan, Mehak Singal, Mitesh M. Khapra
We propose this methodology as a viable alternative for languages with limited access to high-quality data, enabling them to collectively benefit from shared resources.
Automatic Speech Recognition (ASR)
1 code implementation • 17 Oct 2024 • Sumanth Doddapaneni, Mohammed Safi Ur Rahman Khan, Dilip Venkatesh, Raj Dabre, Anoop Kunchukuttan, Mitesh M. Khapra
This would enable benchmarking of general-purpose multilingual LLMs and facilitate meta-evaluation of Evaluator LLMs.
no code implementations • 26 Aug 2024 • Kaushal Santosh Bhogale, Deovrat Mehendale, Niharika Parasa, Sathish Kumar Reddy G, Tahir Javed, Pratyush Kumar, Mitesh M. Khapra
In this study, we tackle the challenge of limited labeled data for low-resource languages in ASR, focusing on Hindi.
1 code implementation • 21 Aug 2024 • Tahir Javed, Janki Nawale, Sakshi Joshi, Eldho George, Kaushal Bhogale, Deovrat Mehendale, Mitesh M. Khapra
Hindi, one of the most widely spoken languages of India, exhibits a diverse array of accents due to its usage among individuals from diverse linguistic origins.
1 code implementation • 19 Jul 2024 • Praveen Srinivasa Varadhan, Ashwin Sankar, Giri Raju, Mitesh M. Khapra
We release Rasa, the first multilingual expressive TTS dataset for any Indian language, which contains 10 hours of neutral speech and 1-3 hours of expressive speech for each of the 6 Ekman emotions covering 3 languages: Assamese, Bengali, & Tamil.
1 code implementation • 18 Jul 2024 • Srija Anand, Praveen Srinivasa Varadhan, Ashwin Sankar, Giri Raju, Mitesh M. Khapra
Publicly available TTS datasets for low-resource languages like Hindi and Tamil typically contain 10-20 hours of data, leading to poor vocabulary coverage.
1 code implementation • 8 Jul 2024 • Nandini Mundra, Aditya Nanda Kishore, Raj Dabre, Ratish Puduppully, Anoop Kunchukuttan, Mitesh M. Khapra
Language Models (LMs) excel in natural language processing tasks for English but show reduced performance in most other languages.
1 code implementation • 19 Jun 2024 • Sumanth Doddapaneni, Mohammed Safi Ur Rahman Khan, Sshubam Verma, Mitesh M. Khapra
Large Language Models (LLMs) are increasingly relied upon to evaluate text outputs of other LLMs, thereby influencing leaderboards and development decisions.
1 code implementation • 11 Mar 2024 • Mohammed Safi Ur Rahman Khan, Priyam Mehta, Ananth Sankar, Umashankar Kumaravelan, Sumanth Doddapaneni, Suriyaprasaad B, Varun Balan G, Sparsh Jain, Anoop Kunchukuttan, Pratyush Kumar, Raj Dabre, Mitesh M. Khapra
We hope that the datasets, tools, and resources released as a part of this work will not only propel the research and development of Indic LLMs but also establish an open-source blueprint for extending such efforts to other languages.
1 code implementation • 26 Jan 2024 • Jay Gala, Thanmay Jayakumar, Jaavid Aktar Husain, Aswanth Kumar M, Mohammed Safi Ur Rahman Khan, Diptesh Kanojia, Ratish Puduppully, Mitesh M. Khapra, Raj Dabre, Rudra Murthy, Anoop Kunchukuttan
We announce the initial release of "Airavata," an instruction-tuned LLM for Hindi.
1 code implementation • 25 May 2023 • Jay Gala, Pranjal A. Chitale, Raghavan AK, Varun Gumma, Sumanth Doddapaneni, Aswanth Kumar, Janki Nawale, Anupama Sujatha, Ratish Puduppully, Vivek Raghavan, Pratyush Kumar, Mitesh M. Khapra, Raj Dabre, Anoop Kunchukuttan
Prior to this work, there was (i) no parallel training data spanning all 22 languages, (ii) no robust benchmarks covering all these languages and containing content relevant to India, and (iii) no existing translation models which support all the 22 scheduled languages of India.
no code implementations • 25 May 2023 • Tahir Javed, Sakshi Joshi, Vignesh Nagarajan, Sai Sundaresan, Janki Nawale, Abhigyan Raman, Kaushal Bhogale, Pratyush Kumar, Mitesh M. Khapra
India is the second largest English-speaking country in the world with a speaker base of roughly 130 million.
Automatic Speech Recognition (ASR)
1 code implementation • 25 May 2023 • Yash Madhani, Mitesh M. Khapra, Anoop Kunchukuttan
We create publicly available language identification (LID) datasets and models in all 22 Indian languages listed in the Indian constitution in both native-script and romanized text.
1 code implementation • 24 May 2023 • Kaushal Santosh Bhogale, Sai Sundaresan, Abhigyan Raman, Tahir Javed, Mitesh M. Khapra, Pratyush Kumar
In this paper, we focus on Indian languages, and make the case that diverse benchmarks are required to evaluate and improve ASR systems for Indian languages.
2 code implementations • 12 May 2023 • Nandini Mundra, Sumanth Doddapaneni, Raj Dabre, Anoop Kunchukuttan, Ratish Puduppully, Mitesh M. Khapra
However, adapters have not been sufficiently analyzed to understand if PEFT translates to benefits in training/deployment efficiency and maintainability/extensibility.
Natural Language Understanding
Parameter-Efficient Fine-Tuning
1 code implementation • 20 Dec 2022 • Ananya B. Sai, Vignesh Nagarajan, Tanay Dixit, Raj Dabre, Anoop Kunchukuttan, Pratyush Kumar, Mitesh M. Khapra
In this paper, we fill this gap by creating an MQM dataset consisting of 7000 fine-grained annotations, spanning 5 Indian languages and 7 MT systems, and use it to establish correlations between annotator scores and scores obtained using existing automatic metrics.
1 code implementation • 20 Dec 2022 • Arnav Mhaske, Harshit Kedia, Sumanth Doddapaneni, Mitesh M. Khapra, Pratyush Kumar, Rudra Murthy V, Anoop Kunchukuttan
The dataset contains more than 400k sentences annotated with a total of at least 100k entities from three standard entity categories (Person, Location, and Organization) for 9 out of the 11 languages.
1 code implementation • 11 Dec 2022 • Sumanth Doddapaneni, Rahul Aralikatte, Gowtham Ramesh, Shreya Goyal, Mitesh M. Khapra, Anoop Kunchukuttan, Pratyush Kumar
Across languages and tasks, IndicXTREME contains a total of 105 evaluation sets, of which 52 are new contributions to the literature.
2 code implementations • 17 Nov 2022 • Gokul Karthik Kumar, Praveen S V, Pratyush Kumar, Mitesh M. Khapra, Karthik Nandakumar
We open-source all models on the Bhashini platform.
Ranked #1 on Speech Synthesis - Hindi on IndicTTS
no code implementations • 26 Aug 2022 • Kaushal Santosh Bhogale, Abhigyan Raman, Tahir Javed, Sumanth Doddapaneni, Anoop Kunchukuttan, Pratyush Kumar, Mitesh M. Khapra
Significantly, we show that adding Shrutilipi to the training set of Wav2Vec models leads to an average decrease in WER of 5.8% for 7 languages on the IndicSUPERB benchmark.
1 code implementation • 24 Aug 2022 • Tahir Javed, Kaushal Santosh Bhogale, Abhigyan Raman, Anoop Kunchukuttan, Pratyush Kumar, Mitesh M. Khapra
We hope IndicSUPERB contributes to the progress of developing speech language understanding models for Indian languages.
Automatic Speech Recognition (ASR)
2 code implementations • 6 May 2022 • Yash Madhani, Sushane Parthan, Priyanka Bedekar, Gokul NC, Ruchi Khapra, Anoop Kunchukuttan, Pratyush Kumar, Mitesh M. Khapra
Transliteration is especially important in the Indian language context due to the coexistence of multiple scripts and the widespread use of romanized inputs.
no code implementations • COLING 2020 • Emil Biju, Anirudh Sriram, Mitesh M. Khapra, Pratyush Kumar
Gesture typing is a method of typing words on a touch-based keyboard by creating a continuous trace passing through the relevant keys.
no code implementations • 12 Mar 2022 • Shreya Goyal, Sumanth Doddapaneni, Mitesh M. Khapra, Balaraman Ravindran
In the past few years, it has become increasingly evident that deep neural networks are not resilient enough to withstand adversarial perturbations in input data, leaving them vulnerable to attack.
1 code implementation • ACL 2022 • Akash Kumar Mohankumar, Mitesh M. Khapra
In this work, we introduce Active Evaluation, a framework to efficiently identify the top-ranked system by actively choosing system pairs for comparison using dueling bandit algorithms.
no code implementations • 10 Mar 2022 • Aman Kumar, Himani Shrotriya, Prachi Sahu, Raj Dabre, Ratish Puduppully, Anoop Kunchukuttan, Amogh Mishra, Mitesh M. Khapra, Pratyush Kumar
Natural Language Generation (NLG) for non-English languages is hampered by the scarcity of datasets in these languages.
no code implementations • 6 Nov 2021 • Tahir Javed, Sumanth Doddapaneni, Abhigyan Raman, Kaushal Santosh Bhogale, Gowtham Ramesh, Anoop Kunchukuttan, Pratyush Kumar, Mitesh M. Khapra
Second, using this raw speech data we pretrain several variants of wav2vec style models for 40 Indian languages.
no code implementations • 9 Oct 2021 • Sahana Ramnath, Preksha Nema, Deep Sahni, Mitesh M. Khapra
As neural-network-based QA models become deeper and more complex, there is a demand for robust frameworks which can access a model's rationale for its prediction.
no code implementations • 26 Sep 2021 • Aakriti Budhraja, Madhura Pande, Pratyush Kumar, Mitesh M. Khapra
Large multilingual models, such as mBERT, have shown promise in crosslingual transfer.
1 code implementation • EMNLP 2021 • Ananya B. Sai, Tanay Dixit, Dev Yashpal Sheth, Sreyas Mohan, Mitesh M. Khapra
Natural Language Generation (NLG) evaluation is a multifaceted task requiring assessment of multiple desirable criteria, e.g., fluency, coherency, coverage, relevance, adequacy, overall quality, etc.
3 code implementations • Findings (ACL) 2022 • Raj Dabre, Himani Shrotriya, Anoop Kunchukuttan, Ratish Puduppully, Mitesh M. Khapra, Pratyush Kumar
We present IndicBART, a multilingual, sequence-to-sequence pre-trained model focusing on 11 Indic languages and English.
no code implementations • 1 Jul 2021 • Sumanth Doddapaneni, Gowtham Ramesh, Mitesh M. Khapra, Anoop Kunchukuttan, Pratyush Kumar
Multilingual Language Models (MLLMs) such as mBERT, XLM, XLM-R, etc.
Joint Multilingual Sentence Representations
Multilingual text classification
no code implementations • NAACL 2021 • Mitesh M. Khapra, Ananya B. Sai
(iv) What are the criticisms and shortcomings of existing metrics?
1 code implementation • 22 Jan 2021 • Madhura Pande, Aakriti Budhraja, Preksha Nema, Pratyush Kumar, Mitesh M. Khapra
There are two main challenges with existing methods for classification: (a) there are no standard scores across studies or across functional roles, and (b) these scores are often average quantities measured across sentences without capturing statistical significance.
1 code implementation • ICCV 2021 • Dev Yashpal Sheth, Sreyas Mohan, Joshua L. Vincent, Ramon Manzorro, Peter A. Crozier, Mitesh M. Khapra, Eero P. Simoncelli, Carlos Fernandez-Granda
This is advantageous because motion compensation is computationally expensive, and can be unreliable when the input data are noisy.
Ranked #5 on Video Denoising on Set8 sigma40
1 code implementation • Findings of the Association for Computational Linguistics 2020 • Divyanshu Kakwani, Anoop Kunchukuttan, Satish Golla, Gokul N.C., Avik Bhattacharyya, Mitesh M. Khapra, Pratyush Kumar
These resources include: (a) large-scale sentence-level monolingual corpora, (b) pre-trained word embeddings, (c) pre-trained language models, and (d) multiple NLU evaluation datasets (IndicGLUE benchmark).
1 code implementation • EMNLP 2020 • Sahana Ramnath, Preksha Nema, Deep Sahni, Mitesh M. Khapra
BERT and its variants have achieved state-of-the-art performance in various NLP tasks.
no code implementations • 1 Oct 2020 • Ameet Deshpande, Mitesh M. Khapra
Recent advances in Generative Adversarial Networks (GANs) have resulted in their widespread application to multiple domains.
1 code implementation • 23 Sep 2020 • Ananya B. Sai, Akash Kumar Mohankumar, Siddhartha Arora, Mitesh M. Khapra
However, no such data is publicly available, and hence existing models are usually trained using a single relevant response and multiple randomly selected responses from other contexts (random negatives).
no code implementations • 27 Aug 2020 • Ananya B. Sai, Akash Kumar Mohankumar, Mitesh M. Khapra
The expanding number of NLG models and the shortcomings of the current metrics have led to a rapid surge in the number of evaluation metrics proposed since 2014.
no code implementations • 13 Aug 2020 • Madhura Pande, Aakriti Budhraja, Preksha Nema, Pratyush Kumar, Mitesh M. Khapra
We show that a larger fraction of heads have a locality bias as compared to a syntactic bias.
no code implementations • 5 Jul 2020 • Pritha Ganguly, Nitesh Methani, Mitesh M. Khapra, Pratyush Kumar
However, the performance drops drastically when evaluated at a stricter IOU of 0.9, with the best model giving a mAP of 35.70%.
1 code implementation • WS 2020 • Nikita Moghe, Priyesh Vijayan, Balaraman Ravindran, Mitesh M. Khapra
This requires capturing structural, sequential and semantic information from the conversation context and the background resources.
2 code implementations • 30 Apr 2020 • Anoop Kunchukuttan, Divyanshu Kakwani, Satish Golla, Gokul N. C., Avik Bhattacharyya, Mitesh M. Khapra, Pratyush Kumar
We present the IndicNLP corpus, a large-scale, general-domain corpus containing 2.7 billion words for 10 Indian languages from two language families.
2 code implementations • ACL 2020 • Akash Kumar Mohankumar, Preksha Nema, Sharan Narasimhan, Mitesh M. Khapra, Balaji Vasan Srinivasan, Balaraman Ravindran
To make attention mechanisms more faithful and plausible, we propose a modified LSTM cell with a diversity-driven training objective that ensures that the hidden representations learned at different time steps are diverse.
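As a rough illustration of what "diverse hidden representations" can mean, the sketch below penalizes the mean pairwise cosine similarity of per-time-step hidden states; this is a generic stand-in with names of my choosing, not the paper's exact training objective:

```python
import numpy as np

def diversity_penalty(H):
    """Mean pairwise cosine similarity of the rows of H (T x d hidden states).

    Minimizing this value pushes the per-time-step representations
    apart. Illustrative only; not the exact objective from the paper.
    """
    Hn = H / np.linalg.norm(H, axis=1, keepdims=True)  # unit-normalize rows
    S = Hn @ Hn.T                                      # pairwise cosine similarities
    T = H.shape[0]
    off_diag = S.sum() - np.trace(S)                   # exclude self-similarity
    return off_diag / (T * (T - 1))

# Identical rows -> penalty 1.0; mutually orthogonal rows -> penalty 0.0
```

Such a term would be added (with a weight) to the task loss, trading off accuracy against representation diversity.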
no code implementations • 3 Nov 2019 • Sahana Ramnath, Amrita Saha, Soumen Chakrabarti, Mitesh M. Khapra
With the proliferation of multimodal interaction in various domains, there has recently been much interest in text-based image retrieval in the computer vision community.
no code implementations • 3 Sep 2019 • Nitesh Methani, Pritha Ganguly, Mitesh M. Khapra, Pratyush Kumar
However, in practice, this is an unrealistic assumption because many questions require reasoning and thus have real-valued answers which appear neither in a small fixed size vocabulary nor in the image.
Ranked #2 on Chart Question Answering on RealCQA
1 code implementation • IJCNLP 2019 • Preksha Nema, Akash Kumar Mohankumar, Mitesh M. Khapra, Balaji Vasan Srinivasan, Balaraman Ravindran
It is desired that the generated question should be (i) grammatically correct, (ii) answerable from the passage, and (iii) specific to the given answer.
no code implementations • NAACL 2019 • Siddhartha Arora, Mitesh M. Khapra, Harish G. Ramaswamy
In order to overcome this, we use standard simple models which do not capture all pairwise interactions, but learn to emulate certain characteristics of a complex teacher network.
no code implementations • ICLR 2019 • Suman Banerjee, Mitesh M. Khapra
Domain specific goal-oriented dialogue systems typically require modeling three types of inputs, viz., (i) the knowledge-base associated with the domain, (ii) the history of the conversation, which is a sequence of utterances and (iii) the current utterance for which the response needs to be generated.
no code implementations • ICLR 2019 • Ameet Deshpande, Mitesh M. Khapra
Recent advances in Generative Adversarial Networks, facilitated by improvements to the framework and successful application to various problems, have resulted in extensions to multiple domains.
no code implementations • 4 Apr 2019 • Soham Parikh, Ananya B. Sai, Preksha Nema, Mitesh M. Khapra
We believe that the non-adversarial dataset created as a part of this work would complement the research on adversarial evaluation and give a more realistic assessment of the ability of RC models.
1 code implementation • ICLR 2018 • Soham Parikh, Ananya B. Sai, Preksha Nema, Mitesh M. Khapra
Specifically, it has gates which decide whether an option can be eliminated given the (passage, question) pair, and if so, it tries to make the passage representation orthogonal to this eliminated option (akin to ignoring portions of the passage corresponding to the eliminated option).
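The orthogonalization step described here is, at its core, a standard vector projection; a minimal sketch under that reading (function and variable names are mine, and the model's gated version is more involved):

```python
import numpy as np

def orthogonalize(passage, option):
    """Remove from `passage` its component along `option`.

    A textbook orthogonal projection, illustrating the idea of making
    the passage representation orthogonal to an eliminated option.
    """
    o = option / np.linalg.norm(option)   # unit vector along the option
    return passage - np.dot(passage, o) * o

p = orthogonalize(np.array([3.0, 4.0]), np.array([1.0, 0.0]))
# p retains no component along the eliminated-option direction
```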
1 code implementation • CVPR 2019 • Shweta Bhardwaj, Mukundhan Srinivasan, Mitesh M. Khapra
We focus on building compute-efficient video classification models which process fewer frames and hence require fewer FLOPs.
Ranked #2 on Video Classification on YouTube-8M
no code implementations • 23 Feb 2019 • Ananya B. Sai, Mithun Das Gupta, Mitesh M. Khapra, Mukundhan Srinivasan
ADEM (Lowe et al., 2017) formulated the automatic evaluation of dialogue systems as a learning problem and showed that such a model was able to predict responses which correlate significantly with human judgements, both at utterance and system level.
1 code implementation • 26 Dec 2018 • Deepak Mittal, Shweta Bhardwaj, Mitesh M. Khapra, Balaraman Ravindran
In this work, we report experiments which suggest that the comparable performance of the pruned network is not due to the specific criterion chosen but due to the inherent plasticity of deep neural networks which allows them to recover from the loss of pruned filters once the rest of the filters are fine-tuned.
no code implementations • NeurIPS 2018 • Anirban Laha, Saneem A. Chemmengath, Priyanka Agrawal, Mitesh M. Khapra, Karthik Sankaranarayanan, Harish G. Ramaswamy
Converting an n-dimensional vector to a probability distribution over n objects is a commonly used component in many machine learning tasks like multiclass classification, multilabel classification, attention mechanisms etc.
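The standard instance of such a vector-to-distribution map is the softmax; as a point of reference for the alternatives this line of work studies, here is a minimal sketch (the function name is mine):

```python
import numpy as np

def softmax(z):
    # Shift by the max for numerical stability; the result is unchanged
    # because softmax is invariant to adding a constant to every entry.
    e = np.exp(z - np.max(z))
    return e / e.sum()

p = softmax(np.array([1.0, 2.0, 3.0]))
# p is a valid probability distribution: strictly positive, sums to 1
```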
1 code implementation • EMNLP 2018 • Nikita Moghe, Siddhartha Arora, Suman Banerjee, Mitesh M. Khapra
Existing dialog datasets contain a sequence of utterances and responses without any explicit background knowledge associated with them.
1 code implementation • EMNLP 2018 • Preksha Nema, Mitesh M. Khapra
In particular, it is important to verify whether such metrics used for evaluating AQG systems focus on answerability of the generated question by preferring questions which contain all relevant information such as question type (Wh-types), entities, relations, etc.
no code implementations • COLING 2018 • Suman Banerjee, Nikita Moghe, Siddhartha Arora, Mitesh M. Khapra
("Can you help me in booking a table at this restaurant?").
no code implementations • 12 Jun 2018 • Revanth Reddy, Rahul Ramesh, Ameet Deshpande, Mitesh M. Khapra
Deep Learning has managed to push boundaries in a wide variety of tasks.
1 code implementation • 31 May 2018 • Priyesh Vijayan, Yash Chandak, Mitesh M. Khapra, Srinivasan Parthasarathy, Balaraman Ravindran
State-of-the-art models for node classification on such attributed graphs use differentiable recursive functions that enable aggregation and filtering of neighborhood information from multiple hops.
1 code implementation • 31 May 2018 • Priyesh Vijayan, Yash Chandak, Mitesh M. Khapra, Srinivasan Parthasarathy, Balaraman Ravindran
Given a graph where every node has certain attributes associated with it and some nodes have labels associated with them, Collective Classification (CC) is the task of assigning labels to every unlabeled node using information from the node as well as its neighbors.
no code implementations • 12 May 2018 • Shweta Bhardwaj, Mitesh M. Khapra
We then train a student network whose objective is to process only a small fraction of the frames in the video and still produce a representation which is very close to the representation computed by the teacher network.
1 code implementation • ACL 2018 • Amrita Saha, Rahul Aralikatte, Mitesh M. Khapra, Karthik Sankaranarayanan
We propose DuoRC, a novel dataset for Reading Comprehension (RC) that motivates several new challenges for neural approaches in language understanding beyond those offered by existing RC datasets.
1 code implementation • NAACL 2018 • Parag Jain, Anirban Laha, Karthik Sankaranarayanan, Preksha Nema, Mitesh M. Khapra, Shreyas Shetty
Structured data summarization involves generation of natural language summaries from structured input data.
2 code implementations • NAACL 2018 • Preksha Nema, Shreyas Shetty, Parag Jain, Anirban Laha, Karthik Sankaranarayanan, Mitesh M. Khapra
For example, while generating descriptions from a table, a human would attend to information at two levels: (i) the fields (macro level) and (ii) the values within the field (micro level).
1 code implementation • 31 Jan 2018 • Deepak Mittal, Shweta Bhardwaj, Mitesh M. Khapra, Balaraman Ravindran
In this work, we report experiments which suggest that the comparable performance of the pruned network is not due to the specific criterion chosen but due to the inherent plasticity of deep neural networks which allows them to recover from the loss of pruned filters once the rest of the filters are fine-tuned.
1 code implementation • 31 Jan 2018 • Amrita Saha, Vardaan Pahuja, Mitesh M. Khapra, Karthik Sankaranarayanan, Sarath Chandar
Further, unlike existing large scale QA datasets which contain simple questions that can be answered from a single tuple, the questions in our dialogs require a larger subgraph of the KG.
no code implementations • EACL 2017 • Sathish Reddy, Dinesh Raghu, Mitesh M. Khapra, Sachindra Joshi
To generate such QA pairs, we first extract a set of keywords from entities and relationships expressed in a triple stored in the knowledge graph.
no code implementations • COLING 2016 • Amrita Saha, Mitesh M. Khapra, Sarath Chandar, Janarthanan Rajendran, Kyunghyun Cho
However, there is no parallel training data available between X and Y, but training data is available between X & Z and Z & Y (as is often the case in many real world applications).
1 code implementation • NAACL 2016 • Janarthanan Rajendran, Mitesh M. Khapra, Sarath Chandar, Balaraman Ravindran
In this work, we address a real-world scenario where no direct parallel data is available between two views of interest (say, $V_1$ and $V_2$) but parallel data is available between each of these views and a pivot view ($V_3$).
2 code implementations • 10 Oct 2015 • Janarthanan Rajendran, Aravind Srinivas, Mitesh M. Khapra, P. Prasanna, Balaraman Ravindran
Second, the agent should be able to selectively transfer, which is the ability to select and transfer from different and multiple source tasks for different parts of the state space of the target task.
2 code implementations • 27 Apr 2015 • Sarath Chandar, Mitesh M. Khapra, Hugo Larochelle, Balaraman Ravindran
CCA based approaches learn a joint representation by maximizing correlation of the views when projected to the common subspace.
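To make the correlation objective concrete, the sketch below evaluates it for given projection directions; CCA itself solves for the directions that maximize this quantity (names here are illustrative, not from the paper):

```python
import numpy as np

def projected_correlation(X, Y, w, v):
    """Correlation between two views after projecting onto w and v.

    CCA seeks the w, v that maximize exactly this quantity; here we
    only evaluate it for fixed directions, as an illustration.
    """
    a = X @ w
    b = Y @ v
    a = a - a.mean()                      # center each projected view
    b = b - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

For two views that are exact linear copies of each other, any matched pair of directions attains the maximum correlation of 1.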
no code implementations • LREC 2014 • Mitesh M. Khapra, Ananthakrishnan Ramanathan, Anoop Kunchukuttan, Karthik Visweswariah, Pushpak Bhattacharyya
In contrast, we propose a low-cost QC mechanism which is fair to both workers and requesters.
no code implementations • NeurIPS 2014 • Sarath Chandar A P, Stanislas Lauly, Hugo Larochelle, Mitesh M. Khapra, Balaraman Ravindran, Vikas Raykar, Amrita Saha
Cross-language learning allows us to use training data from one language to build models for a different language.
no code implementations • LREC 2012 • Anoop Kunchukuttan, Shourya Roy, Pratik Patel, Kushal Ladha, Somya Gupta, Mitesh M. Khapra, Pushpak Bhattacharyya
The logistics of collecting resources for Machine Translation (MT) has always been a cause of concern for some of the resource deprived languages of the world.