Search Results for author: Aditya Siddhant

Found 22 papers, 6 papers with code

XTREME: A Massively Multilingual Multi-task Benchmark for Evaluating Cross-lingual Generalisation

2 code implementations • ICML 2020 • Junjie Hu, Sebastian Ruder, Aditya Siddhant, Graham Neubig, Orhan Firat, Melvin Johnson

However, these broad-coverage benchmarks have been mostly limited to English, and despite an increasing interest in multilingual models, a benchmark that enables the comprehensive evaluation of such methods on a diverse range of languages and tasks is still missing.

Retrieval, Zero-Shot Cross-Lingual Transfer

Dialect-robust Evaluation of Generated Text

no code implementations • 2 Nov 2022 • Jiao Sun, Thibault Sellam, Elizabeth Clark, Tu Vu, Timothy Dozat, Dan Garrette, Aditya Siddhant, Jacob Eisenstein, Sebastian Gehrmann

Evaluation metrics that are not robust to dialect variation make it impossible to tell how well systems perform for many groups of users, and can even penalize systems for producing text in lower-resource dialects.

Towards the Next 1000 Languages in Multilingual Machine Translation: Exploring the Synergy Between Supervised and Self-Supervised Learning

no code implementations • 9 Jan 2022 • Aditya Siddhant, Ankur Bapna, Orhan Firat, Yuan Cao, Mia Xu Chen, Isaac Caswell, Xavier Garcia

While recent progress in massively multilingual MT brings us one step closer to this goal, it is becoming evident that extending a multilingual MT system simply by training on more parallel data does not scale, since the availability of labeled data for low-resource and non-English-centric language pairs is prohibitively limited.

Machine Translation, Self-Supervised Learning (+1)

nmT5 - Is parallel data still relevant for pre-training massively multilingual language models?

no code implementations • ACL 2021 • Mihir Kale, Aditya Siddhant, Rami Al-Rfou, Linting Xue, Noah Constant, Melvin Johnson

Recently, mT5, a massively multilingual version of T5, leveraged a unified text-to-text format to attain state-of-the-art results on a wide variety of multilingual NLP tasks.

Language Modelling, Machine Translation (+2)

Distilling Large Language Models into Tiny and Effective Students using pQRNN

no code implementations • 21 Jan 2021 • Prabhu Kaliamoorthi, Aditya Siddhant, Edward Li, Melvin Johnson

Our strong results suggest that our approach is well suited to latency-sensitive applications while still being able to leverage large mBERT-like models.

Data Augmentation, Semantic Parsing (+1)
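
The listing doesn't spell out the training objective, so as a rough illustration of the teacher-student setup named in the title, here is the standard soft-label distillation loss (Hinton et al., 2015). This is a generic PyTorch sketch, not the authors' pQRNN recipe; the function name and temperature value are illustrative assumptions.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Generic soft-label distillation (Hinton et al., 2015), not the
    paper's exact objective: fit the student's distribution to the
    teacher's temperature-smoothed distribution."""
    t = temperature
    log_student = F.log_softmax(student_logits / t, dim=-1)
    soft_teacher = F.softmax(teacher_logits / t, dim=-1)
    # The t**2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * t**2
```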

mT5: A massively multilingual pre-trained text-to-text transformer

6 code implementations • NAACL 2021 • Linting Xue, Noah Constant, Adam Roberts, Mihir Kale, Rami Al-Rfou, Aditya Siddhant, Aditya Barua, Colin Raffel

The recent "Text-to-Text Transfer Transformer" (T5) leveraged a unified text-to-text format and scale to attain state-of-the-art results on a wide variety of English-language NLP tasks.

Common Sense Reasoning, Natural Language Inference (+3)
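
To make the text-to-text format concrete: the released mT5 checkpoints can be loaded through Hugging Face's transformers library (a tooling assumption, not something covered by the paper itself). A minimal sketch follows; note that mT5 is pre-trained only on unlabeled mC4 text, so unlike T5 it generally needs fine-tuning before a prompt like the one below produces useful output.

```python
# Minimal sketch, assuming the `transformers` and `sentencepiece` packages
# and the public `google/mt5-small` checkpoint.
from transformers import AutoTokenizer, MT5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("google/mt5-small")
model = MT5ForConditionalGeneration.from_pretrained("google/mt5-small")

# Every task is cast as text in, text out; after fine-tuning, translation
# is just another string-to-string mapping.
inputs = tokenizer("translate English to German: The house is wonderful.",
                   return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```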

Explicit Alignment Objectives for Multilingual Bidirectional Encoders

no code implementations • NAACL 2021 • Junjie Hu, Melvin Johnson, Orhan Firat, Aditya Siddhant, Graham Neubig

Pre-trained cross-lingual encoders such as mBERT (Devlin et al., 2019) and XLM-R (Conneau et al., 2020) have proven impressively effective at enabling transfer learning of NLP systems from high-resource languages to low-resource languages.

Retrieval, Sentence Classification (+2)

Evaluating the Cross-Lingual Effectiveness of Massively Multilingual Neural Machine Translation

no code implementations • 1 Sep 2019 • Aditya Siddhant, Melvin Johnson, Henry Tsai, Naveen Arivazhagan, Jason Riesa, Ankur Bapna, Orhan Firat, Karthik Raman

The recently proposed massively multilingual neural machine translation (NMT) system has been shown to be capable of translating over 100 languages to and from English within a single model.

Cross-Lingual Transfer, Machine Translation (+3)

Unsupervised Transfer Learning for Spoken Language Understanding in Intelligent Agents

1 code implementation • 13 Nov 2018 • Aditya Siddhant, Anuj Goyal, Angeliki Metallinou

Our findings suggest that unsupervised pre-training on a large corpus of unlabeled utterances leads to significantly better SLU performance than training from scratch, and that it can even outperform conventional supervised transfer.

Language Modelling, Spoken Language Understanding (+2)

Deep Bayesian Active Learning for Natural Language Processing: Results of a Large-Scale Empirical Study

no code implementations • EMNLP 2018 • Aditya Siddhant, Zachary C. Lipton

This paper provides a large-scale empirical study of deep active learning, addressing multiple tasks and, for each, multiple datasets, multiple models, and a full suite of acquisition functions.

Active Learning, Open-Ended Question Answering
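
Among the acquisition functions a study like this typically covers is BALD under Monte Carlo dropout. As a generic PyTorch sketch (not the paper's code; the function name and sample count are illustrative assumptions), it can be computed like this:

```python
import torch

def bald_scores(model, x, n_samples=10):
    """BALD acquisition under MC dropout (generic sketch, not the paper's
    code): mutual information between the predicted label and the model
    parameters. Higher scores mark more informative points to label."""
    model.train()  # keep dropout layers stochastic at inference time
    with torch.no_grad():
        # (n_samples, batch, classes): stack of stochastic forward passes
        probs = torch.stack([model(x).softmax(dim=-1) for _ in range(n_samples)])
    mean_probs = probs.mean(dim=0)
    entropy_of_mean = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(-1)
    mean_entropy = -(probs * probs.clamp_min(1e-12).log()).sum(-1).mean(dim=0)
    return entropy_of_mean - mean_entropy  # mutual information per example
```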

Leveraging Native Language Speech for Accent Identification using Deep Siamese Networks

no code implementations • 25 Dec 2017 • Aditya Siddhant, Preethi Jyothi, Sriram Ganapathy

The problem of automatic accent identification is important for several applications like speaker profiling and recognition as well as for improving speech recognition systems.

Speaker Profiling, Speech Recognition (+1)
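
The snippet doesn't reproduce the network itself, but a Siamese model like the one named in the title is typically trained with the classic contrastive loss (Hadsell et al., 2006). The sketch below is a generic illustration, not the authors' objective; the `same_accent` pair labels are an assumed input.

```python
import torch.nn.functional as F

def contrastive_loss(emb_a, emb_b, same_accent, margin=1.0):
    """Classic Siamese contrastive loss (generic sketch, not the paper's
    exact objective): pull embeddings of same-accent utterance pairs
    together, push different-accent pairs at least `margin` apart.
    `same_accent` is a {0, 1} float tensor."""
    dist = F.pairwise_distance(emb_a, emb_b)
    pos = same_accent * dist.pow(2)
    neg = (1.0 - same_accent) * F.relu(margin - dist).pow(2)
    return (pos + neg).mean()
```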
