no code implementations • IWSLT (EMNLP) 2018 • Hirofumi Inaguma, Xuan Zhang, Zhiqi Wang, Adithya Renduchintala, Shinji Watanabe, Kevin Duh
This paper describes the Johns Hopkins University (JHU) and Kyoto University submissions to the Speech Translation evaluation campaign at IWSLT2018.
no code implementations • 12 Feb 2024 • Holger R. Roth, Ziyue Xu, Yuan-Ting Hsieh, Adithya Renduchintala, Isaac Yang, Zhihong Zhang, Yuhong Wen, Sean Yang, Kevin Lu, Kristopher Kersten, Camir Ricketts, Daguang Xu, Chester Chen, Yan Cheng, Andrew Feng
In the ever-evolving landscape of artificial intelligence (AI) and large language models (LLMs), handling and leveraging data effectively has become a critical challenge.
no code implementations • 16 Nov 2023 • Adithya Renduchintala, Tugrul Konuk, Oleksii Kuchaiev
We introduce Tied-LoRA, a novel paradigm leveraging weight tying and selective training to enhance the parameter efficiency of Low-rank Adaptation (LoRA).
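The core idea of weight tying in a LoRA setting can be sketched as follows: share one pair of low-rank factors across all layers, keeping only a tiny per-layer vector trainable. This is a minimal illustrative sketch, not the paper's exact parameterization; the class name, shapes, and the per-layer `scale` vector are assumptions for illustration.

```python
import numpy as np

class TiedLoRALinear:
    """Frozen linear layer plus a low-rank (LoRA) update whose factors
    A and B are tied (shared) across layers; only a small per-layer
    scaling vector is layer-specific."""
    def __init__(self, W, A, B):
        self.W = W            # frozen pretrained weight, shape (out, in)
        self.A = A            # tied down-projection, shape (rank, in)
        self.B = B            # tied up-projection, shape (out, rank)
        self.scale = np.ones(A.shape[0])  # per-layer trainable vector

    def forward(self, x):
        # y = W x + B diag(scale) A x
        return self.W @ x + self.B @ (self.scale * (self.A @ x))

rng = np.random.default_rng(0)
d, r = 8, 2
A = rng.normal(size=(r, d)) * 0.01   # shared across all layers
B = np.zeros((d, r))                 # zero-init: the update starts as a no-op
layers = [TiedLoRALinear(rng.normal(size=(d, d)), A, B) for _ in range(3)]

# Trainable parameters: one tied A, one tied B, plus a tiny vector per layer,
# instead of a separate A/B pair for every layer as in vanilla LoRA.
tied = A.size + B.size
per_layer = sum(l.scale.size for l in layers)
print(tied, per_layer)  # 32 6
```

With untied LoRA the three layers would each carry their own 32-parameter A/B pair; tying reduces the trainable budget to one shared pair plus 2 values per layer.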
no code implementations • EACL 2021 • Xiang Kong, Adithya Renduchintala, James Cross, Yuqing Tang, Jiatao Gu, Xian Li
Recent work in multilingual translation has advanced translation quality beyond bilingual baselines by using deep Transformer models with increased capacity.
no code implementations • ACL 2021 • Adithya Renduchintala, Denise Diaz, Kenneth Heafield, Xian Li, Mona Diab
Is bias amplified when neural machine translation (NMT) models are optimized for speed and evaluated on generic test sets using BLEU?
1 code implementation • ACL 2021 • Wei-Jen Ko, Ahmed El-Kishky, Adithya Renduchintala, Vishrav Chaudhary, Naman Goyal, Francisco Guzmán, Pascale Fung, Philipp Koehn, Mona Diab
The scarcity of parallel data is a major obstacle for training high-quality machine translation systems for low-resource languages.
no code implementations • EMNLP 2021 • Ahmed El-Kishky, Adithya Renduchintala, James Cross, Francisco Guzmán, Philipp Koehn
Cross-lingual named-entity lexica are an important resource for multilingual NLP tasks such as machine translation and cross-lingual wikification.
no code implementations • ACL 2022 • Adithya Renduchintala, Adina Williams
Transformer-based models are the modern workhorses for neural machine translation (NMT), reaching state-of-the-art performance across several benchmarks.
no code implementations • EACL 2021 • Yi-Lin Tuan, Ahmed El-Kishky, Adithya Renduchintala, Vishrav Chaudhary, Francisco Guzmán, Lucia Specia
Quality estimation aims to measure the quality of translated content without access to a reference translation.
1 code implementation • 29 Dec 2020 • Yilun Zhou, Adithya Renduchintala, Xian Li, Sida Wang, Yashar Mehdad, Asish Ghoshal
Active learning (AL) algorithms may achieve better performance with fewer data because the model guides the data selection process.
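One classic way a model can guide data selection is uncertainty sampling: label the examples the current model is least confident about. The sketch below illustrates that acquisition step only; it is not the paper's specific AL algorithm, and the function name and toy probabilities are assumptions.

```python
import numpy as np

def uncertainty_sampling(probs, k):
    """Pick the k unlabeled examples the model is least confident about,
    measured by the lowest maximum class probability."""
    confidence = probs.max(axis=1)
    return np.argsort(confidence)[:k]

# Model's predicted class probabilities for 5 unlabeled examples.
probs = np.array([
    [0.95, 0.05],   # confident
    [0.55, 0.45],   # uncertain
    [0.80, 0.20],
    [0.51, 0.49],   # most uncertain
    [0.99, 0.01],
])
picked = uncertainty_sampling(probs, k=2)
print(sorted(picked.tolist()))  # [1, 3] -> send the two least-confident examples for labeling
```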
no code implementations • Asian Chapter of the Association for Computational Linguistics 2020 • Shuo Sun, Marina Fomicheva, Frédéric Blain, Vishrav Chaudhary, Ahmed El-Kishky, Adithya Renduchintala, Francisco Guzmán, Lucia Specia
Predicting the quality of machine translation has traditionally been addressed with language-specific models, under the assumption that the quality label distribution or linguistic features exhibit traits that are not shared across languages.
no code implementations • IJCNLP 2019 • Adithya Renduchintala, Philipp Koehn, Jason Eisner
We present a machine foreign-language teacher that modifies text in a student's native language (L1) by replacing some word tokens with glosses in a foreign language (L2), in such a way that the student can acquire L2 vocabulary simply by reading the resulting macaronic text.
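The replacement step itself can be sketched in a few lines: swap selected L1 tokens for their L2 glosses. This is a toy illustration with a fixed replacement set; the paper instead chooses which tokens to replace using a model of student comprehension, and the function name and example glosses here are assumptions.

```python
def macaronic(sentence, glosses, replace):
    """Replace the chosen L1 tokens with their L2 glosses,
    producing mixed-language 'macaronic' text."""
    return " ".join(glosses[w] if w in replace else w
                    for w in sentence.split())

# Toy English (L1) sentence with German (L2) glosses.
glosses = {"dog": "Hund", "house": "Haus"}
text = macaronic("the dog is in the house", glosses, replace={"dog"})
print(text)  # the Hund is in the house
```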
no code implementations • WS 2019 • Adithya Renduchintala, Philipp Koehn, Jason Eisner
We accomplish this by modifying a cloze language model to incrementally learn new vocabulary items, and use this language model as a proxy for the word guessing and learning ability of real students.
no code implementations • WS 2019 • Shuoyang Ding, Adithya Renduchintala, Kevin Duh
Most neural machine translation systems are built upon subword units extracted by methods such as Byte-Pair Encoding (BPE) or wordpiece.
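BPE, the subword extraction method named here, can be sketched as repeatedly merging the most frequent adjacent symbol pair. The version below is simplified from the standard algorithm (no end-of-word marker, no corpus frequency weighting) and is only meant to show the merge loop.

```python
from collections import Counter

def learn_bpe(words, num_merges):
    """Toy Byte-Pair Encoding: start from characters and repeatedly
    merge the most frequent adjacent symbol pair."""
    vocab = [list(w) for w in words]
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for symbols in vocab:
            for a, b in zip(symbols, symbols[1:]):
                pairs[(a, b)] += 1
        if not pairs:
            break
        best = max(pairs, key=pairs.get)  # most frequent adjacent pair
        merges.append(best)
        merged = best[0] + best[1]
        for i, symbols in enumerate(vocab):
            out, j = [], 0
            while j < len(symbols):
                if j + 1 < len(symbols) and (symbols[j], symbols[j + 1]) == best:
                    out.append(merged)  # apply the merge everywhere
                    j += 2
                else:
                    out.append(symbols[j])
                    j += 1
            vocab[i] = out
    return merges, vocab

merges, segmented = learn_bpe(["lower", "lowest", "newer"], num_merges=3)
print(merges[0])  # ('w', 'e') -- the most frequent pair in this toy corpus
```

After three merges the words are segmented into learned subword units rather than raw characters, which is the representation most NMT systems operate on.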
no code implementations • 10 Dec 2018 • Matthew Wiesner, Adithya Renduchintala, Shinji Watanabe, Chunxi Liu, Najim Dehak, Sanjeev Khudanpur
Using transcribed speech from nearby languages gives a further 20-30% relative reduction in character error rate.
no code implementations • WS 2019 • Adithya Renduchintala, Pamela Shapiro, Kevin Duh, Philipp Koehn
Neural machine translation (NMT) systems operate primarily on words (or sub-words), ignoring lower-level patterns of morphology.
no code implementations • 30 Mar 2018 • Shinji Watanabe, Takaaki Hori, Shigeki Karita, Tomoki Hayashi, Jiro Nishitoba, Yuya Unno, Nelson Enrique Yalta Soplin, Jahn Heymann, Matthew Wiesner, Nanxin Chen, Adithya Renduchintala, Tsubasa Ochiai
This paper introduces a new open source platform for end-to-end speech processing named ESPnet.
no code implementations • 27 Mar 2018 • Adithya Renduchintala, Shuoyang Ding, Matthew Wiesner, Shinji Watanabe
We present a new end-to-end architecture for automatic speech recognition (ASR) that can be trained using symbolic input in addition to the traditional acoustic input.
no code implementations • CoNLL 2017 • Adithya Renduchintala, Philipp Koehn, Jason Eisner
We present a feature-rich knowledge tracing method that captures a student's acquisition and retention of knowledge during a foreign language phrase learning task.