Search Results for author: Nikhil Kumar Lakumarapu

Found 8 papers, 1 paper with code

Language Model Augmented Monotonic Attention for Simultaneous Translation

no code implementations · NAACL 2022 · Sathish Reddy Indurthi, Mohd Abbas Zaidi, Beomseok Lee, Nikhil Kumar Lakumarapu, Sangha Kim

The state-of-the-art adaptive policies for Simultaneous Neural Machine Translation (SNMT) use monotonic attention to perform read/write decisions based on the partial source and target sequences.

Language Modelling · Machine Translation · +2
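The read/write mechanism described above can be illustrated with a toy decoding loop, where a monotonic-attention-style selection probability decides at each step whether to consume another source token (read) or emit a target token (write). The `translate_step` and `select_prob` callables below are hypothetical stand-ins for the model components, not the paper's actual architecture:

```python
def simulate_read_write(source, translate_step, select_prob, max_len=20):
    """Toy simultaneous-decoding loop (illustrative sketch, not the paper's
    model). At each step, a monotonic-attention-style selection probability
    decides whether to READ the next source token or WRITE a target token."""
    read, target = [], []
    while len(target) < max_len:
        # Write when the policy says so, or when the source is exhausted.
        if read and (not source or select_prob(read, target) >= 0.5):
            tok = translate_step(read, target)  # WRITE: emit next target token
            if tok == "<eos>":
                break
            target.append(tok)
        elif source:
            read.append(source.pop(0))          # READ: consume next source token
        else:
            break
    return target
```

Plugging in a deterministic policy that writes only when more source than target tokens have been seen reproduces a simple wait-1 behaviour.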

Infusing Future Information into Monotonic Attention Through Language Models

1 code implementation · 7 Sep 2021 · Mohd Abbas Zaidi, Sathish Indurthi, Beomseok Lee, Nikhil Kumar Lakumarapu, Sangha Kim

Simultaneous neural machine translation (SNMT) models start emitting the target sequence before they have processed the full source sequence.

Language Modelling · Machine Translation · +2

Faster Re-translation Using Non-Autoregressive Model For Simultaneous Neural Machine Translation

no code implementations · 29 Dec 2020 · Hyojung Han, Sathish Indurthi, Mohd Abbas Zaidi, Nikhil Kumar Lakumarapu, Beomseok Lee, Sangha Kim, Chanwoo Kim, Inchul Hwang

The current re-translation approaches are based on autoregressive sequence generation models (ReTA), which generate target tokens in the (partial) translation sequentially.

Machine Translation · Translation
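The re-translation setting described above can be sketched as a loop that, on every newly arrived source token, re-decodes the full translation of the source prefix seen so far. In the autoregressive (ReTA) case that re-decode is itself sequential, which is the latency cost the non-autoregressive model targets. The `translate_prefix` callable is a hypothetical stand-in for the translation model:

```python
def retranslate_stream(source_tokens, translate_prefix):
    """Re-translation sketch (illustrative of the ReTA setting, not the
    paper's actual model): each incoming source token triggers a full
    re-decoding of the translation for the source prefix seen so far."""
    history = []  # one (re-)translation per incoming source token
    prefix = []
    for tok in source_tokens:
        prefix.append(tok)
        history.append(translate_prefix(list(prefix)))  # full re-decode
    return history
```

Because each step regenerates the whole partial translation rather than extending it, total decoding work grows quadratically with source length under an autoregressive decoder, which motivates replacing it with a non-autoregressive one.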

Data Efficient Direct Speech-to-Text Translation with Modality Agnostic Meta-Learning

no code implementations · 11 Nov 2019 · Sathish Indurthi, Houjeung Han, Nikhil Kumar Lakumarapu, Beomseok Lee, Insoo Chung, Sangha Kim, Chanwoo Kim

In the meta-learning phase, the parameters of the model are exposed to vast amounts of speech transcripts (e.g., English ASR) and text translations (e.g., English-German MT).

Automatic Speech Recognition · Machine Translation · +5
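The meta-learning phase described above adapts shared parameters across source tasks (e.g. ASR and MT) before fine-tuning on speech translation. As a loose illustration of the idea only, here is a minimal Reptile-style meta-update on a single scalar parameter with quadratic per-task losses; the paper itself uses a MAML-style procedure over full sequence-to-sequence models, so everything below is a simplified assumption:

```python
def reptile_meta_step(theta, task_optima, inner_steps=3, inner_lr=0.1, meta_lr=0.5):
    """Reptile-style meta-update sketch (not the paper's method). Each task
    is a quadratic loss (theta - optimum)^2; the inner loop adapts theta per
    task, and the meta-step moves theta toward the average adapted value."""
    adapted = []
    for optimum in task_optima:          # e.g. one optimum per source task
        t = theta
        for _ in range(inner_steps):     # inner-loop SGD on (t - optimum)^2
            t -= inner_lr * 2 * (t - optimum)
        adapted.append(t)
    # Meta-update: interpolate toward the mean of the adapted parameters.
    return theta + meta_lr * (sum(adapted) / len(adapted) - theta)
```

With two symmetric tasks the adapted parameters cancel and the meta-update leaves the parameter in place; with a single task it moves part of the way toward that task's optimum.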
