no code implementations • WMT (EMNLP) 2020 • Jiwan Kim, Soyoon Park, Sangha Kim, Yoonjung Choi
This paper describes our submission to the WMT20 news translation shared task in English to Japanese direction.
no code implementations • NAACL 2022 • Sathish Reddy Indurthi, Mohd Abbas Zaidi, Beomseok Lee, Nikhil Kumar Lakumarapu, Sangha Kim
The state-of-the-art adaptive policies for Simultaneous Neural Machine Translation (SNMT) use monotonic attention to make read/write decisions based on the partial source and target sequences.
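As a rough illustration of that idea, the sketch below shows how a monotonic-attention-style probability could be turned into a read/write decision from the decoder state and the source prefix read so far. The elementwise energy, the threshold, and the variable names are assumptions for illustration, not the policy used in the paper.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def read_write_decision(dec_state, enc_states, w, threshold=0.5):
    """Toy monotonic-attention-style policy: compare the decoder state with the
    most recently read source state and decide READ vs WRITE.

    dec_state:  (d,)   current decoder hidden state
    enc_states: (j, d) encoder states for the source prefix read so far
    w:          (d,)   illustrative energy projection (stands in for a learned layer)
    """
    if enc_states.shape[0] == 0:
        return "READ"                          # nothing read yet, must read
    energy = float(np.dot(w, dec_state * enc_states[-1]))
    p_choose = sigmoid(energy)                 # probability of attending here and writing
    return "WRITE" if p_choose >= threshold else "READ"

# toy usage
rng = np.random.default_rng(0)
d = 8
print(read_write_decision(rng.normal(size=d), rng.normal(size=(3, d)), rng.normal(size=d)))
```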
no code implementations • 6 May 2023 • Fan Zhang, Mei Tu, Sangha Kim, Song Liu, Jinyao Yan
Our model is composed of three parts: a backbone model, a domain discriminator responsible for distinguishing data from different domains, and a set of experts that transfer the decoded features from generic to domain-specific.
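A minimal PyTorch-style sketch of that three-part layout follows. The layer sizes, the softmax gate over experts, and the detached discriminator input are assumptions made for illustration; the paper's actual modules and training losses are not reproduced here.

```python
import torch
import torch.nn as nn

class DomainAdaptedModel(nn.Module):
    """Toy three-part layout: shared backbone, domain discriminator, and a set
    of experts mapping generic decoded features to domain-specific ones."""

    def __init__(self, d_model=256, n_domains=3, n_experts=3):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(d_model, d_model), nn.ReLU())
        self.discriminator = nn.Linear(d_model, n_domains)       # predicts the domain of a feature
        self.experts = nn.ModuleList(nn.Linear(d_model, d_model) for _ in range(n_experts))
        self.gate = nn.Linear(d_model, n_experts)                 # soft expert selection

    def forward(self, x):
        h = self.backbone(x)                                      # generic features
        domain_logits = self.discriminator(h.detach())            # discriminator sees generic features
        weights = torch.softmax(self.gate(h), dim=-1)             # (batch, n_experts)
        expert_out = torch.stack([e(h) for e in self.experts], dim=-1)
        specific = (expert_out * weights.unsqueeze(1)).sum(-1)    # domain-specific features
        return specific, domain_logits

# toy usage
model = DomainAdaptedModel()
y, dom = model(torch.randn(4, 256))
print(y.shape, dom.shape)
```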
no code implementations • WMT (EMNLP) 2021 • Hyojung Han, Seokchan Ahn, Yoonjung Choi, Insoo Chung, Sangha Kim, Kyunghyun Cho
Recent work in simultaneous machine translation is often trained with conventional full-sentence translation corpora, leading either to excessive latency or to the need to anticipate as-yet-unarrived words when dealing with a language pair whose word orders differ significantly.
no code implementations • 13 Oct 2021 • Mohd Abbas Zaidi, Beomseok Lee, Sangha Kim, Chanwoo Kim
Hence, the read/write decision policy remains the same across different input modalities, i.e., speech and text.
1 code implementation • 7 Sep 2021 • Mohd Abbas Zaidi, Sathish Indurthi, Beomseok Lee, Nikhil Kumar Lakumarapu, Sangha Kim
Simultaneous neural machine translation (SNMT) models start emitting the target sequence before they have processed the entire source sequence.
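To make the read/write alternation concrete, here is a schematic wait-k-style decoding loop, one common fixed policy for SNMT. It is a sketch under assumptions: `translate_prefix` is a hypothetical incremental decoder, and the paper's own (possibly adaptive) policy is not shown.

```python
def simultaneous_decode(source_stream, translate_prefix, k=3, max_len=50):
    """Schematic wait-k loop: read k source tokens, then alternate one WRITE
    per READ until the source is exhausted.

    source_stream:    iterator over incoming source tokens
    translate_prefix: hypothetical function (src_prefix, tgt_prefix) -> next target token or None
    """
    src, tgt = [], []
    stream = iter(source_stream)
    source_done = False
    while len(tgt) < max_len:
        # READ while still inside the initial wait-k window and source remains.
        if not source_done and len(src) < len(tgt) + k:
            tok = next(stream, None)
            if tok is None:
                source_done = True
            else:
                src.append(tok)
                continue
        # WRITE one target token conditioned on the current source prefix.
        nxt = translate_prefix(src, tgt)
        if nxt is None:                        # model emitted end-of-sentence
            break
        tgt.append(nxt)
    return tgt

# toy usage with a copy "model"
copy_model = lambda s, t: s[len(t)] if len(t) < len(s) else None
print(simultaneous_decode("hello world this is a test".split(), copy_model, k=2))
```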
no code implementations • ICASSP 2021 • Sathish Indurthi, Mohd Abbas Zaidi, Nikhil Kumar Lakumarapu, Beomseok Lee, Hyojung Han, Seokchan Ahn, Sangha Kim, Chanwoo Kim, Inchul Hwang
In general, direct speech-to-text translation (ST) is jointly trained with Automatic Speech Recognition (ASR) and Machine Translation (MT) tasks (a joint-training sketch follows this entry).
Ranked #1 on Speech-to-Text Translation on MuST-C EN->DE (using extra training data)
Automatic Speech Recognition (ASR) +5
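The sketch below shows one simple way such joint training can be organized: a single set of shared parameters accumulates gradients from ST, ASR, and MT batches in one update. The round-robin scheme, uniform loss weights, and the `(task, batch)` interface are illustrative assumptions, not the paper's training recipe.

```python
import torch

def joint_train_step(model, batches, optimizer, weights=None):
    """One joint update over ST, ASR, and MT batches sharing the same parameters.

    model:   callable returning a scalar loss for (task_name, batch)
    batches: dict like {"st": ..., "asr": ..., "mt": ...}
    weights: optional per-task loss weights (uniform by default)
    """
    weights = weights or {task: 1.0 for task in batches}
    optimizer.zero_grad()
    total = 0.0
    for task, batch in batches.items():
        loss = model(task, batch)              # each task contributes its own loss
        (weights[task] * loss).backward()      # gradients accumulate in shared parameters
        total += float(loss)
    optimizer.step()
    return total

# toy usage: a shared linear layer pretending to handle all three tasks
net = torch.nn.Linear(8, 8)
opt = torch.optim.SGD(net.parameters(), lr=0.1)
fake = lambda task, x: net(x).pow(2).mean()
print(joint_train_step(fake, {"st": torch.randn(2, 8), "asr": torch.randn(2, 8), "mt": torch.randn(2, 8)}, opt))
```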
no code implementations • 1 Jan 2021 • Sathish Reddy Indurthi, Mohd Abbas Zaidi, Nikhil Kumar Lakumarapu, Beomseok Lee, Hyojung Han, Sangha Kim, Inchul Hwang
Inspired by these learning patterns in humans, we suggest a simple yet generic task-aware framework that can be incorporated into existing joint learning strategies (an illustrative sketch follows this entry).
Automatic Speech Recognition (ASR) +4
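The snippet above does not spell out the mechanism, so the following is only one plausible illustration of "task awareness" in joint learning: a shared encoder conditioned on a learned task embedding. This is an assumption for illustration and should not be read as the paper's method.

```python
import torch
import torch.nn as nn

class TaskAwareEncoder(nn.Module):
    """Illustrative only: a shared encoder whose input is augmented with a
    learned task embedding (ASR / MT / ST), letting the same parameters
    specialise per task during joint training."""

    def __init__(self, d_model=256, tasks=("asr", "mt", "st")):
        super().__init__()
        self.task_ids = {t: i for i, t in enumerate(tasks)}
        self.task_emb = nn.Embedding(len(tasks), d_model)
        self.encoder = nn.GRU(d_model, d_model, batch_first=True)

    def forward(self, x, task):
        # x: (batch, time, d_model); prepend the task embedding as a pseudo-token
        tid = torch.full((x.size(0), 1), self.task_ids[task], dtype=torch.long)
        out, _ = self.encoder(torch.cat([self.task_emb(tid), x], dim=1))
        return out

# toy usage
enc = TaskAwareEncoder()
print(enc(torch.randn(2, 5, 256), "asr").shape)   # (2, 6, 256)
```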
no code implementations • 29 Dec 2020 • Hyojung Han, Sathish Indurthi, Mohd Abbas Zaidi, Nikhil Kumar Lakumarapu, Beomseok Lee, Sangha Kim, Chanwoo Kim, Inchul Hwang
The current re-translation approaches are based on autoregressive sequence generation models (ReTA), which generate target tokens in the (partial) translation sequentially.
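For orientation, a schematic re-translation loop is sketched below: every time a new source token arrives, the whole hypothesis for the current source prefix is regenerated, so earlier output may be revised. `translate` is a hypothetical full-sentence decoder, not the paper's model.

```python
def retranslate_stream(source_tokens, translate):
    """Schematic re-translation: after each incoming source token, regenerate
    the entire target hypothesis for the current source prefix.

    translate: hypothetical function mapping a source prefix to a full target
               hypothesis (autoregressive decoding under the hood).
    """
    hypotheses = []
    prefix = []
    for tok in source_tokens:
        prefix.append(tok)
        hypotheses.append(translate(prefix))   # earlier output may be revised
    return hypotheses

# toy usage with an uppercasing "model"
print(retranslate_stream("guten morgen welt".split(), lambda p: [w.upper() for w in p]))
```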
no code implementations • Findings of the Association for Computational Linguistics 2020 • Insoo Chung, Byeongwook Kim, Yoonjung Choi, Se Jung Kwon, Yongkweon Jeon, Baeseong Park, Sangha Kim, Dongsoo Lee
Our analysis shows that, for a given number of quantization bits, each block of the Transformer contributes to translation quality and inference computation in a different manner.
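A minimal sketch of the underlying idea follows: uniform weight quantization where each named block can receive its own bit width, reflecting the observation that blocks differ in sensitivity. The block names and the 8-bit/4-bit assignment are illustrative assumptions, not the paper's searched configuration.

```python
import numpy as np

def quantize_uniform(w, bits):
    """Symmetric uniform quantization of a weight tensor to `bits` bits."""
    qmax = 2 ** (bits - 1) - 1
    scale = np.abs(w).max() / qmax if w.size else 1.0
    q = np.clip(np.round(w / scale), -qmax - 1, qmax)
    return q * scale                                     # dequantized weights

def quantize_per_block(blocks, bit_plan):
    """Apply a per-block bit width, e.g. fewer bits where quality is less sensitive.

    blocks:   {block_name: weight array}
    bit_plan: {block_name: bits} (illustrative assignment)
    """
    return {name: quantize_uniform(w, bit_plan[name]) for name, w in blocks.items()}

# toy usage: reconstruction error grows as the bit width shrinks
rng = np.random.default_rng(0)
blocks = {"encoder.self_attn": rng.normal(size=(4, 4)), "decoder.ffn": rng.normal(size=(4, 4))}
out = quantize_per_block(blocks, {"encoder.self_attn": 8, "decoder.ffn": 4})
print({k: float(np.abs(v - blocks[k]).max()) for k, v in out.items()})
```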
no code implementations • WS 2020 • Hou Jeung Han, Mohd Abbas Zaidi, Sathish Reddy Indurthi, Nikhil Kumar Lakumarapu, Beomseok Lee, Sangha Kim
In this paper, we describe end-to-end simultaneous speech-to-text and text-to-text translation systems submitted to IWSLT2020 online translation challenge.
no code implementations • WS 2020 • Nikhil Kumar Lakumarapu, Beomseok Lee, Sathish Reddy Indurthi, Hou Jeung Han, Mohd Abbas Zaidi, Sangha Kim
In this paper, we describe the system submitted to the IWSLT 2020 Offline Speech Translation Task.
Ranked #3 on Speech-to-Text Translation on MuST-C EN->DE (using extra training data)
Automatic Speech Recognition (ASR) +6
no code implementations • 11 Nov 2019 • Sathish Indurthi, Houjeung Han, Nikhil Kumar Lakumarapu, Beomseok Lee, Insoo Chung, Sangha Kim, Chanwoo Kim
In the meta-learning phase, the parameters of the model are exposed to vast amounts of speech transcripts (e.g., English ASR) and text translations (e.g., English-German MT); a schematic meta-learning step follows this entry.
Automatic Speech Recognition (ASR) +6
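Below is a schematic first-order MAML-style step over such source tasks. It is a sketch under assumptions: the inner/outer learning rates, the reuse of the same batch for adaptation and the outer gradient, and the `loss_fn(model, task, batch)` interface are all illustrative, not the paper's exact meta-learning procedure.

```python
import copy
import torch

def fomaml_step(model, task_batches, loss_fn, inner_lr=1e-2, outer_lr=1e-3):
    """One first-order MAML-style update: adapt a copy of the model on each
    source task (e.g. English ASR, English-German MT), then move the shared
    parameters using the post-adaptation gradients."""
    meta_grads = [torch.zeros_like(p) for p in model.parameters()]
    for task, batch in task_batches.items():
        clone = copy.deepcopy(model)
        # inner step: adapt the clone on this task
        grads = torch.autograd.grad(loss_fn(clone, task, batch), clone.parameters())
        with torch.no_grad():
            for p, g in zip(clone.parameters(), grads):
                p -= inner_lr * g
        # outer gradient: first-order approximation uses the post-adaptation gradient
        grads = torch.autograd.grad(loss_fn(clone, task, batch), clone.parameters())
        for m, g in zip(meta_grads, grads):
            m += g
    with torch.no_grad():
        for p, m in zip(model.parameters(), meta_grads):
            p -= outer_lr * m / len(task_batches)
    return model

# toy usage
net = torch.nn.Linear(4, 4)
fake_loss = lambda m, task, x: m(x).pow(2).mean()
fomaml_step(net, {"asr": torch.randn(8, 4), "mt": torch.randn(8, 4)}, fake_loss)
```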
no code implementations • ACL 2019 • Sathish Reddy Indurthi, Insoo Chung, Sangha Kim
Soft-attention-based Neural Machine Translation (NMT) models have achieved promising results on several translation tasks.
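For reference, a minimal scaled dot-product soft-attention function of the kind such NMT models build on is sketched below; the dimensions and random inputs are illustrative only.

```python
import numpy as np

def soft_attention(query, keys, values):
    """Scaled dot-product soft attention: every source position receives a
    non-zero weight, in contrast to hard or monotonic read/write policies.

    query:  (d,)     current decoder state
    keys:   (n, d)   encoder states
    values: (n, d)   encoder states (often identical to keys)
    """
    scores = keys @ query / np.sqrt(query.shape[0])
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                    # softmax over source positions
    return weights @ values, weights

# toy usage
rng = np.random.default_rng(0)
ctx, attn = soft_attention(rng.normal(size=8), rng.normal(size=(5, 8)), rng.normal(size=(5, 8)))
print(ctx.shape, attn.round(2))
```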