Search Results for author: Sangha Kim

Found 14 papers, 1 paper with code

Language Model Augmented Monotonic Attention for Simultaneous Translation

no code implementations · NAACL 2022 · Sathish Reddy Indurthi, Mohd Abbas Zaidi, Beomseok Lee, Nikhil Kumar Lakumarapu, Sangha Kim

The state-of-the-art adaptive policies for Simultaneous Neural Machine Translation (SNMT) use monotonic attention to perform read/write decisions based on the partial source and target sequences.

Language Modelling · Machine Translation +2
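
A minimal sketch of the read/write loop that monotonic-attention policies drive, as described in the abstract above. Everything here is a hypothetical interface for illustration (`encode_prefix`, `write_prob`, `decode_step`, and the `source_stream` object are placeholders, not the paper's API):

```python
def simultaneous_decode(source_stream, model, max_len=200, threshold=0.5):
    # Hypothetical sketch of an adaptive read/write policy for simultaneous
    # translation.  The policy decides, at each step, whether to READ one
    # more source token or WRITE the next target token, conditioned on the
    # partial source and target sequences.
    source_prefix = []   # source tokens read so far
    target = []          # target tokens emitted so far
    for _ in range(max_len):
        states = model.encode_prefix(source_prefix)
        # Monotonic attention yields a probability of "writing" now.
        p_write = model.write_prob(states, target)
        if p_write < threshold and source_stream.has_next():
            source_prefix.append(source_stream.next())   # READ action
        else:
            token = model.decode_step(states, target)    # WRITE action
            if token == "<eos>":
                break
            target.append(token)
    return target
```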

Label-Free Multi-Domain Machine Translation with Stage-wise Training

no code implementations · 6 May 2023 · Fan Zhang, Mei Tu, Sangha Kim, Song Liu, Jinyao Yan

Our model is composed of three parts: a backbone model, a domain discriminator responsible for distinguishing data from different domains, and a set of experts that transfer the decoded features from generic to specific.

Machine Translation · Translation
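
A rough PyTorch-style sketch of the three-part architecture the abstract names (backbone, domain discriminator, per-domain experts). Module shapes, the routing rule, and all names are assumptions for illustration, not the paper's implementation:

```python
import torch
import torch.nn as nn

class MultiDomainMT(nn.Module):
    # Hypothetical sketch: a shared backbone produces generic decoded
    # features, a discriminator predicts the domain, and one expert per
    # domain transfers features from generic to domain-specific.
    def __init__(self, backbone, feat_dim=512, num_domains=4):
        super().__init__()
        self.backbone = backbone                     # generic encoder-decoder
        self.discriminator = nn.Sequential(          # predicts the data's domain
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, num_domains),
        )
        self.experts = nn.ModuleList(                # generic -> specific transfer
            [nn.Linear(feat_dim, feat_dim) for _ in range(num_domains)]
        )

    def forward(self, src, tgt):
        feats = self.backbone(src, tgt)              # (batch, seq, feat_dim)
        domain_logits = self.discriminator(feats.mean(dim=1))
        domains = domain_logits.argmax(dim=-1)
        # Route each example's decoded features through its domain's expert.
        out = torch.stack([self.experts[d](f) for f, d in zip(feats, domains)])
        return out, domain_logits
```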

Monotonic Simultaneous Translation with Chunk-wise Reordering and Refinement

no code implementations · WMT (EMNLP) 2021 · Hyojung Han, Seokchan Ahn, Yoonjung Choi, Insoo Chung, Sangha Kim, Kyunghyun Cho

Recent work in simultaneous machine translation is often trained with conventional full-sentence translation corpora, leading either to excessive latency or to the necessity of anticipating as-yet-unarrived words when dealing with a language pair whose word orders differ significantly.

Machine Translation · Sentence +2

Infusing Future Information into Monotonic Attention Through Language Models

1 code implementation · 7 Sep 2021 · Mohd Abbas Zaidi, Sathish Indurthi, Beomseok Lee, Nikhil Kumar Lakumarapu, Sangha Kim

Simultaneous neural machine translation (SNMT) models start emitting the target sequence before they have processed the entire source sequence.

Language Modelling · Machine Translation +2
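
One common way to inject language-model information into decoding is shallow fusion, sketched below as a generic recipe; it is not necessarily the exact mechanism this paper uses to infuse future information into monotonic attention:

```python
import torch

def fused_next_token(mt_logits, lm_logits, lm_weight=0.3):
    # Shallow fusion: mix translation-model and language-model
    # log-probabilities when picking the next target token.  Generic
    # technique for illustration, not the paper's specific method.
    mt_logp = torch.log_softmax(mt_logits, dim=-1)
    lm_logp = torch.log_softmax(lm_logits, dim=-1)
    return (mt_logp + lm_weight * lm_logp).argmax(dim=-1)
```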

Faster Re-translation Using Non-Autoregressive Model For Simultaneous Neural Machine Translation

no code implementations · 29 Dec 2020 · Hyojung Han, Sathish Indurthi, Mohd Abbas Zaidi, Nikhil Kumar Lakumarapu, Beomseok Lee, Sangha Kim, Chanwoo Kim, Inchul Hwang

Current re-translation approaches are based on autoregressive sequence generation models (ReTA), which generate target tokens in the (partial) translation sequentially.

Machine Translation · TAR +1
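
A toy sketch of the re-translation strategy the abstract refers to: each newly arrived source token triggers a full re-decode of the target from the current source prefix. `translate` is a placeholder for any sequence-to-sequence model:

```python
def retranslation_stream(source_tokens, translate):
    # On every new source token, re-decode the full target from scratch.
    # With an autoregressive model (ReTA) each re-decode emits tokens one
    # by one, whereas a non-autoregressive model can emit the whole draft
    # in parallel -- the speedup this paper targets.
    hypothesis = []
    for i in range(1, len(source_tokens) + 1):
        hypothesis = translate(source_tokens[:i])   # full re-translation
        yield hypothesis                            # latest revised output
```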

Data Efficient Direct Speech-to-Text Translation with Modality Agnostic Meta-Learning

no code implementations · 11 Nov 2019 · Sathish Indurthi, Houjeung Han, Nikhil Kumar Lakumarapu, Beomseok Lee, Insoo Chung, Sangha Kim, Chanwoo Kim

In the meta-learning phase, the parameters of the model are exposed to vast amounts of speech transcripts (e.g., English ASR) and text translations (e.g., English-German MT).

Automatic Speech Recognition · Automatic Speech Recognition (ASR) +6
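
A first-order MAML-style sketch of a meta-learning phase over a mix of ASR and MT tasks, in the spirit of the abstract above. The `tasks` interface (`.sample_batch()`, `.loss()`) is a hypothetical placeholder, and the paper's actual procedure may differ:

```python
import copy
import random
import torch

def meta_train_step(model, tasks, inner_lr=1e-3, meta_lr=1e-4, inner_steps=1):
    # First-order MAML-style step over a mix of tasks (e.g., an ASR task
    # and an MT task), sketching a "modality agnostic" meta-learning phase.
    # `tasks` items are assumed to expose .sample_batch() and
    # .loss(model, batch) -- hypothetical interfaces for illustration.
    meta_grads = [torch.zeros_like(p) for p in model.parameters()]
    for task in random.sample(tasks, k=min(2, len(tasks))):
        fast = copy.deepcopy(model)                  # task-specific clone
        opt = torch.optim.SGD(fast.parameters(), lr=inner_lr)
        for _ in range(inner_steps):                 # inner adaptation
            loss = task.loss(fast, task.sample_batch())
            opt.zero_grad()
            loss.backward()
            opt.step()
        fast.zero_grad()                             # clear inner-loop grads
        meta_loss = task.loss(fast, task.sample_batch())
        meta_loss.backward()                         # grads at adapted params
        for g, p in zip(meta_grads, fast.parameters()):
            if p.grad is not None:
                g += p.grad                          # first-order approximation
    with torch.no_grad():                            # apply the meta update
        for p, g in zip(model.parameters(), meta_grads):
            p -= meta_lr * g
```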

Look Harder: A Neural Machine Translation Model with Hard Attention

no code implementations · ACL 2019 · Sathish Reddy Indurthi, Insoo Chung, Sangha Kim

Soft-attention based Neural Machine Translation (NMT) models have achieved promising results on several translation tasks.

Hard Attention · Machine Translation +3
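
A minimal contrast between the soft attention mentioned in the abstract and hard attention in general; this shows only the generic distinction, not the specific model proposed in the paper:

```python
import torch

def soft_attention(query, keys, values):
    # Soft attention: a weighted average over all source positions.
    weights = torch.softmax(keys @ query, dim=-1)   # (src_len,)
    return weights @ values

def hard_attention(query, keys, values):
    # Hard attention: commit to a single source position instead of
    # averaging over all of them.  Generic contrast only; the paper's
    # model differs in detail.
    idx = (keys @ query).argmax()
    return values[idx]
```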
