Search Results for author: Yogesh Virkar

Found 9 papers, 2 papers with code

Findings of the IWSLT 2022 Evaluation Campaign

no code implementations IWSLT (ACL) 2022 Antonios Anastasopoulos, Loïc Barrault, Luisa Bentivogli, Marcely Zanon Boito, Ondřej Bojar, Roldano Cattoni, Anna Currey, Georgiana Dinu, Kevin Duh, Maha Elbayad, Clara Emmanuel, Yannick Estève, Marcello Federico, Christian Federmann, Souhir Gahbiche, Hongyu Gong, Roman Grundkiewicz, Barry Haddow, Benjamin Hsu, Dávid Javorský, Vĕra Kloudová, Surafel Lakew, Xutai Ma, Prashant Mathur, Paul McNamee, Kenton Murray, Maria Nǎdejde, Satoshi Nakamura, Matteo Negri, Jan Niehues, Xing Niu, John Ortega, Juan Pino, Elizabeth Salesky, Jiatong Shi, Matthias Sperber, Sebastian Stüker, Katsuhito Sudoh, Marco Turchi, Yogesh Virkar, Alexander Waibel, Changhan Wang, Shinji Watanabe

The evaluation campaign of the 19th International Conference on Spoken Language Translation featured eight shared tasks: (i) Simultaneous speech translation, (ii) Offline speech translation, (iii) Speech to speech translation, (iv) Low-resource speech translation, (v) Multilingual speech translation, (vi) Dialect speech translation, (vii) Formality control for speech translation, (viii) Isometric speech translation.

Speech-to-Speech Translation · Translation

Speaker Diarization of Scripted Audiovisual Content

no code implementations 4 Aug 2023 Yogesh Virkar, Brian Thompson, Rohit Paturi, Sundararajan Srinivasan, Marcello Federico

The media localization industry usually requires a verbatim script of the final film or TV production in order to create subtitles or dubbing scripts in a foreign language.

speaker-diarization · Speaker Diarization +2

Improving Isochronous Machine Translation with Target Factors and Auxiliary Counters

no code implementations 22 May 2023 Proyag Pal, Brian Thompson, Yogesh Virkar, Prashant Mathur, Alexandra Chronopoulou, Marcello Federico

To translate speech for automatic dubbing, machine translation needs to be isochronous, i.e., translated speech needs to be aligned with the source in terms of speech durations.

Machine Translation · Translation

Dubbing in Practice: A Large Scale Study of Human Localization With Insights for Automatic Dubbing

1 code implementation 23 Dec 2022 William Brannon, Yogesh Virkar, Brian Thompson

We investigate how humans perform the task of dubbing video content from one language into another, leveraging a novel corpus of 319.57 hours of video from 54 professionally produced titles.

Translation

Prosodic Alignment for off-screen automatic dubbing

no code implementations 6 Apr 2022 Yogesh Virkar, Marcello Federico, Robert Enyedi, Roberto Barra-Chicote

The goal of automatic dubbing is to perform speech-to-speech translation while achieving audiovisual coherence.

Speech-to-Speech Translation · Translation

Isochrony-Aware Neural Machine Translation for Automatic Dubbing

no code implementations 16 Dec 2021 Derek Tam, Surafel M. Lakew, Yogesh Virkar, Prashant Mathur, Marcello Federico

We introduce the task of isochrony-aware machine translation which aims at generating translations suitable for dubbing.

Machine Translation · Sentence +1

Isometric MT: Neural Machine Translation for Automatic Dubbing

no code implementations 16 Dec 2021 Surafel M. Lakew, Yogesh Virkar, Prashant Mathur, Marcello Federico

Automatic dubbing (AD) is among the machine translation (MT) use cases where translations should match a given length to allow for synchronicity between source and target speech.

Machine Translation · Re-Ranking +2
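The length-matching constraint described in the Isometric MT entry above can be made concrete with a minimal illustrative sketch in Python. This is not code from the paper: the function name, the character-based length measure, and the 10% tolerance are assumptions chosen purely for illustration of what "matching a given length" can mean in practice.

def is_isometric(source: str, translation: str, tolerance: float = 0.10) -> bool:
    # Illustrative length-compliance check (assumed metric, not the paper's):
    # the translation "matches" the source if its character count deviates
    # from the source's by at most the given relative tolerance.
    if len(source) == 0:
        return len(translation) == 0
    length_ratio = len(translation) / len(source)
    return abs(length_ratio - 1.0) <= tolerance

# Example usage with a 10% tolerance on character length.
src = "The weather is nice today."
print(is_isometric(src, "Il fait beau aujourd'hui."))                        # True: within 10% of the source length
print(is_isometric(src, "Aujourd'hui, il fait vraiment très beau dehors."))  # False: much longer than the source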

Machine Translation Verbosity Control for Automatic Dubbing

no code implementations 8 Oct 2021 Surafel M. Lakew, Marcello Federico, Yue Wang, Cuong Hoang, Yogesh Virkar, Roberto Barra-Chicote, Robert Enyedi

Automatic dubbing aims at seamlessly replacing the speech in a video document with synthetic speech in a different language.

Machine Translation · Translation
