This paper presents our submissions to the IWSLT 2022 Isometric Spoken Language Translation task.
The paper presents HW-TSC's pipeline and results for the IWSLT 2022 Offline Speech-to-Speech Translation task.
The cascade system is composed of a chunking-based streaming ASR model and the SimulMT model used in the T2T track.
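As a hedged illustration of such a cascade (dummy stand-ins, not the authors' released code), the sketch below splits raw audio into fixed-size chunks, transcribes each chunk incrementally, and feeds the growing source prefix to a wait-k-style simultaneous MT step; the chunk size, the wait-k policy, and all function bodies are assumptions for illustration only.

from typing import Iterator, List

CHUNK_MS = 500  # assumed chunk size; the abstract does not specify one


def chunk_audio(audio: bytes, chunk_ms: int = CHUNK_MS) -> Iterator[bytes]:
    """Split a raw 16 kHz, 16-bit mono buffer into fixed-duration chunks."""
    step = int(16_000 * 2 * chunk_ms / 1000)
    for start in range(0, len(audio), step):
        yield audio[start:start + step]


def streaming_asr(chunk: bytes, state: dict) -> str:
    """Dummy placeholder: a real system runs a chunking-based streaming ASR model here."""
    state["n"] = state.get("n", 0) + 1
    return f"src{state['n']}"


def simul_mt(prefix: List[str], state: dict) -> List[str]:
    """Dummy wait-k-style placeholder: emit one target token per source token once k source tokens have been read."""
    k, emitted, out = 3, state.get("emitted", 0), []
    while len(prefix) - emitted >= k:
        emitted += 1
        out.append(f"tgt{emitted}")
    state["emitted"] = emitted
    return out


def cascade(audio: bytes) -> List[str]:
    asr_state, mt_state, source, target = {}, {}, [], []
    for chunk in chunk_audio(audio):
        source.extend(streaming_asr(chunk, asr_state).split())
        target.extend(simul_mt(source, mt_state))  # zero or more new target tokens
    return target


print(cascade(b"\x00" * 16_000 * 2 * 3))  # 3 s of silence -> dummy token stream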
For the machine translation part, we pretrained three translation models on the WMT21 dataset and fine-tuned them on in-domain corpora.
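A minimal sketch of this pretrain-then-fine-tune recipe follows; the checkpoint name, learning rate, and toy in-domain pair are illustrative assumptions, not the authors' actual setup.

import torch
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

model_name = "Helsinki-NLP/opus-mt-en-de"  # stand-in for a WMT-pretrained model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)  # small LR for fine-tuning

# A tiny toy pair stands in for the real in-domain corpus.
pairs = [("The patient was discharged.", "Der Patient wurde entlassen.")]

model.train()
for src, tgt in pairs:
    batch = tokenizer(src, text_target=tgt, return_tensors="pt")
    loss = model(**batch).loss  # standard cross-entropy on the in-domain pair
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()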
This paper presents the submission of Huawei Translation Service Center (HW-TSC) to the WMT 2021 Triangular MT Shared Task.
This paper presents the submission of Huawei Translation Services Center (HW-TSC) to the WMT 2021 News Translation Shared Task.
This paper presents our work in the WMT 2020 Word and Sentence-Level Post-Editing Quality Estimation (QE) Shared Task.
The paper presents the submission by HW-TSC to the WMT 2020 Automatic Post-Editing Shared Task.
We also conduct experiments with similar-language augmentation, which leads to positive results, although it is not used in our submission.
This paper describes our work in the WAT 2020 Indic Multilingual Translation Task.
We propose a unified multilingual model for humor detection which can be trained under a transfer learning framework.
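A minimal sketch of one common instantiation of such a transfer-learning setup, assuming a shared multilingual encoder (here XLM-R) with a classification head; the checkpoint and example texts are illustrative, not from the paper.

from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=2  # humorous vs. not humorous
)

# One model serves every language: fine-tune on labeled data in high-resource
# languages, then apply (or further fine-tune) on lower-resource ones.
texts = ["Why did the chicken cross the road?", "¿Por qué cruzó el pollo la carretera?"]
batch = tokenizer(texts, padding=True, return_tensors="pt")
logits = model(**batch).logits  # shape (2, 2): one humor score pair per text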
Compared with the commonly used NuQE baseline, BAL-QE achieves performance gains of 47% (En-Ru) and 75% (En-De).
This paper describes the submission of Huawei Translation Service Center (HW-TSC) to the WMT21 Biomedical Translation Task in two language pairs: Chinese↔English and German↔English (our registered team name is HuaweiTSC).
This paper presents the submission of Huawei Translation Services Center (HW-TSC) to the WMT 2021 Efficiency Shared Task.
This paper presents the submission of Huawei Translation Services Center (HW-TSC) to the WMT 2021 Large-Scale Multilingual Translation Task.
To this end, we propose a plug-in algorithm for this line of work, i.e., Aligned Constrained Training (ACT), which alleviates this problem by familiarizing the model with the source-side context of the constraints.
However, in terms of the ultimately achieved system performance for the target speaker(s), the actual benefits of model pre-training are uncertain and unstable, depending heavily on the quantity and text content of the training data.
The inputs to these classifiers are speech transcripts produced by automatic speech recognition (ASR) models.
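A minimal sketch of this transcribe-then-classify cascade, using off-the-shelf HuggingFace pipelines as stand-ins for the actual ASR and classifier models; the model names and the audio path are placeholders.

from transformers import pipeline

# Stand-in models; the paper's ASR system and downstream classifier differ.
asr = pipeline("automatic-speech-recognition", model="openai/whisper-tiny")
clf = pipeline("text-classification", model="distilbert-base-uncased-finetuned-sst-2-english")

transcript = asr("utterance.wav")["text"]  # placeholder audio path
prediction = clf(transcript)               # e.g. [{"label": ..., "score": ...}]
print(transcript, prediction)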
This paper describes our participation in the IWSLT 2021 Offline Speech Translation task.
This paper describes a novel design of a neural network-based speech generation model for learning prosodic representation. The problem of representation learning is formulated according to the information bottleneck (IB) principle.
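In its standard form (after Tishby et al.), the IB objective makes this trade-off explicit; the paper's exact parameterization is not given here, so the generic objective is shown, with X the input speech, Y the prediction target, and Z the learned prosodic representation:

% Generic information bottleneck objective, not the paper's exact parameterization
\min_{p(z \mid x)} \; \mathcal{L}_{\mathrm{IB}} = I(X; Z) - \beta \, I(Z; Y)

Minimizing I(X; Z) compresses the input, while the -\beta I(Z; Y) term rewards retaining information predictive of Y; the multiplier \beta controls the balance between compression and prediction.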
The paper presents details of our system in the IWSLT Video Speech Translation evaluation.