Search Results for author: Ioannis Tsiamas

Found 9 papers, 7 papers with code

Pretrained Speech Encoders and Efficient Fine-tuning Methods for Speech Translation: UPC at IWSLT 2022

1 code implementation · IWSLT (ACL) 2022 · Ioannis Tsiamas, Gerard I. Gállego, Carlos Escolano, José Fonollosa, Marta R. Costa-jussà

We further investigate the suitability of different speech encoders (wav2vec 2.0, HuBERT) for our models and the impact of knowledge distillation from the Machine Translation model that we use for the decoder (mBART).

Knowledge Distillation · Machine Translation +2
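The knowledge-distillation idea mentioned in the abstract can be sketched as word-level distillation: the speech-translation student is trained to match the token distributions of the MT teacher. The following is a minimal illustrative sketch (the temperature value and all names are assumptions, not details from the paper):

```python
import math

def softmax(logits, temperature=1.0):
    """Numerically stable softmax with temperature scaling."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def kd_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) over one token's vocabulary distribution.
    Illustrative sketch of word-level knowledge distillation, not the
    paper's exact training objective."""
    p = softmax(teacher_logits, temperature)  # teacher (MT) distribution
    q = softmax(student_logits, temperature)  # student (ST) distribution
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

When student and teacher logits agree, the loss is zero; any mismatch yields a positive penalty, pushing the student toward the teacher's distribution.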

Pushing the Limits of Zero-shot End-to-End Speech Translation

1 code implementation · 16 Feb 2024 · Ioannis Tsiamas, Gerard I. Gállego, José A. R. Fonollosa, Marta R. Costa-jussà

The speech encoder seamlessly integrates with the MT model at inference, enabling direct translation from speech to text, across all languages supported by the MT model.

Speech-to-Text Translation · Translation
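The zero-shot coupling described above amounts to a simple composition at inference: a speech encoder trained to emit MT-compatible representations is plugged directly into a multilingual MT model's decoder. A hedged sketch of that pipeline (function and method names are illustrative, not the paper's API):

```python
def zero_shot_translate(speech_encoder, mt_model, audio, tgt_lang):
    """Zero-shot end-to-end speech translation, sketched: encode speech
    into MT-like hidden states, then let the MT model decode them into
    any target language it supports."""
    hidden = speech_encoder(audio)              # speech -> MT-compatible states
    return mt_model.generate(hidden, tgt_lang)  # decode with the MT model
```

Because the MT model is untouched, every target language it already supports is available for speech input without any speech-translation training data for that pair.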

Efficient Speech Translation with Dynamic Latent Perceivers

1 code implementation · 28 Oct 2022 · Ioannis Tsiamas, Gerard I. Gállego, José A. R. Fonollosa, Marta R. Costa-jussà

Transformers have been the dominant architecture for Speech Translation in recent years, achieving significant improvements in translation quality.

Speech-to-Text Translation · Translation

SHAS: Approaching optimal Segmentation for End-to-End Speech Translation

2 code implementations · 9 Feb 2022 · Ioannis Tsiamas, Gerard I. Gállego, José A. R. Fonollosa, Marta R. Costa-jussà

Speech translation datasets provide manual segmentations of the audio, which are not available in real-world scenarios, and existing segmentation methods usually reduce translation quality significantly at inference time.

Segmentation · Speech-to-Text Translation +1
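The segmentation problem above can be approached with a divide-and-conquer scheme over per-frame speech probabilities: a classifier scores each frame, and overly long spans are split recursively at the frame the classifier is least confident about. This is a minimal sketch of that general idea, assuming the probabilities are already computed; it is not the paper's exact algorithm:

```python
def split_segments(probs, max_len):
    """Recursively split a span of frames into segments of at most
    max_len frames, cutting each long span at its lowest-probability
    frame (illustrative divide-and-conquer sketch)."""
    def recurse(start, end):
        if end - start <= max_len:
            return [(start, end)]
        # Cut where the classifier is least confident the frame is speech
        cut = min(range(start + 1, end - 1), key=lambda i: probs[i])
        return recurse(start, cut) + recurse(cut, end)
    return recurse(0, len(probs))
```

Cutting at low-confidence frames tends to place boundaries in pauses rather than mid-word, which is why such segmentations lose far less translation quality than fixed-length chunking.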

End-to-End Speech Translation with Pre-trained Models and Adapters: UPC at IWSLT 2021

1 code implementation · ACL (IWSLT) 2021 · Gerard I. Gállego, Ioannis Tsiamas, Carlos Escolano, José A. R. Fonollosa, Marta R. Costa-jussà

Our submission also uses a custom segmentation algorithm that employs pre-trained Wav2Vec 2.0 for identifying periods of untranscribable text and can bring improvements of 2.5 to 3 BLEU score on the IWSLT 2019 test set, as compared to the result with the given segmentation.

Ranked #2 on Speech-to-Text Translation on MuST-C EN->DE (using extra training data)

Segmentation · Speech-to-Text Translation +1
