Search Results for author: Ramon Prieto

Found 2 papers, 2 papers with code

State-of-the-Art Speech Recognition Using Multi-Stream Self-Attention With Dilated 1D Convolutions

1 code implementation • 1 Oct 2019 • Kyu J. Han, Ramon Prieto, Kaixing Wu, Tao Ma

Self-attention has been a huge success for many downstream tasks in NLP, which has led to its exploration for speech problems as well.

Speech Recognition
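The idea named in the title (multi-stream self-attention over dilated 1D convolutions) can be sketched minimally in NumPy. This is an illustrative toy, not the authors' implementation: the function names, the single-head attention, the "same" padding, and the simple averaging of streams are all assumptions.

```python
import numpy as np

def dilated_conv1d(x, w, dilation):
    """Depth-preserving dilated 1D convolution with 'same' padding.
    x: (T, d) input sequence, w: (k, d, d) kernel with odd width k."""
    T, d = x.shape
    k = w.shape[0]
    pad = (k - 1) * dilation // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    out = np.zeros((T, d))
    for t in range(T):
        for i in range(k):
            out[t] += xp[t + i * dilation] @ w[i]
    return out

def self_attention(x):
    """Single-head scaled dot-product self-attention (queries = keys = values = x)."""
    d = x.shape[1]
    scores = x @ x.T / np.sqrt(d)
    scores -= scores.max(axis=1, keepdims=True)  # numerical stability
    a = np.exp(scores)
    a /= a.sum(axis=1, keepdims=True)            # rows sum to 1
    return a @ x

def multi_stream(x, kernels, dilations):
    """One self-attention stream per dilation rate; average the stream outputs."""
    streams = [self_attention(dilated_conv1d(x, w, r))
               for w, r in zip(kernels, dilations)]
    return np.mean(streams, axis=0)

rng = np.random.default_rng(0)
T, d, k = 10, 8, 3
x = rng.standard_normal((T, d))
kernels = [rng.standard_normal((k, d, d)) * 0.1 for _ in range(3)]
y = multi_stream(x, kernels, dilations=[1, 2, 4])
print(y.shape)  # (10, 8)
```

Each stream sees a different temporal receptive field (dilation 1, 2, 4), which is the intuition behind combining multiple resolutions before attention.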

Attention-based sequence-to-sequence model for speech recognition: development of state-of-the-art system on LibriSpeech and its application to non-native English

1 code implementation • 31 Oct 2018 • Yan Yin, Ramon Prieto, Bin Wang, Jianwei Zhou, Yiwei Gu, Yang Liu, Hui Lin

Recent research has shown that attention-based sequence-to-sequence models such as Listen, Attend, and Spell (LAS) yield comparable results to state-of-the-art ASR systems on various tasks.

Speech Recognition
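As a rough illustration of the attention mechanism at the heart of LAS-style sequence-to-sequence models, here is a minimal NumPy sketch of one decoder step attending over encoder states. The dot-product scoring, the shapes, and the function name `attend` are simplifying assumptions, not the paper's actual system.

```python
import numpy as np

def attend(enc_states, dec_state):
    """One content-based attention step: score every encoder frame against the
    current decoder state, softmax-normalize, and return the weighted context.
    enc_states: (T, d) encoder outputs, dec_state: (d,) decoder hidden state."""
    scores = enc_states @ dec_state / np.sqrt(len(dec_state))
    scores -= scores.max()            # numerical stability
    alpha = np.exp(scores)
    alpha /= alpha.sum()              # attention weights over the T frames
    context = alpha @ enc_states      # (d,) summary vector fed to the decoder
    return context, alpha

rng = np.random.default_rng(1)
enc = rng.standard_normal((6, 4))    # 6 encoder frames of dimension 4
dec = rng.standard_normal(4)
context, alpha = attend(enc, dec)
print(context.shape)  # (4,)
```

At each output step the decoder conditions on this context vector, which is how "Attend" links the "Listen" (encoder) and "Spell" (decoder) parts.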
