Search Results for author: Bolaji Yusuf

Found 7 papers, 0 papers with code

End-to-End Open Vocabulary Keyword Search With Multilingual Neural Representations

no code implementations • 15 Aug 2023 • Bolaji Yusuf, Jan Cernocky, Murat Saraclar

Conventional keyword search systems operate on automatic speech recognition (ASR) outputs, which causes them to have a complex indexing and search pipeline.

Automatic Speech Recognition (ASR) +1
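
For context on the "complex indexing and search pipeline" of conventional systems, here is a minimal, hypothetical sketch of keyword search over ASR outputs: an inverted index built from word hypotheses, queried by exact lookup. It only illustrates the baseline setup this paper moves away from; the data layout and confidence handling are assumptions, and real systems typically index lattices or confusion networks rather than 1-best output.

```python
from collections import defaultdict

def build_index(asr_hypotheses):
    """Build an inverted index over ASR word hypotheses.

    asr_hypotheses: {utterance_id: [(word, start_sec, end_sec, confidence), ...]}
    Returns: {word: [(utterance_id, start_sec, end_sec, confidence), ...]}
    """
    index = defaultdict(list)
    for utt_id, hyps in asr_hypotheses.items():
        for word, start, end, conf in hyps:
            index[word].append((utt_id, start, end, conf))
    return index

def search(index, keyword):
    """Return putative occurrences of a single-word keyword."""
    return index.get(keyword.lower(), [])

# Toy usage: a word the ASR never hypothesized can never be retrieved,
# which is one motivation for ASR-free, end-to-end keyword search.
hyps = {"utt1": [("open", 0.0, 0.4, 0.9), ("vocabulary", 0.4, 1.1, 0.8)]}
idx = build_index(hyps)
print(search(idx, "vocabulary"))  # [('utt1', 0.4, 1.1, 0.8)]
print(search(idx, "zeitgeist"))   # [] -- out-of-vocabulary, unrecoverable
```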

On-the-fly Text Retrieval for End-to-End ASR Adaptation

no code implementations • 20 Mar 2023 • Bolaji Yusuf, Aditya Gourav, Ankur Gandhe, Ivan Bulyko

End-to-end speech recognition models are improved by incorporating external text sources, typically by fusion with an external language model.

Language Modelling Question Answering +4
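
As background for "fusion with an external language model", below is a small, self-contained sketch of shallow fusion, the common baseline in which per-token ASR scores are interpolated with external LM scores at decoding time. The weight and toy scores are assumptions for illustration; this is the baseline being improved upon, not the paper's retrieval-based adaptation method.

```python
import math

def fuse_scores(asr_log_probs, lm_log_probs, lm_weight=0.3):
    """Per-token fused score: log P_asr(token | audio, prefix) + lm_weight * log P_lm(token | prefix)."""
    return {
        token: asr_score + lm_weight * lm_log_probs.get(token, math.log(1e-10))
        for token, asr_score in asr_log_probs.items()
    }

# Toy example: the external LM nudges decoding toward the more fluent continuation.
asr = {"there": math.log(0.40), "their": math.log(0.45)}
lm = {"there": math.log(0.70), "their": math.log(0.10)}
best = max(fuse_scores(asr, lm).items(), key=lambda kv: kv[1])[0]
print(best)  # "there" once the LM score is folded in
```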

USTED: Improving ASR with a Unified Speech and Text Encoder-Decoder

no code implementations • 12 Feb 2022 • Bolaji Yusuf, Ankur Gandhe, Alex Sokolov

There has been a recent focus on training E2E ASR models that get the performance benefits of external text data without incurring the extra cost of evaluating an external language model at inference time.

Language Modelling Machine Translation +2

End-to-End Open Vocabulary Keyword Search

no code implementations • 23 Aug 2021 • Bolaji Yusuf, Alican Gok, Batuhan Gundogdu, Murat Saraclar

Recently, neural approaches to spoken content retrieval have become popular.

Retrieval

Unsupervised Word Segmentation from Discrete Speech Units in Low-Resource Settings

no code implementations • SIGUL (LREC) 2022 • Marcely Zanon Boito, Bolaji Yusuf, Lucas Ondel, Aline Villavicencio, Laurent Besacier

Our results suggest that neural models for speech discretization are difficult to exploit in our setting, and that it might be necessary to adapt them to limit sequence length.

A Hierarchical Subspace Model for Language-Attuned Acoustic Unit Discovery

no code implementations • 4 Nov 2020 • Bolaji Yusuf, Lucas Ondel, Lukas Burget, Jan Cernocky, Murat Saraclar

In the target language, we infer both the language and unit embeddings in an unsupervised manner, and in so doing, we simultaneously learn a subspace of units specific to that language and the units that dwell in it.

Acoustic Unit Discovery Clustering
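
The "subspace of units" idea can be illustrated with a deliberately simplified, hypothetical parameterization: a language embedding selects a language-specific basis, and each unit embedding is mapped through that basis to the unit's parameters. The shapes, the linear form, and all names below are assumptions for illustration only; the paper's actual model is a Bayesian HMM-based formulation with unsupervised inference.

```python
import numpy as np

rng = np.random.default_rng(0)
D, K, L = 100, 8, 4   # unit-parameter dim, unit-embedding dim, language-embedding dim

# Hyper-subspace: maps a language embedding to that language's unit subspace (basis).
M = rng.normal(size=(L, D, K))
M0 = rng.normal(size=(D, K))

def language_subspace(lang_emb):
    """Language-specific basis W(l) = M0 + sum_i lang_emb[i] * M[i]."""
    return M0 + np.tensordot(lang_emb, M, axes=1)

def unit_parameters(lang_emb, unit_emb):
    """Generate one acoustic unit's parameters from language and unit embeddings."""
    return language_subspace(lang_emb) @ unit_emb

# In the target language both embeddings would be inferred without supervision, so
# learning them jointly yields the language's subspace and the units within it.
lang_emb = rng.normal(size=L)
unit_emb = rng.normal(size=K)
print(unit_parameters(lang_emb, unit_emb).shape)  # (100,)
```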

Bayesian Subspace HMM for the Zerospeech 2020 Challenge

no code implementations • 19 May 2020 • Bolaji Yusuf, Lucas Ondel

In this paper we describe our submission to the Zerospeech 2020 challenge, where the participants are required to discover latent representations from unannotated speech, and to use those representations to perform speech synthesis, with synthesis quality used as a proxy metric for the unit quality.

Speech Synthesis
