Speech Recognition
1089 papers with code • 316 benchmarks • 87 datasets
Speech Recognition is the task of converting spoken language into text. It involves recognizing the words spoken in an audio recording and transcribing them into a written format. The goal is to accurately transcribe the speech in real-time or from recorded audio, taking into account factors such as accents, speaking speed, and background noise.
(Image credit: SpecAugment)
Libraries
Use these libraries to find Speech Recognition models and implementations.
Latest papers
Teaching a Multilingual Large Language Model to Understand Multilingual Speech via Multi-Instructional Training
Our zero-shot evaluation results confirm the robustness of our approach across multiple tasks, including speech translation and multilingual spoken language understanding, thereby opening new avenues for applying LLMs in the speech domain.
VietMed: A Dataset and Benchmark for Automatic Speech Recognition of Vietnamese in the Medical Domain
VietMed is also by far the largest public Vietnamese speech dataset in terms of total duration.
CMULAB: An Open-Source Framework for Training and Deployment of Natural Language Processing Models
Effectively using Natural Language Processing (NLP) tools in under-resourced languages requires a thorough understanding of the language itself, familiarity with the latest models and training methodologies, and technical expertise to deploy these models.
BRAVEn: Improving Self-Supervised Pre-training for Visual and Auditory Speech Recognition
In this work, we propose BRAVEn, an extension to the recent RAVEn method, which learns speech representations entirely from raw audio-visual data.
Kallaama: A Transcribed Speech Dataset about Agriculture in the Three Most Widely Spoken Languages in Senegal
To build such technologies, we provide textual corpora in Wolof and Pulaar, and a pronunciation lexicon containing 49,132 entries from the Wolof dataset.
FlowerFormer: Empowering Neural Architecture Encoding using a Flow-aware Graph Transformer
The success of a specific neural network architecture is closely tied to the dataset and task it tackles; there is no one-size-fits-all solution.
SpokeN-100: A Cross-Lingual Benchmarking Dataset for The Classification of Spoken Numbers in Different Languages
Benchmarking plays a pivotal role in assessing and enhancing the performance of compact deep learning models designed for execution on resource-constrained devices, such as microcontrollers.
SpeechColab Leaderboard: An Open-Source Platform for Automatic Speech Recognition Evaluation
In this paper we introduce the SpeechColab Leaderboard, a general-purpose, open-source platform designed for ASR evaluation.
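The standard metric reported by ASR evaluation platforms such as this one is word error rate (WER): the word-level edit distance between the reference transcript and the hypothesis, divided by the reference length. As a minimal illustration (not code from the SpeechColab Leaderboard itself), WER can be computed with a classic dynamic-programming edit distance:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            deletion = d[i - 1][j] + 1
            insertion = d[i][j - 1] + 1
            d[i][j] = min(substitution, deletion, insertion)
    return d[len(ref)][len(hyp)] / len(ref)

# Two deleted words out of six reference words -> WER = 2/6
print(wer("the cat sat on the mat", "the cat sat mat"))
```

Production toolkits normalize the text first (casing, punctuation, number formats) before scoring, which can change WER substantially across systems.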
Real-Time Multimodal Cognitive Assistant for Emergency Medical Services
Emergency Medical Services (EMS) responders often operate under time-sensitive conditions, facing cognitive overload and inherent risks, requiring essential skills in critical thinking and rapid decision-making.
A Study of Dropout-Induced Modality Bias on Robustness to Missing Video Frames for Audio-Visual Speech Recognition
In this paper, we investigate this contrasting phenomenon from the perspective of modality bias and reveal that an excessive modality bias on the audio caused by dropout is the underlying reason.