SPGISpeech: 5,000 hours of transcribed financial audio for fully formatted end-to-end speech recognition

In the English speech-to-text (STT) machine learning task, acoustic models are conventionally trained on uncased Latin characters, and any necessary orthography (such as capitalization, punctuation, and denormalization of non-standard words) is imputed by separate post-processing models. This adds complexity and limits performance, as many formatting tasks benefit from semantic information present in the acoustic signal but absent in transcription. Here we propose a new STT task: end-to-end neural transcription with fully formatted text for target labels. We present baseline Conformer-based models trained on a corpus of 5,000 hours of professionally transcribed earnings calls, achieving a CER of 1.7. As a contribution to the STT research community, we release the corpus free for non-commercial use at https://datasets.kensho.com/datasets/scribe.
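For concreteness, below is a minimal sketch of how character error rate (CER) and word error rate (WER) are scored when the reference is fully formatted text, so that casing, punctuation, and denormalized numerals all count toward the error. The helper functions and example sentences are illustrative assumptions for this page, not code or data released with the paper.

```python
# Sketch only: scoring against a fully formatted reference transcript,
# where casing, punctuation, and numerals are part of the target label.
# Example strings below are invented for illustration.

def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences (strings or lists)."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        curr = [i] + [0] * len(hyp)
        for j, h in enumerate(hyp, start=1):
            curr[j] = min(
                prev[j] + 1,             # deletion
                curr[j - 1] + 1,         # insertion
                prev[j - 1] + (r != h),  # substitution (0 if match)
            )
        prev = curr
    return prev[-1]

def cer(reference: str, hypothesis: str) -> float:
    """Character error rate: character-level edit distance / reference length."""
    return edit_distance(reference, hypothesis) / max(len(reference), 1)

def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference word count."""
    ref_words, hyp_words = reference.split(), hypothesis.split()
    return edit_distance(ref_words, hyp_words) / max(len(ref_words), 1)

if __name__ == "__main__":
    # Fully formatted target, as in the proposed end-to-end task.
    reference = "Revenue grew 12% in Q3, to $4.5 million."
    hypothesis = "revenue grew 12% in Q3 to $4.5 million"
    print(f"CER: {cer(reference, hypothesis):.3f}")
    print(f"WER: {wer(reference, hypothesis):.3f}")
```

The point of scoring this way is that a mis-cased character, a dropped comma, or an unexpanded numeral in the formatted target is penalized exactly like any other transcription error, rather than being deferred to a separate post-processing stage.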


Datasets


Introduced in the Paper:

SPGISpeech

Used in the Paper:

LibriSpeech, Common Voice

Results from the Paper


Task:         Speech Recognition
Dataset:      SPGISpeech
Model:        Conformer
Metric:       Word Error Rate (WER)
Metric Value: 5.7
Global Rank:  #3

Methods


No methods listed for this paper.