CrowdSpeech and VoxDIY: Benchmark Datasets for Crowdsourced Audio Transcription

2 Jul 2021 · Nikita Pavlichenko, Ivan Stelmakh, Dmitry Ustalov

Domain-specific data is the crux of successfully transferring machine learning systems from benchmarks to real life. For simple problems such as image classification, crowdsourcing has become one of the standard tools for cheap and time-efficient data collection, thanks in large part to advances in research on aggregation methods. However, the applicability of crowdsourcing to more complex tasks (e.g., speech recognition) remains limited due to the lack of principled aggregation methods for these modalities. The main obstacle to designing aggregation methods for more advanced applications is the absence of training data, and in this work we focus on bridging this gap in speech recognition. To this end, we collect and release CrowdSpeech -- the first publicly available large-scale dataset of crowdsourced audio transcriptions. Evaluating existing and novel aggregation methods on our data shows room for improvement, suggesting that our work may enable the design of better algorithms. At a higher level, we also contribute to the more general challenge of developing a methodology for reliable data collection via crowdsourcing: we design a principled pipeline for constructing datasets of crowdsourced audio transcriptions in any novel domain and demonstrate its applicability on an under-resourced language by constructing VoxDIY -- a counterpart of CrowdSpeech for Russian. Finally, we release the code that allows full replication of our data collection pipeline and share various insights on best practices for data collection via crowdsourcing.
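
As an illustration of the kind of aggregation the benchmark below evaluates, here is a minimal sketch using the open-source crowd-kit library, which provides implementations of ROVER, RASA, and HRRASA. The constructor arguments and the task/worker/text column names follow crowd-kit's documented conventions but should be treated as assumptions, and the toy DataFrame merely stands in for real CrowdSpeech data.

```python
# A minimal sketch of aggregating several crowdsourced transcriptions of the
# same recording with ROVER, via the crowd-kit library (pip install crowd-kit).
# NOTE: the interface and column names are assumed from crowd-kit's
# documentation; the data below is a toy stand-in for CrowdSpeech.
import pandas as pd
from crowdkit.aggregation import ROVER

# Three workers transcribe the same audio fragment.
transcriptions = pd.DataFrame(
    [
        {"task": "rec-1", "worker": "w1", "text": "the quick brown fox"},
        {"task": "rec-1", "worker": "w2", "text": "the quick browne fox"},
        {"task": "rec-1", "worker": "w3", "text": "quick brown fox"},
    ]
)

rover = ROVER(
    tokenizer=lambda s: s.split(),                 # split a transcription into words
    detokenizer=lambda tokens: " ".join(tokens),   # join aligned words back into text
)
aggregated = rover.fit_predict(transcriptions)  # pandas Series: task -> aggregated transcription
print(aggregated["rec-1"])
```

ROVER aligns the candidate transcriptions word by word and takes a vote in each aligned slot, which is consistent with its first-place WER on both test splits in the leaderboard below.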

Datasets

Introduced in the Paper:

CrowdSpeech
VoxDIY
Task                           Dataset                 Model   Word Error Rate (WER)  Global Rank
Crowdsourced Text Aggregation  CrowdSpeech test-clean  ROVER    7.29                  #1
Crowdsourced Text Aggregation  CrowdSpeech test-clean  HRRASA   8.59                  #2
Crowdsourced Text Aggregation  CrowdSpeech test-clean  RASA     8.60                  #3
Crowdsourced Text Aggregation  CrowdSpeech test-other  ROVER   13.41                  #1
Crowdsourced Text Aggregation  CrowdSpeech test-other  HRRASA  15.66                  #2
Crowdsourced Text Aggregation  CrowdSpeech test-other  RASA    15.67                  #3
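
The metric above, Word Error Rate, is the word-level edit distance between the aggregated hypothesis and the ground-truth transcription, normalized by the number of reference words. Below is a self-contained sketch of the computation (in practice a library such as jiwer is typically used):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word Error Rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between the first i reference words
    # and the first j hypothesis words.
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # i deletions
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(substitution, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("the quick brown fox", "the quick browne fox"))  # 0.25: one error in four words
```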

Methods

No methods listed for this paper.