The ASR-GLUE benchmark is a collection of 6 natural language understanding (NLU) tasks for evaluating how well models perform under automatic speech recognition (ASR) errors, across 3 levels of background noise and 6 speakers with varied voice characteristics.
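Since the benchmark's purpose is to compare NLU performance across noise conditions, a minimal sketch of that evaluation might group predictions by noise level and report per-level accuracy. The helper name `accuracy_by_noise` and the example data below are invented for illustration; they are not part of the ASR-GLUE release.

```python
# Hypothetical sketch: summarizing NLU accuracy per ASR noise level.
# ASR-GLUE provides transcripts at three background-noise levels, so a
# robustness evaluation typically reports one score per level.
from collections import defaultdict

def accuracy_by_noise(examples):
    """examples: iterable of (noise_level, gold_label, predicted_label)."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for noise, gold, pred in examples:
        total[noise] += 1
        correct[noise] += int(gold == pred)
    return {noise: correct[noise] / total[noise] for noise in total}

# Invented example predictions for a binary classification task.
examples = [
    ("low", "positive", "positive"),
    ("low", "negative", "negative"),
    ("medium", "positive", "negative"),
    ("medium", "negative", "negative"),
    ("high", "positive", "negative"),
    ("high", "negative", "positive"),
]
print(accuracy_by_noise(examples))
# {'low': 1.0, 'medium': 0.5, 'high': 0.0}
```

A per-noise-level breakdown like this is what makes the benchmark's robustness claims measurable: a model that looks strong on clean transcripts can degrade sharply as ASR errors accumulate.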

License


  • Unknown
