AudioBench: A Universal Benchmark for Audio Large Language Models

23 Jun 2024  ·  Bin Wang, Xunlong Zou, Geyu Lin, Shuo Sun, Zhuohan Liu, Wenyu Zhang, Zhengyuan Liu, AiTi Aw, Nancy F. Chen

We introduce AudioBench, a universal benchmark designed to evaluate Audio Large Language Models (AudioLLMs). It encompasses 8 distinct tasks and 26 datasets, 7 of which are newly proposed. The evaluation targets three main aspects: speech understanding, audio scene understanding, and voice (paralinguistic) understanding. Despite recent advancements, there is no comprehensive benchmark that evaluates AudioLLMs on instruction-following capabilities conditioned on audio signals. AudioBench addresses this gap by providing the datasets together with suitable evaluation metrics. We also evaluate five popular models and find that no single model excels consistently across all tasks. We outline the research outlook for AudioLLMs and anticipate that our open-sourced evaluation toolkit, data, and leaderboard will offer a robust testbed for future model development.
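To make the evaluation setup concrete, the sketch below shows how a benchmark run of this kind might look: load one audio-instruction dataset, run a model over it, and collect predictions for later scoring. The dataset identifier, the `model.generate` wrapper, and the field names (`audio`, `instruction`, `answer`) are illustrative placeholders, not the toolkit's actual API.

```python
# Minimal sketch of an AudioBench-style evaluation loop (hypothetical API).
from datasets import load_dataset  # Hugging Face `datasets` library


def evaluate_task(model, dataset_name: str, split: str = "test") -> list[dict]:
    """Run an AudioLLM over one audio-instruction dataset and collect predictions."""
    data = load_dataset(dataset_name, split=split)  # placeholder dataset id
    results = []
    for sample in data:
        # Each sample is assumed to contain an audio clip plus a text instruction.
        prediction = model.generate(
            audio=sample["audio"],            # raw waveform / audio dict
            prompt=sample["instruction"],     # question or task instruction
        )
        results.append({
            "instruction": sample["instruction"],
            "reference": sample["answer"],
            "prediction": prediction,
        })
    return results
```

The same loop would be repeated per task and dataset, with the resulting prediction lists passed to a task-appropriate scorer.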

| Task | Dataset | Model | Metric Name | Metric Value | Global Rank |
|---|---|---|---|---|---|
| Audio Scene Understanding | Clotho-AQA | SALMONN | M.J. | 51.18 | #1 |
| Audio Scene Understanding | Clotho-AQA |  | M.J. | 50.92 | #2 |
| Audio Scene Understanding | Clotho-AQA | WavLLM | M.J. | 43.01 | #3 |
| Audio Scene Understanding | Clotho-AQA | Whisper+Llama3 | M.J. | 29.47 | #4 |
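The M.J. column reports a model-as-judge score. One plausible reading, sketched below, is that a judge LLM rates each prediction against the reference and the ratings are averaged onto a 0-100 scale; the judge model, prompt wording, and the 0-5 rating scale are assumptions for illustration, not necessarily the exact AudioBench protocol.

```python
# Hypothetical model-as-judge (M.J.) scorer over the results from evaluate_task().
def judge_score(judge_llm, results: list[dict]) -> float:
    """Average judge ratings over all predictions, rescaled to 0-100."""
    ratings = []
    for r in results:
        prompt = (
            "Rate how well the prediction answers the question, "
            "from 0 (completely wrong) to 5 (perfect).\n"
            f"Question: {r['instruction']}\n"
            f"Reference: {r['reference']}\n"
            f"Prediction: {r['prediction']}\n"
            "Reply with a single integer."
        )
        ratings.append(int(judge_llm.generate(prompt).strip()))
    # e.g. a mean rating of ~2.56/5 would map to ~51.2 on the 0-100 scale
    return 100.0 * sum(ratings) / (5 * len(ratings))
```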
