Qwen-Audio: Advancing Universal Audio Understanding via Unified Large-Scale Audio-Language Models
Instruction-following audio-language models have recently received broad attention as a means of audio interaction with humans. However, the absence of pre-trained audio models capable of handling diverse audio types and tasks has hindered progress in this field, so most existing works support only a limited range of interaction capabilities. In this paper, we develop the Qwen-Audio model and address this limitation by scaling up audio-language pre-training to cover over 30 tasks and various audio types, such as human speech, natural sounds, music, and songs, to facilitate universal audio understanding abilities. However, directly co-training all tasks and datasets can lead to interference, since the textual labels of different datasets vary considerably in task focus, language, annotation granularity, and text structure. To overcome this one-to-many interference, we carefully design a multi-task training framework that conditions the decoder on a sequence of hierarchical tags: shared tags encourage knowledge sharing across tasks, while task-specific tags prevent interference. Remarkably, Qwen-Audio achieves impressive performance across diverse benchmark tasks without any task-specific fine-tuning, surpassing its counterparts. Building upon the capabilities of Qwen-Audio, we further develop Qwen-Audio-Chat, which accepts diverse audio and text inputs, enables multi-turn dialogue, and supports various audio-centric scenarios.
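The hierarchical-tag conditioning described above can be sketched as follows. This is a minimal illustration of the idea, not the paper's implementation: the token names (`<|startoftranscript|>`, `<|en|>`, `<|transcribe|>`, `<|notimestamps|>`) and their ordering are assumptions for illustration, not Qwen-Audio's exact special-token vocabulary.

```python
def build_tag_sequence(language: str, task: str, use_timestamps: bool) -> list[str]:
    """Compose a hierarchical tag prefix to condition the decoder on.

    Shared tags (the leading transcript marker) encourage knowledge sharing
    across all tasks and datasets, while the language/task/timestamp tags
    specialize the output format so dissimilar label styles do not interfere.
    All token names here are illustrative assumptions.
    """
    tags = ["<|startoftranscript|>"]  # shared across every task (assumed name)
    tags.append(f"<|{language}|>")    # language tag, e.g. <|en|> or <|zh|>
    tags.append(f"<|{task}|>")        # task tag, e.g. <|transcribe|>, <|caption|>
    tags.append("<|timestamps|>" if use_timestamps else "<|notimestamps|>")
    return tags
```

In this scheme, a decoder trained on many datasets sees the same shared prefix everywhere but a distinct task-specific suffix per dataset, which is how the framework separates knowledge sharing from interference avoidance.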
Results from the Paper
Ranked #1 on Acoustic Scene Classification on TUT Acoustic Scenes 2017 (using extra training data)
| Task | Dataset | Model | Metric Name | Metric Value | Global Rank | Uses Extra Training Data |
|---|---|---|---|---|---|---|
| Speech Recognition | AISHELL-1 | Qwen-Audio | Word Error Rate (WER) | 1.29 | #1 | |
| Speech Recognition | AISHELL-2 Test Android | Qwen-Audio | Word Error Rate (WER) | 3.3 | #1 | |
| Speech Recognition | AISHELL-2 Test IOS | Qwen-Audio | Word Error Rate (WER) | 3.1 | #1 | |
| Speech Recognition | AISHELL-2 Test Mic | Qwen-Audio | Word Error Rate (WER) | 3.3 | #1 | |
| Audio Captioning | Clotho | Qwen-Audio | CIDEr | 0.441 | #4 | |
| Audio Captioning | Clotho | Qwen-Audio | SPIDEr | 0.288 | #4 | |
| Audio Captioning | Clotho | Qwen-Audio | SPICE | 0.136 | #2 | |
| Acoustic Scene Classification | CochlScene | Qwen-Audio | 1:1 Accuracy | 0.795 | #2 | |
| Speech Recognition | LibriSpeech test-clean | Qwen-Audio | Word Error Rate (WER) | 2.0 | #16 | |
| Speech Recognition | LibriSpeech test-other | Qwen-Audio | Word Error Rate (WER) | 4.2 | #18 | |
| Emotion Recognition in Conversation | MELD | Qwen-Audio | Accuracy | 55.70 | #19 | |
| Acoustic Scene Classification | TUT Acoustic Scenes 2017 | Qwen-Audio | 1:1 Accuracy | 0.649 | #1 | ✓ |
| Audio Classification | VocalSound | Qwen-Audio | Accuracy | 92.89 | #2 | |