AudioSet is an audio event dataset consisting of over 2M human-annotated 10-second video clips. The clips are collected from YouTube, so many are of poor quality and contain multiple sound sources. The data are annotated with a hierarchical ontology of 632 event classes, which means the same sound can carry multiple labels. For example, the sound of barking is annotated as Animal, Pets, and Dog. The videos are split into Evaluation/Balanced-Train/Unbalanced-Train sets.
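The multi-label behavior of the hierarchical ontology can be sketched as label propagation from a leaf class up to its ancestors. The parent map below is a hand-made toy fragment for illustration, not the real AudioSet ontology file:

```python
# Hypothetical sketch: propagating a leaf label up a toy slice of an
# AudioSet-style ontology, so one sound carries all ancestor labels.
PARENT = {
    "Bark": "Dog",
    "Dog": "Pets",
    "Pets": "Animal",
    "Animal": None,  # root of this toy fragment
}

def expand_labels(leaf):
    """Return the leaf label plus all of its ancestors, leaf first."""
    labels = []
    node = leaf
    while node is not None:
        labels.append(node)
        node = PARENT[node]
    return labels

print(expand_labels("Bark"))  # -> ['Bark', 'Dog', 'Pets', 'Animal']
```

This is why a single barking clip appears under Animal, Pets, and Dog at once: annotating the leaf implies all of its ancestor classes.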
588 PAPERS • 6 BENCHMARKS
AudioCaps is a dataset of sounds with event descriptions, introduced for the task of audio captioning; its sounds are sourced from the AudioSet dataset. Annotators were provided the audio tracks together with category hints (and with additional video hints if needed).
170 PAPERS • 9 BENCHMARKS
A synthetic sound-mixture specification dataset for the Target Sound Extraction (TSE) task. Each sample consists of a .jams file specifying the mixture components and a metadata file with target labels. Mixtures are 6 seconds long and contain 3-5 unique foreground sounds overlaid on a 6-second background sound. Each sample is provided with 3 target labels, and sounds corresponding to all target labels are guaranteed to be present in the mixture. FSDKaggle2018 is used as the source of foreground sounds and TAU Urban Acoustic Scenes 2019 as the source of background sounds.
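The mixture recipe described above (3-5 foreground events overlaid on a 6-second background) can be sketched as follows. This is a minimal illustration, not the dataset's actual synthesis code: the sample rate is an assumption, and synthetic noise stands in for the FSDKaggle2018 and TAU Urban Acoustic Scenes clips, which are not bundled here.

```python
import numpy as np

SR = 16000          # assumed sample rate; the real dataset's rate may differ
MIX_LEN = 6 * SR    # mixtures are 6 seconds long

rng = np.random.default_rng(0)

def make_mixture(n_foreground):
    """Overlay n_foreground short events on a 6 s background.

    Synthetic noise stands in for real foreground/background clips.
    """
    assert 3 <= n_foreground <= 5, "mixtures contain 3-5 foreground sounds"
    background = 0.05 * rng.standard_normal(MIX_LEN)
    mixture = background.copy()
    for _ in range(n_foreground):
        event = rng.standard_normal(SR)              # 1 s synthetic event
        start = int(rng.integers(0, MIX_LEN - SR))   # random onset in the clip
        mixture[start:start + SR] += event
    return mixture

mix = make_mixture(4)
print(mix.shape)  # (96000,)
```

In the actual dataset, this recipe is stored declaratively in the .jams file rather than rendered to audio, so mixtures can be regenerated from the source corpora on demand.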
1 PAPER • 2 BENCHMARKS