Audio captioning
24 papers with code • 2 benchmarks • 2 datasets
Most implemented papers
Clotho: An Audio Captioning Dataset
Audio captioning is the novel task of describing general audio content using free text.
CL4AC: A Contrastive Loss for Audio Captioning
Automated audio captioning (AAC) is a cross-modal translation task that aims to use natural language to describe the content of an audio clip.
Audio Caption in a Car Setting with a Sentence-Level Loss
Captioning has attracted much attention in image and video understanding, while only a small amount of work has examined audio captioning.
Temporal Sub-sampling of Audio Feature Sequences for Automated Audio Captioning
In this work we present an approach that explicitly exploits the difference in length between the audio feature sequence and the caption, by applying temporal sub-sampling to the audio input sequence.
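As a rough illustration of the idea, the sketch below sub-samples the hidden sequence between two recurrent layers of an audio encoder so that the decoder sees a shorter sequence. The GRU-based architecture, layer names, and decimation factor are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class SubSampledAudioEncoder(nn.Module):
    """Illustrative encoder: temporal sub-sampling between recurrent layers
    shortens the audio feature sequence before caption decoding."""

    def __init__(self, n_mels=64, hidden=256, factor=2):
        super().__init__()
        self.factor = factor  # keep every `factor`-th time step
        self.rnn1 = nn.GRU(n_mels, hidden, batch_first=True, bidirectional=True)
        self.rnn2 = nn.GRU(2 * hidden, hidden, batch_first=True, bidirectional=True)

    def forward(self, x):            # x: (batch, time, n_mels) log-mel features
        h, _ = self.rnn1(x)          # (batch, time, 2*hidden)
        h = h[:, ::self.factor, :]   # temporal sub-sampling: drop intermediate steps
        h, _ = self.rnn2(h)          # (batch, time // factor, 2*hidden)
        return h                     # shorter sequence passed to the caption decoder

# Example: a 1000-frame clip becomes a 500-step encoded sequence.
feats = torch.randn(4, 1000, 64)
print(SubSampledAudioEncoder()(feats).shape)   # torch.Size([4, 500, 512])
```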
Multi-task Regularization Based on Infrequent Classes for Audio Captioning
Audio captioning is a multi-modal task that focuses on using natural language to describe the contents of general audio.
WaveTransformer: A Novel Architecture for Audio Captioning Based on Learning Temporal and Time-Frequency Information
Automated audio captioning (AAC) is a novel task, where a method takes an audio sample as input and outputs a textual description (i.e., a caption) of its contents.
MusCaps: Generating Captions for Music Audio
Content-based music information retrieval has seen rapid progress with the adoption of deep learning.
The SJTU System for DCASE2021 Challenge Task 6: Audio Captioning Based on Encoder Pre-training and Reinforcement Learning
This report proposes an audio captioning system for Task 6 of the Detection and Classification of Acoustic Scenes and Events (DCASE) 2021 Challenge.
Continual Learning for Automated Audio Captioning Using The Learning Without Forgetting Approach
In our scenario, a pre-optimized AAC method is applied to unseen general audio signals and can update its parameters to adapt to the new information, given a new reference caption.
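A minimal sketch of a Learning-without-Forgetting style objective for this setting is shown below: the updated model is fit to the new reference caption while a distillation term keeps its token distributions close to the pre-optimized model's. The function name, loss weighting, and temperature are assumptions for illustration, not the paper's exact formulation.

```python
import torch.nn.functional as F

def lwf_caption_loss(new_logits, old_logits, targets, lambda_old=1.0, T=2.0):
    """Illustrative LwF-style loss: cross-entropy on the new reference caption
    plus distillation toward the frozen, pre-optimized model's predictions."""
    # new_logits, old_logits: (batch, caption_len, vocab); targets: (batch, caption_len)
    ce_new = F.cross_entropy(new_logits.transpose(1, 2), targets)
    distill = F.kl_div(F.log_softmax(new_logits / T, dim=-1),
                       F.softmax(old_logits / T, dim=-1),
                       reduction="batchmean") * (T * T)
    return ce_new + lambda_old * distill
```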
Audio Captioning Transformer
In this paper, we propose an Audio Captioning Transformer (ACT), which is a full Transformer network based on an encoder-decoder architecture and is totally convolution-free.
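To make the convolution-free encoder-decoder idea concrete, the sketch below embeds spectrogram frame groups with a linear projection and decodes caption tokens with a standard Transformer. The patch size, model dimensions, and omission of positional encodings are simplifying assumptions; this is not ACT's exact architecture.

```python
import torch
import torch.nn as nn

class TinyAudioCaptioningTransformer(nn.Module):
    """Minimal convolution-free encoder-decoder: linear patch embedding of the
    spectrogram, then a Transformer that decodes caption tokens
    (positional encodings omitted for brevity)."""

    def __init__(self, n_mels=64, patch_frames=4, d_model=256, vocab=5000):
        super().__init__()
        self.patch_frames = patch_frames
        self.patch_embed = nn.Linear(n_mels * patch_frames, d_model)  # no convolutions
        self.token_embed = nn.Embedding(vocab, d_model)
        self.transformer = nn.Transformer(d_model, nhead=4,
                                          num_encoder_layers=2, num_decoder_layers=2,
                                          batch_first=True)
        self.out = nn.Linear(d_model, vocab)

    def forward(self, spec, tokens):
        # spec: (batch, time, n_mels); group frames into flat patches
        b, t, m = spec.shape
        t = t - t % self.patch_frames
        patches = spec[:, :t].reshape(b, t // self.patch_frames, m * self.patch_frames)
        src = self.patch_embed(patches)
        tgt = self.token_embed(tokens)
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        h = self.transformer(src, tgt, tgt_mask=mask)
        return self.out(h)            # (batch, caption_len, vocab) logits

spec = torch.randn(2, 400, 64)        # 400 log-mel frames
tokens = torch.randint(0, 5000, (2, 12))
print(TinyAudioCaptioningTransformer()(spec, tokens).shape)  # torch.Size([2, 12, 5000])
```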