no code implementations • 12 May 2022 • Xinhao Mei, Xubo Liu, Mark D. Plumbley, Wenwu Wang
In this paper, we present a comprehensive review of the published contributions in automated audio captioning, from a variety of existing approaches to evaluation metrics and datasets.
1 code implementation • 29 Mar 2022 • Xinhao Mei, Xubo Liu, Jianyuan Sun, Mark D. Plumbley, Wenwu Wang
We present an extensive evaluation of popular metric learning objectives on the AudioCaps and Clotho datasets.
1 code implementation • 28 Mar 2022 • Xubo Liu, Haohe Liu, Qiuqiang Kong, Xinhao Mei, Jinzheng Zhao, Qiushi Huang, Mark D. Plumbley, Wenwu Wang
In this paper, we introduce the task of language-queried audio source separation (LASS), which aims to separate a target source from an audio mixture based on a natural language query of the target source (e.g., "a man tells a joke followed by people laughing").
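The core idea of a language-queried separator can be illustrated with a toy sketch: a text-query embedding modulates the mixture spectrogram and a sigmoid produces a soft time-frequency mask for the queried source. This is a minimal FiLM-style conditioning sketch with made-up shapes and random weights, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def lass_mask(mixture_spec, query_emb, W_scale, W_shift):
    """Toy sketch: the text-query embedding produces a per-frequency
    scale and shift that modulate the mixture spectrogram; a sigmoid
    turns the result into a soft T-F mask for the queried source."""
    scale = W_scale @ query_emb                  # (n_freq,)
    shift = W_shift @ query_emb                  # (n_freq,)
    logits = mixture_spec * scale[:, None] + shift[:, None]
    return 1.0 / (1.0 + np.exp(-logits))         # mask values in [0, 1]

n_freq, n_frames, emb_dim = 64, 100, 16
mixture = rng.standard_normal((n_freq, n_frames))
query = rng.standard_normal(emb_dim)             # stand-in for a text embedding
W_scale = 0.1 * rng.standard_normal((n_freq, emb_dim))
W_shift = 0.1 * rng.standard_normal((n_freq, emb_dim))

mask = lass_mask(mixture, query, W_scale, W_shift)
target_est = mask * mixture                      # masked estimate of the target
```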
no code implementations • 7 Mar 2022 • Jianyuan Sun, Xubo Liu, Xinhao Mei, Jinzheng Zhao, Mark D. Plumbley, Volkan Kılıç, Wenwu Wang
In this paper, we propose a novel approach for ASC using deep neural decision forest (DNDF).
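The building block of a neural decision forest is a soft decision tree: each internal node routes the input left with a sigmoid probability, and the prediction is the expectation of the leaf class distributions under the routing probabilities. The sketch below illustrates that idea with a single complete tree and random parameters; it is not the paper's DNDF architecture.

```python
import numpy as np

def soft_tree_predict(x, split_W, split_b, leaf_dists, depth):
    """One soft decision tree: sigmoid routing at each internal node
    (heap-indexed complete binary tree); output is the expectation of
    the leaf class distributions under the leaf path probabilities."""
    d = 1.0 / (1.0 + np.exp(-(split_W @ x + split_b)))  # P(go left) per node
    n_leaves = 2 ** depth
    path_probs = np.zeros(n_leaves)
    for leaf in range(n_leaves):
        node, p = 0, 1.0
        for level in range(depth):
            go_left = ((leaf >> (depth - 1 - level)) & 1) == 0
            p *= d[node] if go_left else 1.0 - d[node]
            node = 2 * node + 1 if go_left else 2 * node + 2
        path_probs[leaf] = p
    return path_probs @ leaf_dists               # (n_classes,) distribution

rng = np.random.default_rng(1)
depth, n_features, n_classes = 3, 8, 4
n_internal, n_leaves = 2 ** depth - 1, 2 ** depth
split_W = rng.standard_normal((n_internal, n_features))
split_b = rng.standard_normal(n_internal)
leaf_dists = rng.dirichlet(np.ones(n_classes), size=n_leaves)  # rows sum to 1

pred = soft_tree_predict(rng.standard_normal(n_features),
                         split_W, split_b, leaf_dists, depth)
```

Because the path probabilities sum to one and each leaf holds a class distribution, the output is itself a valid class distribution, which is what makes the whole tree differentiable and trainable jointly with a deep feature extractor.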
no code implementations • 6 Mar 2022 • Xubo Liu, Xinhao Mei, Qiushi Huang, Jianyuan Sun, Jinzheng Zhao, Haohe Liu, Mark D. Plumbley, Volkan Kılıç, Wenwu Wang
BERT is a pre-trained language model that has been extensively used in Natural Language Processing (NLP) tasks.
no code implementations • 10 Jan 2022 • Feiyang Xiao, Jian Guan, Qiaoxi Zhu, Haiyan Lan, Wenwu Wang
Automated audio captioning (AAC) aims to describe audio data with captions using natural language.
no code implementations • 13 Oct 2021 • Ting Liu, Wenwu Wang, Xiaofei Zhang, Zhenyin Gong, Yina Guo
Single channel blind source separation (SCBSS) refers to the separation of multiple sources from a mixed signal collected by a single sensor.
no code implementations • 13 Oct 2021 • Xinhao Mei, Xubo Liu, Jianyuan Sun, Mark D. Plumbley, Wenwu Wang
As different people may describe an audio clip from different aspects using distinct words and grammars, we argue that an audio captioning system should have the ability to generate diverse captions for a fixed audio clip and across similar audio clips.
no code implementations • 13 Oct 2021 • Yina Guo, Xiaofei Zhang, Zhenying Gong, Anhong Wang, Wenwu Wang
A potential approach to this problem is to design an end-to-end method using a dual generative adversarial network (DualGAN) without dimension reduction of the passed information, but this cannot realize one-to-one signal-to-signal translation (see Fig. 1 (a) and (b)).
2 code implementations • 19 Sep 2021 • Turab Iqbal, Yin Cao, Andrew Bailey, Mark D. Plumbley, Wenwu Wang
We show that the majority of labelling errors in ARCA23K are due to out-of-vocabulary audio clips, and we refer to this type of label noise as open-set label noise.
1 code implementation • 5 Aug 2021 • Xinhao Mei, Qiushi Huang, Xubo Liu, Gengyun Chen, Jingqian Wu, Yusong Wu, Jinzheng Zhao, Shengchen Li, Tom Ko, H Lilian Tang, Xi Shao, Mark D. Plumbley, Wenwu Wang
Automated audio captioning aims to use natural language to describe the content of audio data.
1 code implementation • 21 Jul 2021 • Xinhao Mei, Xubo Liu, Qiushi Huang, Mark D. Plumbley, Wenwu Wang
In this paper, we propose an Audio Captioning Transformer (ACT), which is a full Transformer network based on an encoder-decoder architecture and is totally convolution-free.
Ranked #2 on Audio captioning on AudioCaps
2 code implementations • 21 Jul 2021 • Xubo Liu, Qiushi Huang, Xinhao Mei, Tom Ko, H Lilian Tang, Mark D. Plumbley, Wenwu Wang
Automated audio captioning (AAC) is a cross-modal translation task that aims to use natural language to describe the content of an audio clip.
1 code implementation • 21 Jul 2021 • Xubo Liu, Turab Iqbal, Jinzheng Zhao, Qiushi Huang, Mark D. Plumbley, Wenwu Wang
We evaluate our approach on the UrbanSound8K dataset, compared to SampleRNN, with the performance metrics measuring the quality and diversity of generated sounds.
no code implementations • 31 Mar 2021 • Jian Guan, Wenbo Wang, Pengming Feng, Xinxin Wang, Wenwu Wang
However, the high-dimensional embedding obtained via 1-D convolution and positional encoding can lead to the loss of the signal's own temporal information and introduces a large number of training parameters.
no code implementations • 31 Mar 2021 • Helin Wang, Yuexian Zou, Wenwu Wang
In this paper, we present SpecAugment++, a novel data augmentation method for deep neural network based acoustic scene classification (ASC).
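The family of augmentations SpecAugment++ builds on can be sketched as time-frequency masking: zeroing a random frequency band and a random time span of the spectrogram. This is a generic sketch of the masking idea only; SpecAugment++ itself differs (for instance, it also applies augmentation to hidden states of the network rather than only the input).

```python
import numpy as np

def spec_mask(spec, max_freq_width=8, max_time_width=10, rng=None):
    """SpecAugment-style masking on a (n_freq, n_time) spectrogram:
    zero one random frequency band and one random time span.
    Widths are drawn uniformly and may be zero (no-op)."""
    rng = np.random.default_rng() if rng is None else rng
    out = spec.copy()
    n_freq, n_time = out.shape
    f = int(rng.integers(0, max_freq_width + 1))   # band height
    f0 = int(rng.integers(0, n_freq - f + 1))
    out[f0:f0 + f, :] = 0.0
    t = int(rng.integers(0, max_time_width + 1))   # span width
    t0 = int(rng.integers(0, n_time - t + 1))
    out[:, t0:t0 + t] = 0.0
    return out

rng = np.random.default_rng(0)
spec = rng.random((40, 200)) + 0.1   # strictly positive toy spectrogram
aug = spec_mask(spec, rng=rng)
```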
1 code implementation • 30 Mar 2021 • Feiyang Xiao, Jian Guan, Qiuqiang Kong, Wenwu Wang
Speech enhancement aims to obtain speech signals with high intelligibility and quality from noisy speech.
no code implementations • 9 Feb 2021 • Turab Iqbal, Karim Helwani, Arvindh Krishnaswamy, Wenwu Wang
For tasks such as classification, there is a good case for learning representations of the data that are invariant to such transformations, yet this is not explicitly enforced by classification losses such as the cross-entropy loss.
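The gap the excerpt points at can be made concrete with a minimal invariance penalty: a distance between the embeddings of two augmented views of the same clip, which cross-entropy on labels alone never constrains. This is a generic sketch of such a term, not the paper's exact objective.

```python
import numpy as np

def invariance_penalty(emb_a, emb_b):
    """Mean squared distance between embeddings of two augmented views
    of the same input. Adding this to a classification loss explicitly
    encourages transformation-invariant representations."""
    return float(np.mean((emb_a - emb_b) ** 2))

rng = np.random.default_rng(0)
emb = rng.standard_normal(128)
emb_view = emb + 0.01 * rng.standard_normal(128)  # embedding of an augmented view

loss = invariance_penalty(emb, emb_view)
```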
2 code implementations • 25 Oct 2020 • Yin Cao, Turab Iqbal, Qiuqiang Kong, Fengyan An, Wenwu Wang, Mark D. Plumbley
Polyphonic sound event localization and detection (SELD) jointly performs sound event detection (SED) and direction-of-arrival (DoA) estimation, simultaneously detecting the type and occurrence time of sound events as well as their corresponding DoA angles.
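The joint SED + DoA structure can be sketched as two output heads on shared frame features: a sigmoid event-activity head per class and a regression head for azimuth/elevation per class. The linear heads and shapes below are made up for illustration; they are not the paper's network.

```python
import numpy as np

def seld_heads(frame_feat, W_sed, W_doa):
    """Shared frame features feed two heads: per-class event activity
    (sigmoid, the SED branch) and per-class azimuth/elevation
    regression (the DoA branch). Toy linear heads only."""
    sed = 1.0 / (1.0 + np.exp(-(frame_feat @ W_sed)))  # (n_frames, n_classes)
    doa = frame_feat @ W_doa                           # (n_frames, 2*n_classes)
    return sed, doa

rng = np.random.default_rng(0)
n_frames, n_feat, n_classes = 50, 32, 11
frame_feat = rng.standard_normal((n_frames, n_feat))
W_sed = rng.standard_normal((n_feat, n_classes))
W_doa = rng.standard_normal((n_feat, 2 * n_classes))   # azimuth + elevation

sed, doa = seld_heads(frame_feat, W_sed, W_doa)
active = sed > 0.5     # binarised event activity per frame and class
```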
Sound Audio and Speech Processing
2 code implementations • 30 Sep 2020 • Yin Cao, Turab Iqbal, Qiuqiang Kong, Yue Zhong, Wenwu Wang, Mark D. Plumbley
In this paper, a novel event-independent network for polyphonic sound event localization and detection is proposed.
Sound Audio and Speech Processing
no code implementations • 3 Aug 2020 • Weitao Yuan, Bofei Dong, Shengbei Wang, Masashi Unoki, Wenwu Wang
Monaural Singing Voice Separation (MSVS) is a challenging task and has been studied for decades.
1 code implementation • 11 Feb 2020 • Turab Iqbal, Yin Cao, Qiuqiang Kong, Mark D. Plumbley, Wenwu Wang
The proposed method uses an auxiliary classifier, trained on data that is known to be in-distribution, for detection and relabelling.
no code implementations • 14 Dec 2019 • Helin Wang, Yuexian Zou, Dading Chong, Wenwu Wang
Convolutional neural networks (CNNs) are among the best-performing neural network architectures for environmental sound classification (ESC).
no code implementations • 2 Dec 2019 • Youtian Lin, Pengming Feng, Jian Guan, Wenwu Wang, Jonathon Chambers
First, a novel geometric transformation is employed to better represent the oriented object in angle prediction; then a branch interactive module with a self-attention mechanism is developed to fuse features from the classification and box regression branches.
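One common geometric transformation for angle prediction is to regress (sin, cos) instead of the raw angle, so the target stays continuous across the angular boundary. This is a generic illustration of why such transformations help; the paper's exact transformation may differ.

```python
import numpy as np

def encode_angle(theta):
    """Represent a box orientation as (sin, cos): the regression target
    is continuous even where the raw angle wraps around."""
    return np.array([np.sin(theta), np.cos(theta)])

def decode_angle(v):
    """Recover the angle in (-pi, pi] from its (sin, cos) encoding."""
    return float(np.arctan2(v[0], v[1]))
```

Near the wrap-around point the raw targets pi and -pi are maximally far apart numerically, but their encodings are nearly identical, which stabilises regression.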
1 code implementation • 14 Jun 2019 • Qiuqiang Kong, Yong Xu, Wenwu Wang, Philip J. B. Jackson, Mark D. Plumbley
Single-channel signal separation and deconvolution aims to separate and deconvolve individual sources from a single-channel mixture and is a challenging problem in which no prior knowledge of the mixing filters is available.
1 code implementation • 1 May 2019 • Yin Cao, Qiuqiang Kong, Turab Iqbal, Fengyan An, Wenwu Wang, Mark D. Plumbley
In this paper, it is experimentally shown that the training information of SED is able to contribute to the direction of arrival estimation (DOAE).
Sound Audio and Speech Processing
1 code implementation • 26 Sep 2018 • Viet Hung Tran, Wenwu Wang
We then use a Bayesian method to compute, for the first time, the MAP estimate for the number of sources in the PCA and MUSIC algorithms.
2 code implementations • 12 Apr 2018 • Qiuqiang Kong, Yong Xu, Iwona Sobieraj, Wenwu Wang, Mark D. Plumbley
Sound event detection (SED) aims to detect when and recognize what sound events happen in an audio clip.
Sound Audio and Speech Processing
2 code implementations • 8 Nov 2017 • Qiuqiang Kong, Yong Xu, Wenwu Wang, Mark D. Plumbley
First, we propose a separation mapping from the time-frequency (T-F) representation of an audio clip to the T-F segmentation masks of the audio events.
Sound Audio and Speech Processing
5 code implementations • 2 Nov 2017 • Qiuqiang Kong, Yong Xu, Wenwu Wang, Mark D. Plumbley
Then the classification of a bag is the expectation of the classification output of the instances in the bag with respect to the learned probability measure.
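The expectation described above can be sketched directly: a softmax over instances defines a learned probability measure, and the bag-level output is the expectation of the instance-level outputs under that measure, per class. Shapes and values below are illustrative only.

```python
import numpy as np

def expectation_pooling(instance_probs, attn_logits):
    """Bag-level output as the expectation of instance-level outputs
    under a learned probability measure over instances (softmax
    attention, computed independently per class)."""
    a = np.exp(attn_logits - attn_logits.max(axis=0, keepdims=True))
    measure = a / a.sum(axis=0, keepdims=True)   # each column sums to 1
    return (measure * instance_probs).sum(axis=0)

rng = np.random.default_rng(0)
n_instances, n_classes = 10, 5
instance_probs = rng.random((n_instances, n_classes))  # per-instance outputs
attn_logits = rng.standard_normal((n_instances, n_classes))

bag_probs = expectation_pooling(instance_probs, attn_logits)
```

Because the weights form a probability measure, each bag-level probability is guaranteed to lie between the smallest and largest instance-level probability for that class.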
Sound Audio and Speech Processing
3 code implementations • 1 Oct 2017 • Yong Xu, Qiuqiang Kong, Wenwu Wang, Mark D. Plumbley
In this paper, we present a gated convolutional neural network and a temporal attention-based localization method for audio classification, which won the 1st place in the large-scale weakly supervised sound event detection task of Detection and Classification of Acoustic Scenes and Events (DCASE) 2017 challenge.
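The two ingredients named in the excerpt can be sketched in a few lines: a gated unit where a sigmoid branch multiplies a linear branch, and temporal attention that pools frame-level probabilities into a clip-level one, with the weights indicating where the event occurs. Toy numpy sketch, not the winning system itself.

```python
import numpy as np

def gated_unit(linear_out, gate_out):
    """Gated linear unit used in gated CNNs: one branch is squashed by
    a sigmoid and multiplies the other, letting the network learn which
    time-frequency regions to pass through."""
    return linear_out * (1.0 / (1.0 + np.exp(-gate_out)))

def temporal_attention(frame_probs, attn_logits):
    """Attention-based localization: the clip-level probability is a
    weighted average of frame-level probabilities; the weights localise
    the event in time."""
    w = np.exp(attn_logits - attn_logits.max())
    w = w / w.sum()
    return float(w @ frame_probs), w

rng = np.random.default_rng(0)
frame_probs = rng.random(100)          # per-frame event probabilities
attn_logits = rng.standard_normal(100)
clip_prob, weights = temporal_attention(frame_probs, attn_logits)
```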
Sound Audio and Speech Processing
1 code implementation • 17 Mar 2017 • Yong Xu, Qiuqiang Kong, Qiang Huang, Wenwu Wang, Mark D. Plumbley
Audio tagging aims to perform multi-label classification on audio chunks; it is a newly proposed task in the Detection and Classification of Acoustic Scenes and Events 2016 (DCASE 2016) challenge.
Sound
2 code implementations • 24 Feb 2017 • Yong Xu, Qiuqiang Kong, Qiang Huang, Wenwu Wang, Mark D. Plumbley
In this paper, we propose to use a convolutional neural network (CNN) to extract robust features from mel-filter banks (MFBs), spectrograms or even raw waveforms for audio tagging.
1 code implementation • 6 Oct 2016 • Qiuqiang Kong, Yong Xu, Wenwu Wang, Mark Plumbley
The labeling of an audio clip is often based on the audio events in the clip, and no event-level label is provided to the user.
Sound
no code implementations • 13 Jul 2016 • Yong Xu, Qiang Huang, Wenwu Wang, Mark D. Plumbley
In this paper, we present a deep neural network (DNN)-based acoustic scene classification framework.
1 code implementation • 13 Jul 2016 • Yong Xu, Qiang Huang, Wenwu Wang, Peter Foster, Siddharth Sigtia, Philip J. B. Jackson, Mark D. Plumbley
For the unsupervised feature learning, we propose to use a symmetric or asymmetric deep de-noising auto-encoder (sDAE or aDAE) to generate new data-driven features from the Mel-Filter Banks (MFBs) features.
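The de-noising auto-encoder idea can be sketched as a one-hidden-layer network: a corrupted input is encoded to a bottleneck and decoded back toward the clean target, and the bottleneck activations become the new data-driven features. This is a generic sketch with random weights; the paper's sDAE/aDAE specifically use symmetric or asymmetric context windows of MFB frames.

```python
import numpy as np

rng = np.random.default_rng(0)

def dae(x_noisy, W_enc, b_enc, W_dec, b_dec):
    """One-hidden-layer de-noising auto-encoder: encode a corrupted
    input to a bottleneck (the data-driven feature), then decode back
    toward the clean frame."""
    h = np.tanh(x_noisy @ W_enc + b_enc)   # bottleneck feature
    x_hat = h @ W_dec + b_dec              # reconstruction
    return h, x_hat

n_in, n_hidden = 40, 16
W_enc = 0.1 * rng.standard_normal((n_in, n_hidden))
b_enc = np.zeros(n_hidden)
W_dec = 0.1 * rng.standard_normal((n_hidden, n_in))
b_dec = np.zeros(n_in)

x_clean = rng.standard_normal(n_in)                  # a clean MFB-like frame
x_noisy = x_clean + 0.1 * rng.standard_normal(n_in)  # the corruption step
feature, recon = dae(x_noisy, W_enc, b_enc, W_dec, b_dec)
```

Training would minimise the reconstruction error against the clean frame; at feature-extraction time only the encoder half is kept.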
no code implementations • 24 Jun 2016 • Yong Xu, Qiang Huang, Wenwu Wang, Philip J. B. Jackson, Mark D. Plumbley
Compared with the conventional Gaussian Mixture Model (GMM) and support vector machine (SVM) methods, the proposed fully DNN-based method can better utilize the long-term temporal information, with the whole chunk as the input.