Search Results for author: Eng Siong Chng

Found 51 papers, 19 papers with code

Metric-oriented Speech Enhancement using Diffusion Probabilistic Model

no code implementations23 Feb 2023 Chen Chen, Yuchen Hu, Weiwei Weng, Eng Siong Chng

Deep neural network based speech enhancement techniques focus on learning a noisy-to-clean transformation supervised by paired training data.

Speech Enhancement

Unsupervised Noise adaptation using Data Simulation

no code implementations23 Feb 2023 Chen Chen, Yuchen Hu, Heqing Zou, Linhui Sun, Eng Siong Chng

Deep neural network based speech enhancement approaches aim to learn a noisy-to-clean transformation using a supervised learning paradigm.

Domain Adaptation Speech Enhancement

Unifying Speech Enhancement and Separation with Gradient Modulation for End-to-End Noise-Robust Speech Separation

1 code implementation22 Feb 2023 Yuchen Hu, Chen Chen, Heqing Zou, Xionghu Zhong, Eng Siong Chng

To alleviate this problem, we propose a novel network to unify speech enhancement and separation with gradient modulation to improve noise-robustness.

Multi-Task Learning Speech Enhancement +1

Gradient Remedy for Multi-Task Learning in End-to-End Noise-Robust Speech Recognition

1 code implementation22 Feb 2023 Yuchen Hu, Chen Chen, Ruizhe Li, Qiushi Zhu, Eng Siong Chng

In this paper, we propose a simple yet effective approach called gradient remedy (GR) to resolve interference between task gradients in noise-robust speech recognition, from the perspectives of both angle and magnitude.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +4
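
The entry above describes resolving conflicts between task gradients from both an angle and a magnitude perspective. Below is a minimal, hedged sketch of that general idea (in the spirit of gradient-projection methods such as PCGrad), not the paper's exact gradient remedy (GR) algorithm; the function name and the magnitude-capping heuristic are illustrative assumptions.

```python
# Hedged sketch: de-conflicting two task gradients by angle and magnitude.
# NOT the paper's exact GR algorithm -- a generic gradient-projection illustration.
import numpy as np

def deconflict(g_main: np.ndarray, g_aux: np.ndarray, max_ratio: float = 1.0) -> np.ndarray:
    """Return a combined update from a main-task and an auxiliary-task gradient.

    1) Angle: if the gradients conflict (negative cosine similarity), project the
       auxiliary gradient onto the plane orthogonal to the main gradient.
    2) Magnitude: cap the auxiliary gradient norm at `max_ratio` times the main
       gradient norm so the auxiliary task cannot dominate the update.
    """
    cos = g_main @ g_aux / (np.linalg.norm(g_main) * np.linalg.norm(g_aux) + 1e-12)
    if cos < 0:  # conflicting directions: remove the component along g_main
        g_aux = g_aux - (g_aux @ g_main) / (g_main @ g_main + 1e-12) * g_main
    cap = max_ratio * np.linalg.norm(g_main)
    if np.linalg.norm(g_aux) > cap:  # rescale the auxiliary gradient
        g_aux = g_aux * cap / (np.linalg.norm(g_aux) + 1e-12)
    return g_main + g_aux

# Example: two 3-D gradients at an obtuse angle -> prints [1. 1. 0.]
print(deconflict(np.array([1.0, 0.0, 0.0]), np.array([-1.0, 2.0, 0.0])))
```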

Probabilistic Back-ends for Online Speaker Recognition and Clustering

1 code implementation19 Feb 2023 Alexey Sholokhov, Nikita Kuzmin, Kong Aik Lee, Eng Siong Chng

This paper focuses on multi-enrollment speaker recognition which naturally occurs in the task of online speaker clustering, and studies the properties of different scoring back-ends in this scenario.

Online Clustering Speaker Recognition

Improving Spoken Language Identification with Map-Mix

1 code implementation16 Feb 2023 Shangeth Rajaa, Kriti Anandan, Swaraj Dalmia, Tarun Gupta, Eng Siong Chng

The pre-trained multi-lingual XLSR model generalizes well for language identification after fine-tuning on unseen languages.

Data Augmentation Language Identification +1

Speech-text based multi-modal training with bidirectional attention for improved speech recognition

1 code implementation1 Nov 2022 Yuhang Yang, HaiHua Xu, Hao Huang, Eng Siong Chng, Sheng Li

To let a state-of-the-art end-to-end ASR model enjoy both data efficiency and much more unpaired text data through multi-modal training, two problems need to be addressed: 1) the synchronicity of feature sampling rates between speech and language (i.e., text data); 2) the homogeneity of the representations learned by the two encoders.

speech-recognition Speech Recognition
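
A minimal sketch of one way to handle the two problems named above, under stated assumptions rather than the paper's actual architecture: the speech/text length mismatch is bridged by adaptive pooling of the speech encoder output down to the text-token length, and representation homogeneity is encouraged with a simple cosine alignment loss. All module and tensor names are illustrative.

```python
# Hedged sketch, not the paper's method: length alignment + representation alignment.
import torch
import torch.nn.functional as F

def alignment_loss(speech_repr: torch.Tensor, text_repr: torch.Tensor) -> torch.Tensor:
    """speech_repr: (B, T_speech, D) frame-level speech encoder output
       text_repr:   (B, T_text, D)  token-level text encoder output, T_text << T_speech"""
    # (1) synchronicity: pool speech frames down to the text-token rate
    pooled = F.adaptive_avg_pool1d(speech_repr.transpose(1, 2), text_repr.size(1)).transpose(1, 2)
    # (2) homogeneity: push the two sequences of vectors toward each other
    return 1.0 - F.cosine_similarity(pooled, text_repr, dim=-1).mean()

speech = torch.randn(2, 400, 256)  # 2 utterances, 400 frames, 256-dim features
text = torch.randn(2, 30, 256)     # same utterances, 30 text tokens
print(alignment_loss(speech, text))
```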

Amino Acid Classification in 2D NMR Spectra via Acoustic Signal Embeddings

no code implementations1 Aug 2022 Jia Qi Yip, Dianwen Ng, Bin Ma, Konstantin Pervushin, Eng Siong Chng

Nuclear Magnetic Resonance (NMR) is used in structural biology to experimentally determine the structure of proteins, which is used in many areas of biology and is an important part of drug development.

Speaker Verification

Continual Learning For On-Device Environmental Sound Classification

1 code implementation15 Jul 2022 Yang Xiao, Xubo Liu, James King, Arshdeep Singh, Eng Siong Chng, Mark D. Plumbley, Wenwu Wang

Experimental results on the DCASE 2019 Task 1 and ESC-50 datasets show that our proposed method outperforms baseline continual learning methods in classification accuracy and computational efficiency, indicating that our method can efficiently and incrementally learn new classes without catastrophic forgetting for on-device environmental sound classification.

Classification Continual Learning +1

Internal Language Model Estimation based Language Model Fusion for Cross-Domain Code-Switching Speech Recognition

no code implementations9 Jul 2022 Yizhou Peng, Yufei Liu, Jicheng Zhang, HaiHua Xu, Yi He, Hao Huang, Eng Siong Chng

More importantly, we train an end-to-end (E2E) speech recognition model by means of merging two monolingual data sets and observe the efficacy of the proposed ILME-based LM fusion for CSSR.

Language Modelling speech-recognition +1

Intermediate-layer output Regularization for Attention-based Speech Recognition with Shared Decoder

no code implementations9 Jul 2022 Jicheng Zhang, Yizhou Peng, HaiHua Xu, Yi He, Eng Siong Chng, Hao Huang

Intermediate layer output (ILO) regularization by means of multitask training on encoder side has been shown to be an effective approach to yielding improved results on a wide range of end-to-end ASR frameworks.

speech-recognition Speech Recognition

Language-Based Audio Retrieval with Converging Tied Layers and Contrastive Loss

no code implementations29 Jun 2022 Andrew Koh, Eng Siong Chng

In this paper, we tackle the new Language-Based Audio Retrieval task proposed in DCASE 2022.

Retrieval

Self-critical Sequence Training for Automatic Speech Recognition

no code implementations13 Apr 2022 Chen Chen, Yuchen Hu, Nana Hou, Xiaofeng Qi, Heqing Zou, Eng Siong Chng

Although the automatic speech recognition (ASR) task has achieved remarkable success with sequence-to-sequence models, there are two main mismatches between its training and testing that can lead to performance degradation: 1) the typically used cross-entropy criterion aims to maximize the log-likelihood of the training data, while performance is evaluated by word error rate (WER), not log-likelihood; 2) the teacher-forcing method makes the model depend on the ground truth during training, which means the model is never exposed to its own predictions before testing.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +2
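
A hedged sketch of the core idea behind sequence-level training toward WER rather than log-likelihood: weight a sampled hypothesis' log-probability by how much its WER improves over a greedy baseline (the self-critical trick). This is a generic illustration, not the paper's exact training recipe; all names are illustrative.

```python
# Hedged sketch: WER-based self-critical reward (generic, not the paper's recipe).
import numpy as np

def wer(ref: list, hyp: list) -> float:
    """Word error rate via edit distance (substitutions + insertions + deletions)."""
    d = np.zeros((len(ref) + 1, len(hyp) + 1), dtype=int)
    d[:, 0] = np.arange(len(ref) + 1)
    d[0, :] = np.arange(len(hyp) + 1)
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i, j] = min(d[i - 1, j] + 1, d[i, j - 1] + 1, d[i - 1, j - 1] + cost)
    return d[-1, -1] / max(len(ref), 1)

def self_critical_weight(ref: list, sampled: list, greedy: list) -> float:
    """Scale factor for the sampled hypothesis' log-probability: positive when the
    sampled sequence beats the greedy baseline in WER, negative otherwise."""
    return wer(ref, greedy) - wer(ref, sampled)

ref = "the cat sat on the mat".split()
print(self_critical_weight(ref, "the cat sat on a mat".split(), "a cat sat mat".split()))
```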

Rainbow Keywords: Efficient Incremental Learning for Online Spoken Keyword Spotting

1 code implementation30 Mar 2022 Yang Xiao, Nana Hou, Eng Siong Chng

Catastrophic forgetting is a thorny challenge when updating keyword spotting (KWS) models after deployment.

Data Augmentation Incremental Learning +3

Noise-robust Speech Recognition with 10 Minutes Unparalleled In-domain Data

no code implementations29 Mar 2022 Chen Chen, Nana Hou, Yuchen Hu, Shashank Shirol, Eng Siong Chng

Noise-robust speech recognition systems require large amounts of training data, including noisy speech data and corresponding transcripts, to achieve state-of-the-art performance in the face of various practical environments.

Robust Speech Recognition speech-recognition

Interactive Audio-text Representation for Automated Audio Captioning with Contrastive Learning

no code implementations29 Mar 2022 Chen Chen, Nana Hou, Yuchen Hu, Heqing Zou, Xiaofeng Qi, Eng Siong Chng

Automated audio captioning (AAC) is a cross-modal task that generates natural language to describe the content of input audio.

Audio captioning Contrastive Learning

Speech Emotion Recognition with Co-Attention based Multi-level Acoustic Information

1 code implementation29 Mar 2022 Heqing Zou, Yuke Si, Chen Chen, Deepu Rajan, Eng Siong Chng

In this paper, we propose an end-to-end speech emotion recognition system using multi-level acoustic information with a newly designed co-attention module.

Speech Emotion Recognition

Dual-Path Style Learning for End-to-End Noise-Robust Speech Recognition

1 code implementation28 Mar 2022 Yuchen Hu, Nana Hou, Chen Chen, Eng Siong Chng

To alleviate this, we propose a dual-path style learning approach for end-to-end noise-robust automatic speech recognition (DPSL-ASR).

Automatic Speech Recognition Automatic Speech Recognition (ASR) +3

L-SpEx: Localized Target Speaker Extraction

1 code implementation21 Feb 2022 Meng Ge, Chenglin Xu, Longbiao Wang, Eng Siong Chng, Jianwu Dang, Haizhou Li

Speaker extraction aims to extract the target speaker's voice from a multi-talker speech mixture given an auxiliary reference utterance.

Target Speaker Extraction

A Unified Speaker Adaptation Approach for ASR

1 code implementation EMNLP 2021 Yingzhu Zhao, Chongjia Ni, Cheung-Chi Leung, Shafiq Joty, Eng Siong Chng, Bin Ma

For model adaptation, we use a novel gradual pruning method to adapt to target speakers without changing the model architecture, which, to the best of our knowledge, has never been explored in ASR.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +1

Interactive Feature Fusion for End-to-End Noise-Robust Speech Recognition

1 code implementation11 Oct 2021 Yuchen Hu, Nana Hou, Chen Chen, Eng Siong Chng

Speech enhancement (SE) aims to suppress the additive noise from a noisy speech signal to improve the speech's perceptual quality and intelligibility.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +3

Minimum word error training for non-autoregressive Transformer-based code-switching ASR

no code implementations7 Oct 2021 Yizhou Peng, Jicheng Zhang, HaiHua Xu, Hao Huang, Eng Siong Chng

The non-autoregressive end-to-end ASR framework is potentially appropriate for the code-switching recognition task thanks to its inherent property that the present output token is independent of the historical ones.

Automated Audio Captioning using Transfer Learning and Reconstruction Latent Space Similarity Regularization

no code implementations10 Aug 2021 Andrew Koh, Fuzhao Xue, Eng Siong Chng

In this paper, we examine the use of Transfer Learning using Pretrained Audio Neural Networks (PANNs), and propose an architecture that is able to better leverage the acoustic features provided by PANNs for the Automated Audio Captioning Task.

Audio captioning Transfer Learning

E2E-based Multi-task Learning Approach to Joint Speech and Accent Recognition

no code implementations15 Jun 2021 Jicheng Zhang, Yizhou Peng, Pham Van Tung, HaiHua Xu, Hao Huang, Eng Siong Chng

In this paper, we propose a single multi-task learning framework to perform End-to-End (E2E) speech recognition (ASR) and accent recognition (AR) simultaneously.

Multi-Task Learning speech-recognition +1

End-to-End Speaker Height and age estimation using Attention Mechanism with LSTM-RNN

no code implementations13 Jan 2021 Manav Kaushik, Van Tung Pham, Eng Siong Chng

In this work, we propose a novel approach of using attention mechanism to build an end-to-end architecture for height and age estimation.

Age Estimation Multi-Task Learning

An Embarrassingly Simple Model for Dialogue Relation Extraction

1 code implementation27 Dec 2020 Fuzhao Xue, Aixin Sun, Hao Zhang, Jinjie Ni, Eng Siong Chng

Dialogue relation extraction (RE) is to predict the relation type of two entities mentioned in a dialogue.

Dialog Relation Extraction

GDPNet: Refining Latent Multi-View Graph for Relation Extraction

1 code implementation12 Dec 2020 Fuzhao Xue, Aixin Sun, Hao Zhang, Eng Siong Chng

Recent advances on the RE task come from BERT-based sequence modeling and graph-based modeling of relationships among the tokens in the sequence.

Ranked #4 on Dialog Relation Extraction on DialogRE (F1c (v1) metric)

Dialog Relation Extraction Dynamic Time Warping

Multilingual Approach to Joint Speech and Accent Recognition with DNN-HMM Framework

no code implementations22 Oct 2020 Yizhou Peng, Jicheng Zhang, Haobo Zhang, HaiHua Xu, Hao Huang, Eng Siong Chng

Experimental results on an 8-accent English speech recognition task show that both methods can yield WERs close to those of conventional ASR systems that completely ignore the accent, as well as the desired AR accuracy.

speech-recognition Speech Recognition +1

Approaches to Improving Recognition of Underrepresented Named Entities in Hybrid ASR Systems

no code implementations18 May 2020 Tingzhi Mao, Yerbolat Khassanov, Van Tung Pham, Hai-Hua Xu, Hao Huang, Eng Siong Chng

In this paper, we present a series of complementary approaches to improve the recognition of underrepresented named entities (NE) in hybrid ASR systems without compromising overall word error rate performance.

Language Modelling

SpEx+: A Complete Time Domain Speaker Extraction Network

no code implementations10 May 2020 Meng Ge, Cheng-Lin Xu, Longbiao Wang, Eng Siong Chng, Jianwu Dang, Haizhou Li

To eliminate such a mismatch, we propose a complete time-domain speaker extraction solution, called SpEx+.

Audio and Speech Processing Sound

Time-domain speaker extraction network

no code implementations29 Apr 2020 Cheng-Lin Xu, Wei Rao, Eng Siong Chng, Haizhou Li

The inaccuracy of phase estimation is inherent to frequency-domain processing and affects the quality of signal reconstruction.

Audio and Speech Processing Sound
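
A small, self-contained illustration of the phase problem stated above: reconstructing a signal from its STFT magnitude with the phase discarded (set to zero) clearly degrades it, whereas keeping the true phase reconstructs it almost perfectly. The tone, sampling rate, and STFT settings are arbitrary choices for the demo.

```python
# Demo: magnitude-only frequency-domain reconstruction suffers from missing phase.
import numpy as np
from scipy.signal import stft, istft

fs = 8000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t)                         # 1 s of a 440 Hz tone

_, _, Z = stft(x, fs=fs, nperseg=256)
_, x_true_phase = istft(Z, fs=fs, nperseg=256)           # magnitude + true phase
_, x_zero_phase = istft(np.abs(Z), fs=fs, nperseg=256)   # magnitude only, phase discarded

n = min(len(x), len(x_true_phase), len(x_zero_phase))

def rmse(y: np.ndarray) -> float:
    return float(np.sqrt(np.mean((x[:n] - y[:n]) ** 2)))

print(f"RMSE with true phase: {rmse(x_true_phase):.4f}")  # ~0: near-perfect reconstruction
print(f"RMSE with zero phase: {rmse(x_zero_phase):.4f}")  # large: signal badly distorted
```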

SpEx: Multi-Scale Time Domain Speaker Extraction Network

1 code implementation17 Apr 2020 Cheng-Lin Xu, Wei Rao, Eng Siong Chng, Haizhou Li

Inspired by Conv-TasNet, we propose a time-domain speaker extraction network (SpEx) that converts the mixture speech into multi-scale embedding coefficients instead of decomposing the speech signal into magnitude and phase spectra.

Multi-Task Learning
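
A hedged sketch of a multi-scale time-domain encoder in the Conv-TasNet style referenced above: parallel 1-D convolutions with short, middle, and long windows over the raw waveform, whose outputs are concatenated into multi-scale embedding coefficients. The kernel sizes, stride, and channel counts are illustrative, not the exact SpEx configuration.

```python
# Hedged sketch: multi-scale 1-D convolutional encoder over the raw waveform.
import torch
import torch.nn as nn

class MultiScaleEncoder(nn.Module):
    def __init__(self, out_channels: int = 256, kernels=(20, 80, 160), stride: int = 10):
        super().__init__()
        # one Conv1d per analysis window length (short / middle / long)
        self.convs = nn.ModuleList(
            [nn.Conv1d(1, out_channels, k, stride=stride, padding=(k - stride) // 2)
             for k in kernels]
        )

    def forward(self, wav: torch.Tensor) -> torch.Tensor:
        # wav: (batch, samples) raw mixture waveform
        x = wav.unsqueeze(1)                               # (batch, 1, samples)
        feats = [torch.relu(conv(x)) for conv in self.convs]
        # crop to the shortest time axis so the scales can be concatenated
        t = min(f.size(-1) for f in feats)
        return torch.cat([f[..., :t] for f in feats], dim=1)  # (batch, 3*out_channels, frames)

enc = MultiScaleEncoder()
print(enc(torch.randn(2, 16000)).shape)  # torch.Size([2, 768, 1600]) for this input
```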

Enriching Rare Word Representations in Neural Language Models by Embedding Matrix Augmentation

1 code implementation8 Apr 2019 Yerbolat Khassanov, Zhiping Zeng, Van Tung Pham, Hai-Hua Xu, Eng Siong Chng

However, learning the representations of rare words is a challenging problem that causes the NLM to produce unreliable probability estimates.

speech-recognition Speech Recognition

On the End-to-End Solution to Mandarin-English Code-switching Speech Recognition

1 code implementation1 Nov 2018 Zhiping Zeng, Yerbolat Khassanov, Van Tung Pham, Hai-Hua Xu, Eng Siong Chng, Haizhou Li

Code-switching (CS) refers to a linguistic phenomenon where a speaker uses different languages in an utterance or between alternating utterances.

Data Augmentation Language Identification +3

Unsupervised and Efficient Vocabulary Expansion for Recurrent Neural Network Language Models in ASR

no code implementations27 Jun 2018 Yerbolat Khassanov, Eng Siong Chng

Additionally, we propose to generate the list of OOS words for vocabulary expansion in an unsupervised manner, by automatically extracting them from the ASR output.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +1

Study of Semi-supervised Approaches to Improving English-Mandarin Code-Switching Speech Recognition

no code implementations16 Jun 2018 Pengcheng Guo, Hai-Hua Xu, Lei Xie, Eng Siong Chng

In this paper, we present our overall efforts to improve the performance of a code-switching speech recognition system using semi-supervised training methods from lexicon learning to acoustic modeling, on the South East Asian Mandarin-English (SEAME) data.

speech-recognition Speech Recognition

Spoofing detection under noisy conditions: a preliminary investigation and an initial database

no code implementations9 Feb 2016 Xiaohai Tian, Zhizheng Wu, Xiong Xiao, Eng Siong Chng, Haizhou Li

To simulate the real-life scenarios, we perform a preliminary investigation of spoofing detection under additive noisy conditions, and also describe an initial database for this task.

Speaker Verification
