Search Results for author: Haizhou Li

Found 133 papers, 47 papers with code

TTS-Guided Training for Accent Conversion Without Parallel Data

no code implementations20 Dec 2022 Yi Zhou, Zhizheng Wu, Mingyang Zhang, Xiaohai Tian, Haizhou Li

Specifically, a text-to-speech (TTS) system is first pretrained with target-accented speech data.

PoE: a Panel of Experts for Generalized Automatic Dialogue Assessment

no code implementations18 Dec 2022 Chen Zhang, Luis Fernando D'Haro, Qiquan Zhang, Thomas Friedrichs, Haizhou Li

To tackle the multi-domain dialogue evaluation task, we propose a Panel of Experts (PoE), a multitask network that consists of a shared transformer encoder and a collection of lightweight adapters.

Data Augmentation Dialogue Evaluation +3
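The shared-encoder-plus-lightweight-adapters design described in the PoE abstract can be sketched as a standard bottleneck adapter; the dimensions and random weights below are illustrative assumptions, not details from the paper.

```python
import numpy as np

def adapter(h, W_down, W_up):
    # Bottleneck adapter: down-project, nonlinearity, up-project, residual add.
    z = np.maximum(h @ W_down, 0.0)   # ReLU bottleneck
    return h + z @ W_up               # residual connection back to the encoder output

rng = np.random.default_rng(0)
d, r = 8, 2                           # hidden size and bottleneck size (hypothetical)
h = rng.standard_normal((4, d))       # 4 token representations from the shared encoder
W_down = rng.standard_normal((d, r)) * 0.1
W_up = rng.standard_normal((r, d)) * 0.1
out = adapter(h, W_down, W_up)
print(out.shape)  # (4, 8)
```

Because only the small `W_down`/`W_up` matrices are task-specific, each "expert" adds few parameters on top of the shared transformer encoder.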

Relational Sentence Embedding for Flexible Semantic Matching

1 code implementation17 Dec 2022 Bin Wang, Haizhou Li

We present Relational Sentence Embedding (RSE), a new paradigm to further discover the potential of sentence embeddings.

Semantic Textual Similarity Sentence Embedding +1

Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation

2 code implementations20 Nov 2022 Jiawei Du, Yidi Jiang, Vincent Y. F. Tan, Joey Tianyi Zhou, Haizhou Li

To alleviate the adverse impact of this accumulated trajectory error, we propose a novel approach that encourages the optimization algorithm to seek a flat trajectory.

Neural Architecture Search

Self-Transriber: Few-shot Lyrics Transcription with Self-training

no code implementations18 Nov 2022 Xiaoxue Gao, Xianghu Yue, Haizhou Li

The current lyrics transcription approaches heavily rely on supervised learning with labeled data, but such data are scarce and manual labeling of singing is expensive.

Few-Shot Learning

token2vec: A Joint Self-Supervised Pre-training Framework Using Unpaired Speech and Text

no code implementations30 Oct 2022 Xianghu Yue, Junyi Ao, Xiaoxue Gao, Haizhou Li

First, owing to the distinct characteristics of the speech and text modalities, where speech is continuous while text is discrete, we discretize speech into a sequence of discrete speech tokens to solve the modality mismatch problem.

intent-classification Intent Classification +1
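The speech-discretization step described above can be sketched as nearest-centroid quantization against a learned codebook; the random codebook and sizes here are purely illustrative, and the paper's actual tokenizer may differ.

```python
import numpy as np

def discretize(frames, codebook):
    # Assign each continuous speech frame to its nearest codebook entry,
    # turning continuous speech into a sequence of discrete speech tokens.
    d2 = ((frames[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (T, K) squared distances
    return d2.argmin(axis=1)                                         # token id per frame

rng = np.random.default_rng(1)
codebook = rng.standard_normal((16, 4))   # 16 hypothetical token centroids, 4-dim features
frames = codebook[[3, 3, 7, 12]] + 0.01 * rng.standard_normal((4, 4))  # noisy copies
tokens = discretize(frames, codebook)
print(tokens.tolist())  # [3, 3, 7, 12]
```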

Speaker recognition with two-step multi-modal deep cleansing

1 code implementation28 Oct 2022 Ruijie Tao, Kong Aik Lee, Zhan Shi, Haizhou Li

However, noisy samples (i.e., those with wrong labels) in the training set induce confusion and cause the network to learn incorrect representations.

Representation Learning Speaker Recognition

Self-Supervised Training of Speaker Encoder with Multi-Modal Diverse Positive Pairs

no code implementations27 Oct 2022 Ruijie Tao, Kong Aik Lee, Rohan Kumar Das, Ville Hautamäki, Haizhou Li

We study a novel neural architecture for a speaker encoder and its training strategies for speaker recognition without using any identity labels.

Contrastive Learning Self-Supervised Learning +1

FCTalker: Fine and Coarse Grained Context Modeling for Expressive Conversational Speech Synthesis

1 code implementation27 Oct 2022 Yifan Hu, Rui Liu, Guanglai Gao, Haizhou Li

Therefore, we propose a novel expressive conversational TTS model, termed FCTalker, that learns fine- and coarse-grained context dependency simultaneously during speech generation.

Speech Synthesis

Explicit Intensity Control for Accented Text-to-speech

no code implementations27 Oct 2022 Rui Liu, Haolin Zuo, De Hu, Guanglai Gao, Haizhou Li

Accented text-to-speech (TTS) synthesis seeks to generate speech with an accent (L2) as a variant of the standard version (L1).

speech-recognition Speech Recognition

Mixed Emotion Modelling for Emotional Voice Conversion

no code implementations25 Oct 2022 Kun Zhou, Berrak Sisman, Carlos Busso, Haizhou Li

Each attribute measures the degree of relevance between speech recordings belonging to different emotion types.

Voice Conversion

FineD-Eval: Fine-grained Automatic Dialogue-Level Evaluation

2 code implementations25 Oct 2022 Chen Zhang, Luis Fernando D'Haro, Qiquan Zhang, Thomas Friedrichs, Haizhou Li

Recent model-based reference-free metrics for open-domain dialogue evaluation exhibit promising correlations with human judgment.

Dialogue Evaluation

Analyzing and Evaluating Faithfulness in Dialogue Summarization

1 code implementation21 Oct 2022 Bin Wang, Chen Zhang, Yan Zhang, Yiming Chen, Haizhou Li

The factual correctness of summaries is of the highest priority before practical application.

Text Summarization

Training Spiking Neural Networks with Local Tandem Learning

1 code implementation10 Oct 2022 Qu Yang, Jibin Wu, Malu Zhang, Yansong Chua, Xinchao Wang, Haizhou Li

The LTL rule follows the teacher-student learning approach by mimicking the intermediate feature representations of a pre-trained ANN.
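The layer-wise teacher-student idea can be sketched as one local MSE loss per layer between the spiking student's intermediate features and the frozen ANN teacher's; the toy tensors below are illustrative only.

```python
import numpy as np

def local_tandem_losses(student_feats, teacher_feats):
    # One local mean-squared error per layer: each student layer mimics the
    # corresponding intermediate feature map of the pre-trained ANN teacher.
    return [float(np.mean((s - t) ** 2)) for s, t in zip(student_feats, teacher_feats)]

teacher = [np.ones((3, 5)), np.zeros((3, 4))]   # per-layer teacher activations
student = [np.ones((3, 5)), np.ones((3, 4))]    # layer 1 matches, layer 2 does not
losses = local_tandem_losses(student, teacher)
print(losses)  # [0.0, 1.0]
```

Because each loss is local to a layer, the rule avoids backpropagating errors through the whole spiking network.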

The Kriston AI System for the VoxCeleb Speaker Recognition Challenge 2022

no code implementations23 Sep 2022 Qutang Cai, Guoqiang Hong, Zhijian Ye, Ximin Li, Haizhou Li

This technical report describes our system for tracks 1, 2, and 4 of the VoxCeleb Speaker Recognition Challenge 2022 (VoxSRC-22).

Action Detection Activity Detection +1

Controllable Accented Text-to-Speech Synthesis

no code implementations22 Sep 2022 Rui Liu, Berrak Sisman, Guanglai Gao, Haizhou Li

Accented TTS synthesis is challenging as L2 differs from L1 in terms of both phonetic rendering and prosody pattern.

Speech Synthesis Text-To-Speech Synthesis

Speech Synthesis with Mixed Emotions

no code implementations11 Aug 2022 Kun Zhou, Berrak Sisman, Rajib Rana, B. W. Schuller, Haizhou Li

We then incorporate our formulation into a sequence-to-sequence emotional text-to-speech framework.

Emotional Speech Synthesis

PoLyScriber: Integrated Training of Extractor and Lyrics Transcriber for Polyphonic Music

no code implementations15 Jul 2022 Xiaoxue Gao, Chitralekha Gupta, Haizhou Li

Lyrics transcription of polyphonic music is challenging as the background music affects lyrics intelligibility.

Accurate Emotion Strength Assessment for Seen and Unseen Speech Based on Data-Driven Deep Learning

1 code implementation15 Jun 2022 Rui Liu, Berrak Sisman, Björn Schuller, Guanglai Gao, Haizhou Li

In this paper, we propose a data-driven deep learning model, i.e., StrengthNet, to improve the generalization of emotion strength assessment for seen and unseen speech.

Emotion Classification Multi-Task Learning +1

M3ED: Multi-modal Multi-scene Multi-label Emotional Dialogue Database

1 code implementation ACL 2022 Jinming Zhao, Tenggan Zhang, Jingwen Hu, Yuchen Liu, Qin Jin, Xinchao Wang, Haizhou Li

In this work, we propose a Multi-modal Multi-scene Multi-label Emotional Dialogue dataset, M3ED, which contains 990 dyadic emotional dialogues from 56 different TV series, a total of 9,082 turns and 24,449 utterances.

Emotion Recognition

Music-robust Automatic Lyrics Transcription of Polyphonic Music

1 code implementation7 Apr 2022 Xiaoxue Gao, Chitralekha Gupta, Haizhou Li

To improve the robustness of lyrics transcription to background music, we propose combining features that emphasize the singing vocals, i.e., music-removed features extracted from the separated singing vocals, with features that capture the singing vocals together with the background music, i.e., music-present features.

Language Modelling

Genre-conditioned Acoustic Models for Automatic Lyrics Transcription of Polyphonic Music

no code implementations7 Apr 2022 Xiaoxue Gao, Chitralekha Gupta, Haizhou Li

Lyrics transcription of polyphonic music is challenging not only because the singing vocals are corrupted by the background music, but also because the background music and the singing style vary across music genres, such as pop, metal, and hip hop, which affects lyrics intelligibility of the song in different ways.

A Hybrid Continuity Loss to Reduce Over-Suppression for Time-domain Target Speaker Extraction

1 code implementation31 Mar 2022 Zexu Pan, Meng Ge, Haizhou Li

We propose a hybrid continuity loss function for time-domain speaker extraction algorithms to settle the over-suppression problem.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +2

Speaker Extraction with Co-Speech Gestures Cue

1 code implementation31 Mar 2022 Zexu Pan, Xinyuan Qian, Haizhou Li

Speaker extraction seeks to extract the clean speech of a target speaker from a multi-talker mixture speech.

Speech Separation

LightHuBERT: Lightweight and Configurable Speech Representation Learning with Once-for-All Hidden-Unit BERT

1 code implementation29 Mar 2022 Rui Wang, Qibing Bai, Junyi Ao, Long Zhou, Zhixiang Xiong, Zhihua Wei, Yu Zhang, Tom Ko, Haizhou Li

LightHuBERT outperforms the original HuBERT on ASR and five SUPERB tasks with the HuBERT size, achieves comparable performance to the teacher model in most tasks with a reduction of 29% parameters, and obtains a $3.5\times$ compression ratio in three SUPERB tasks, e.g., automatic speaker verification, keyword spotting, and intent classification, with a slight accuracy loss.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +6

L-SpEx: Localized Target Speaker Extraction

1 code implementation21 Feb 2022 Meng Ge, Chenglin Xu, Longbiao Wang, Eng Siong Chng, Jianwu Dang, Haizhou Li

Speaker extraction aims to extract the target speaker's voice from a multi-talker speech mixture given an auxiliary reference utterance.

Target Speaker Extraction

MFA: TDNN with Multi-scale Frequency-channel Attention for Text-independent Speaker Verification with Short Utterances

no code implementations3 Feb 2022 Tianchi Liu, Rohan Kumar Das, Kong Aik Lee, Haizhou Li

The time delay neural network (TDNN) represents one of the state-of-the-art neural solutions to text-independent speaker verification.

Text-Independent Speaker Verification

MDD-Eval: Self-Training on Augmented Data for Multi-Domain Dialogue Evaluation

1 code implementation14 Dec 2021 Chen Zhang, Luis Fernando D'Haro, Thomas Friedrichs, Haizhou Li

Chatbots are designed to carry out human-like conversations across different domains, such as general chit-chat, knowledge exchange, and persona-grounded conversations.

Dialogue Evaluation

HLT-NUS SUBMISSION FOR 2020 NIST Conversational Telephone Speech SRE

2 code implementations12 Nov 2021 Rohan Kumar Das, Ruijie Tao, Haizhou Li

This work provides a brief description of the Human Language Technology (HLT) Laboratory, National University of Singapore (NUS), system submission for the 2020 NIST conversational telephone speech (CTS) speaker recognition evaluation (SRE).

Domain Adaptation Speaker Recognition

MEmoBERT: Pre-training Model with Prompt-based Learning for Multimodal Emotion Recognition

no code implementations27 Oct 2021 Jinming Zhao, Ruichen Li, Qin Jin, Xinchao Wang, Haizhou Li

Multimodal emotion recognition study is hindered by the lack of labelled corpora in terms of scale and diversity, due to the high annotation cost and label ambiguity.

Emotion Classification Multimodal Emotion Recognition +1

Disentanglement of Emotional Style and Speaker Identity for Expressive Voice Conversion

no code implementations20 Oct 2021 Zongyang Du, Berrak Sisman, Kun Zhou, Haizhou Li

Expressive voice conversion performs identity conversion for emotional speakers by jointly converting speaker identity and emotional style.

Disentanglement Voice Conversion

Ego4D: Around the World in 3,000 Hours of Egocentric Video

3 code implementations CVPR 2022 Kristen Grauman, Andrew Westbury, Eugene Byrne, Zachary Chavis, Antonino Furnari, Rohit Girdhar, Jackson Hamburger, Hao Jiang, Miao Liu, Xingyu Liu, Miguel Martin, Tushar Nagarajan, Ilija Radosavovic, Santhosh Kumar Ramakrishnan, Fiona Ryan, Jayant Sharma, Michael Wray, Mengmeng Xu, Eric Zhongcong Xu, Chen Zhao, Siddhant Bansal, Dhruv Batra, Vincent Cartillier, Sean Crane, Tien Do, Morrie Doulaty, Akshay Erapalli, Christoph Feichtenhofer, Adriano Fragomeni, Qichen Fu, Abrham Gebreselasie, Cristina Gonzalez, James Hillis, Xuhua Huang, Yifei HUANG, Wenqi Jia, Weslie Khoo, Jachym Kolar, Satwik Kottur, Anurag Kumar, Federico Landini, Chao Li, Yanghao Li, Zhenqiang Li, Karttikeya Mangalam, Raghava Modhugu, Jonathan Munro, Tullie Murrell, Takumi Nishiyasu, Will Price, Paola Ruiz Puentes, Merey Ramazanova, Leda Sari, Kiran Somasundaram, Audrey Southerland, Yusuke Sugano, Ruijie Tao, Minh Vo, Yuchen Wang, Xindi Wu, Takuma Yagi, Ziwei Zhao, Yunyi Zhu, Pablo Arbelaez, David Crandall, Dima Damen, Giovanni Maria Farinella, Christian Fuegen, Bernard Ghanem, Vamsi Krishna Ithapu, C. V. Jawahar, Hanbyul Joo, Kris Kitani, Haizhou Li, Richard Newcombe, Aude Oliva, Hyun Soo Park, James M. Rehg, Yoichi Sato, Jianbo Shi, Mike Zheng Shou, Antonio Torralba, Lorenzo Torresani, Mingfei Yan, Jitendra Malik

We introduce Ego4D, a massive-scale egocentric video dataset and benchmark suite.

De-identification Ethics

DeepA: A Deep Neural Analyzer For Speech And Singing Vocoding

no code implementations13 Oct 2021 Sergey Nikonorov, Berrak Sisman, Mingyang Zhang, Haizhou Li

At the same time, as the deep neural analyzer is learnable, it is expected to be more accurate for signal reconstruction and manipulation, and generalizable from speech to singing.

Speech Synthesis Voice Conversion

VisualTTS: TTS with Accurate Lip-Speech Synchronization for Automatic Voice Over

no code implementations7 Oct 2021 Junchen Lu, Berrak Sisman, Rui Liu, Mingyang Zhang, Haizhou Li

The proposed VisualTTS adopts two novel mechanisms, namely 1) textual-visual attention and 2) a visual fusion strategy during acoustic decoding, both of which contribute to forming an accurate alignment between the input text content and the lip motion in the input lip sequence.

Speech Synthesis

StrengthNet: Deep Learning-based Emotion Strength Assessment for Emotional Speech Synthesis

1 code implementation7 Oct 2021 Rui Liu, Berrak Sisman, Haizhou Li

The emotion strength of synthesized speech can be controlled flexibly using a strength descriptor, which is obtained by an emotion attribute ranking function.

Data Augmentation Emotional Speech Synthesis +1

Revisiting Self-Training for Few-Shot Learning of Language Model

1 code implementation EMNLP 2021 Yiming Chen, Yan Zhang, Chen Zhang, Grandee Lee, Ran Cheng, Haizhou Li

In this work, we revisit the self-training technique for language model fine-tuning and present a state-of-the-art prompt-based few-shot learner, SFLM.

Benchmarking Few-Shot Learning +4

PL-EESR: Perceptual Loss Based END-TO-END Robust Speaker Representation Extraction

1 code implementation3 Oct 2021 Yi Ma, Kong Aik Lee, Ville Hautamaki, Haizhou Li

Speech enhancement aims to improve the perceptual quality of the speech signal by suppression of the background noise.

Speaker Identification Speaker Verification +1

USEV: Universal Speaker Extraction with Visual Cue

1 code implementation30 Sep 2021 Zexu Pan, Meng Ge, Haizhou Li

The speaker extraction algorithm requires an auxiliary reference, such as a video recording or a pre-recorded speech, to form top-down auditory attention on the target speaker.

Exploring Teacher-Student Learning Approach for Multi-lingual Speech-to-Intent Classification

no code implementations28 Sep 2021 Bidisha Sharma, Maulik Madhavi, Xuehao Zhou, Haizhou Li

In particular, we use synthesized speech generated from an English-Mandarin text corpus for analysis and training of a multi-lingual intent classification model.

Classification intent-classification +1

Knowledge Distillation from BERT Transformer to Speech Transformer for Intent Classification

1 code implementation5 Aug 2021 Yidi Jiang, Bidisha Sharma, Maulik Madhavi, Haizhou Li

In this regard, we leverage the reliable and widely used bidirectional encoder representations from transformers (BERT) model as a language model and transfer the knowledge to build an acoustic model for intent classification using the speech.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +7
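One common way to transfer a BERT teacher's knowledge to a speech-side student, sketched here under the usual soft-label distillation assumption (the paper's exact objective may differ; the temperature `T=2.0` is a hypothetical choice):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(-1, keepdims=True))
    return e / e.sum(-1, keepdims=True)

def distill_loss(student_logits, teacher_logits, T=2.0):
    # Cross-entropy between the student's and the teacher's
    # temperature-softened intent distributions.
    p_teacher = softmax(teacher_logits / T)
    log_p_student = np.log(softmax(student_logits / T) + 1e-12)
    return float(-(p_teacher * log_p_student).sum(-1).mean())

teacher = np.array([[4.0, 1.0, 0.0]])   # teacher confident in intent 0
aligned = np.array([[4.0, 1.0, 0.0]])   # student matching the teacher
off = np.array([[0.0, 4.0, 1.0]])       # student preferring a different intent
print(distill_loss(aligned, teacher) < distill_loss(off, teacher))  # True
```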

Serialized Multi-Layer Multi-Head Attention for Neural Speaker Embedding

no code implementations14 Jul 2021 Hongning Zhu, Kong Aik Lee, Haizhou Li

Instead of utilizing multi-head attention in parallel, the proposed serialized multi-layer multi-head attention is designed to aggregate and propagate attentive statistics from one layer to the next in a serialized manner.

Text-Independent Speaker Verification
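A single stage of attentive-statistics aggregation, the building block that the paper serializes across layers, might look like the following sketch (attention weights and dimensions are illustrative assumptions):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attentive_stats(frames, w):
    # One attention head: score frames, then aggregate the attention-weighted
    # mean and standard deviation as utterance-level statistics.
    a = softmax(frames @ w)                          # (T,) attention over frames
    mean = (a[:, None] * frames).sum(0)              # weighted mean, (D,)
    var = (a[:, None] * (frames - mean) ** 2).sum(0) # weighted variance, (D,)
    return np.concatenate([mean, np.sqrt(var + 1e-9)])

rng = np.random.default_rng(2)
frames = rng.standard_normal((10, 6))                # 10 frames, 6-dim features
stats = attentive_stats(frames, rng.standard_normal(6))
print(stats.shape)  # (12,)
```

In the serialized design, such statistics from one layer are propagated forward into the next layer rather than computed by parallel heads.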

Selective Listening by Synchronizing Speech with Lips

1 code implementation14 Jun 2021 Zexu Pan, Ruijie Tao, Chenglin Xu, Haizhou Li

A speaker extraction algorithm seeks to extract the speech of a target speaker from a multi-talker speech mixture when given a cue that represents the target speaker, such as a pre-enrolled speech utterance, or an accompanying video track.

Lip Reading Target Speaker Extraction

Emotional Voice Conversion: Theory, Databases and ESD

1 code implementation31 May 2021 Kun Zhou, Berrak Sisman, Rui Liu, Haizhou Li

In this paper, we first provide a review of the state-of-the-art emotional voice conversion research, and the existing emotional speech databases.

Voice Conversion

The Multi-speaker Multi-style Voice Cloning Challenge 2021

no code implementations5 Apr 2021 Qicong Xie, Xiaohai Tian, Guanghou Liu, Kun Song, Lei Xie, Zhiyong Wu, Hai Li, Song Shi, Haizhou Li, Fen Hong, Hui Bu, Xin Xu

The challenge consists of two tracks, namely few-shot track and one-shot track, where the participants are required to clone multiple target voices with 100 and 5 samples respectively.

Benchmarking Voice Cloning

Limited Data Emotional Voice Conversion Leveraging Text-to-Speech: Two-stage Sequence-to-Sequence Training

2 code implementations31 Mar 2021 Kun Zhou, Berrak Sisman, Haizhou Li

In stage 2, we perform emotion training with a limited amount of emotional speech data, to learn how to disentangle emotional style and linguistic information from the speech.

Voice Conversion

Target Speaker Verification with Selective Auditory Attention for Single and Multi-talker Speech

1 code implementation30 Mar 2021 Chenglin Xu, Wei Rao, Jibin Wu, Haizhou Li

Inspired by the study on target speaker extraction, e.g., SpEx, we propose a unified speaker verification framework for both single- and multi-talker speech, that is able to pay selective auditory attention to the target speaker.

Multi-Task Learning Speaker Verification +1

Leveraging Acoustic and Linguistic Embeddings from Pretrained speech and language Models for Intent Classification

no code implementations15 Feb 2021 Bidisha Sharma, Maulik Madhavi, Haizhou Li

An intent classification system is usually implemented as a pipeline process, with a speech recognition module followed by text processing that classifies the intents.

Classification General Classification +7

VAW-GAN for Disentanglement and Recomposition of Emotional Elements in Speech

no code implementations3 Nov 2020 Kun Zhou, Berrak Sisman, Haizhou Li

Emotional voice conversion (EVC) aims to convert the emotion of speech from one state to another while preserving the linguistic content and speaker identity.

Disentanglement Voice Conversion

Seen and Unseen emotional style transfer for voice conversion with a new emotional speech dataset

2 code implementations28 Oct 2020 Kun Zhou, Berrak Sisman, Rui Liu, Haizhou Li

Emotional voice conversion aims to transform emotional prosody in speech while preserving the linguistic content and speaker identity.

Speech Emotion Recognition Style Transfer +1

GraphSpeech: Syntax-Aware Graph Attention Network For Neural Speech Synthesis

no code implementations23 Oct 2020 Rui Liu, Berrak Sisman, Haizhou Li

Attention-based end-to-end text-to-speech synthesis (TTS) is superior to conventional statistical methods in many ways.

Graph Attention Speech Synthesis +1

Muse: Multi-modal target speaker extraction with visual cues

no code implementations15 Oct 2020 Zexu Pan, Ruijie Tao, Chenglin Xu, Haizhou Li

A speaker extraction algorithm relies on a speech sample from the target speaker as the reference point to focus its attention.

Target Speaker Extraction

Speaker-Utterance Dual Attention for Speaker and Utterance Verification

no code implementations20 Aug 2020 Tianchi Liu, Rohan Kumar Das, Maulik Madhavi, ShengMei Shen, Haizhou Li

The proposed SUDA features an attention mask mechanism to learn the interaction between the speaker and utterance information streams.

Speaker Verification

Modeling Prosodic Phrasing with Multi-Task Learning in Tacotron-based TTS

no code implementations11 Aug 2020 Rui Liu, Berrak Sisman, Feilong Bao, Guanglai Gao, Haizhou Li

We propose a multi-task learning scheme for Tacotron training, that optimizes the system to predict both Mel spectrum and phrase breaks.

Multi-Task Learning Speech Synthesis

Spectrum and Prosody Conversion for Cross-lingual Voice Conversion with CycleGAN

no code implementations11 Aug 2020 Zongyang Du, Kun Zhou, Berrak Sisman, Haizhou Li

It relies on non-parallel training data from two different languages, hence it is more challenging than mono-lingual voice conversion.

Voice Conversion

VAW-GAN for Singing Voice Conversion with Non-parallel Training Data

no code implementations10 Aug 2020 Junchen Lu, Kun Zhou, Berrak Sisman, Haizhou Li

We train an encoder to disentangle singer identity and singing prosody (F0 contour) from phonetic content.

Voice Conversion

Multi-Tones' Phase Coding (MTPC) of Interaural Time Difference by Spiking Neural Network

no code implementations7 Jul 2020 Zihan Pan, Malu Zhang, Jibin Wu, Haizhou Li

Inspired by the mammalian auditory localization pathway, in this paper we propose a pure spiking neural network (SNN) based computational model for precise sound localization in noisy real-world environments, and implement this algorithm in a real-time robotic system with a microphone array.

Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks

no code implementations2 Jul 2020 Jibin Wu, Cheng-Lin Xu, Daquan Zhou, Haizhou Li, Kay Chen Tan

In this paper, we propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition, which is referred to as progressive tandem learning of deep SNNs.

Image Reconstruction Object Recognition +1

Modeling Code-Switch Languages Using Bilingual Parallel Corpus

no code implementations ACL 2020 Grandee Lee, Haizhou Li

A bilingual language model is expected to model the sequential dependency for words across languages, which is difficult due to the inherent lack of suitable training data as well as diverse syntactic structure across languages.

Bilingual Lexicon Induction Language Modelling +1

Converting Anyone's Emotion: Towards Speaker-Independent Emotional Voice Conversion

1 code implementation13 May 2020 Kun Zhou, Berrak Sisman, Mingyang Zhang, Haizhou Li

We consider that there is a common code between speakers for emotional expression in a spoken language, therefore, a speaker-independent mapping between emotional states is possible.

Voice Conversion

SpEx+: A Complete Time Domain Speaker Extraction Network

no code implementations10 May 2020 Meng Ge, Cheng-Lin Xu, Longbiao Wang, Eng Siong Chng, Jianwu Dang, Haizhou Li

To eliminate such mismatch, we propose a complete time-domain speaker extraction solution, called SpEx+.

Audio and Speech Processing Sound

Time-domain speaker extraction network

no code implementations29 Apr 2020 Cheng-Lin Xu, Wei Rao, Eng Siong Chng, Haizhou Li

The inaccuracy of phase estimation is inherent to frequency-domain processing, which affects the quality of signal reconstruction.

Audio and Speech Processing Sound

SpEx: Multi-Scale Time Domain Speaker Extraction Network

1 code implementation17 Apr 2020 Cheng-Lin Xu, Wei Rao, Eng Siong Chng, Haizhou Li

Inspired by Conv-TasNet, we propose a time-domain speaker extraction network (SpEx) that converts the mixture speech into multi-scale embedding coefficients instead of decomposing the speech signal into magnitude and phase spectra.

Multi-Task Learning

Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks

no code implementations26 Mar 2020 Malu Zhang, Jiadong Wang, Burin Amornpaisannon, Zhixuan Zhang, VPK Miriyala, Ammar Belatreche, Hong Qu, Jibin Wu, Yansong Chua, Trevor E. Carlson, Haizhou Li

In STDBP algorithm, the timing of individual spikes is used to convey information (temporal coding), and learning (back-propagation) is performed based on spike timing in an event-driven manner.

Decision Making

WaveTTS: Tacotron-based TTS with Joint Time-Frequency Domain Loss

no code implementations2 Feb 2020 Rui Liu, Berrak Sisman, Feilong Bao, Guanglai Gao, Haizhou Li

To address this problem, we propose a new training scheme for Tacotron-based TTS, referred to as WaveTTS, that has two loss functions: 1) a time-domain loss, denoted as the waveform loss, which measures the distortion between the natural and generated waveforms; and 2) a frequency-domain loss, which measures the Mel-scale acoustic feature loss between the natural and generated acoustic features.
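The two-term objective described above can be sketched with L1 distances; `alpha` is a hypothetical mixing weight, not a value from the paper.

```python
import numpy as np

def wavetts_loss(nat_wav, gen_wav, nat_mel, gen_mel, alpha=0.5):
    # Joint objective: 1) time-domain (waveform) distortion plus
    # 2) frequency-domain Mel-scale acoustic feature loss.
    time_loss = np.mean(np.abs(nat_wav - gen_wav))
    freq_loss = np.mean(np.abs(nat_mel - gen_mel))
    return alpha * time_loss + (1.0 - alpha) * freq_loss

nat_wav, gen_wav = np.zeros(100), np.full(100, 0.2)   # toy waveforms
nat_mel, gen_mel = np.ones((10, 8)), np.ones((10, 8)) # matching Mel features
loss = wavetts_loss(nat_wav, gen_wav, nat_mel, gen_mel)
print(loss)  # 0.1: only the waveform term contributes here
```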

Transforming Spectrum and Prosody for Emotional Voice Conversion with Non-Parallel Training Data

1 code implementation1 Feb 2020 Kun Zhou, Berrak Sisman, Haizhou Li

Many studies require parallel speech data between different emotional patterns, which is not practical in real life.

Voice Conversion

Deep Spiking Neural Networks for Large Vocabulary Automatic Speech Recognition

1 code implementation19 Nov 2019 Jibin Wu, Emre Yilmaz, Malu Zhang, Haizhou Li, Kay Chen Tan

The brain-inspired spiking neural networks (SNN) closely mimic the biological neural networks and can operate on low-power neuromorphic hardware with spike-based computation.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +1

Teacher-Student Training for Robust Tacotron-based TTS

no code implementations7 Nov 2019 Rui Liu, Berrak Sisman, Jingdong Li, Feilong Bao, Guanglai Gao, Haizhou Li

We first train a Tacotron2-based TTS model by always providing natural speech frames to the decoder, which serves as the teacher model.

Knowledge Distillation

End-to-End Code-Switching ASR for Low-Resourced Language Pairs

no code implementations27 Sep 2019 Xianghu Yue, Grandee Lee, Emre Yilmaz, Fang Deng, Haizhou Li

In this work, we describe an E2E ASR pipeline for the recognition of CS speech in which a low-resourced language is mixed with a high-resourced language.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +2

Automatic Lyrics Alignment and Transcription in Polyphonic Music: Does Background Music Help?

no code implementations23 Sep 2019 Chitralekha Gupta, Emre Yilmaz, Haizhou Li

Automatic lyrics alignment and transcription in polyphonic music are challenging tasks because the singing vocals are corrupted by the background music.

Audio and Speech Processing Sound

Neural Population Coding for Effective Temporal Classification

no code implementations12 Sep 2019 Zihan Pan, Jibin Wu, Yansong Chua, Malu Zhang, Haizhou Li

We show that, with population neural codings, the encoded patterns are linearly separable using the Support Vector Machine (SVM).

Classification General Classification

An efficient and perceptually motivated auditory neural encoding and decoding algorithm for spiking neural networks

no code implementations3 Sep 2019 Zihan Pan, Yansong Chua, Jibin Wu, Malu Zhang, Haizhou Li, Eliathamby Ambikairajah

The neural encoding scheme, that we call Biologically plausible Auditory Encoding (BAE), emulates the functions of the perceptual components of the human auditory system, that include the cochlear filter bank, the inner hair cells, auditory masking effects from psychoacoustic models, and the spike neural encoding by the auditory nerve.

Benchmarking speech-recognition +1

A Tandem Learning Rule for Effective Training and Rapid Inference of Deep Spiking Neural Networks

1 code implementation2 Jul 2019 Jibin Wu, Yansong Chua, Malu Zhang, Guoqi Li, Haizhou Li, Kay Chen Tan

Spiking neural networks (SNNs) represent the most prominent biologically inspired computing model for neuromorphic computing (NC) architectures.

Event-based vision

Acoustic Modeling for Automatic Lyrics-to-Audio Alignment

no code implementations25 Jun 2019 Chitralekha Gupta, Emre Yilmaz, Haizhou Li

In this work, we propose (1) using additional speech and music-informed features and (2) adapting the acoustic models trained on a large amount of solo singing vocals towards polyphonic music using a small amount of in-domain data.

Large-Scale Speaker Diarization of Radio Broadcast Archives

no code implementations19 Jun 2019 Emre Yilmaz, Adem Derinel, Zhou Kun, Henk van den Heuvel, Niko Brummer, Haizhou Li, David A. van Leeuwen

This paper describes our initial efforts to build a large-scale speaker diarization (SD) and identification system on a recently digitized radio broadcast archive from the Netherlands, which has more than 6500 audio tapes with 3000 hours of Frisian-Dutch speech recorded between 1950 and 2016.

speaker-diarization Speaker Diarization +1

Multi-Graph Decoding for Code-Switching ASR

no code implementations18 Jun 2019 Emre Yilmaz, Samuel Cohen, Xianghu Yue, David van Leeuwen, Haizhou Li

This archive contains recordings with monolingual Frisian and Dutch speech segments as well as Frisian-Dutch CS speech, hence the recognition performance on monolingual segments is also vital for accurate transcriptions.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +2

VQVAE Unsupervised Unit Discovery and Multi-scale Code2Spec Inverter for Zerospeech Challenge 2019

no code implementations27 May 2019 Andros Tjandra, Berrak Sisman, Mingyang Zhang, Sakriani Sakti, Haizhou Li, Satoshi Nakamura

Our proposed approach significantly improved the intelligibility (in CER), the MOS, and discrimination ABX scores compared to the official ZeroSpeech 2019 baseline or even the topline.

Joint training framework for text-to-speech and voice conversion using multi-source Tacotron and WaveNet

no code implementations29 Mar 2019 Mingyang Zhang, Xin Wang, Fuming Fang, Haizhou Li, Junichi Yamagishi

We propose using an extended model architecture of Tacotron, that is a multi-source sequence-to-sequence model with a dual attention mechanism as the shared model for both the TTS and VC tasks.

Speech Synthesis Voice Conversion

Deep Spiking Neural Network with Spike Count based Learning Rule

no code implementations15 Feb 2019 Jibin Wu, Yansong Chua, Malu Zhang, Qu Yang, Guoqi Li, Haizhou Li

Deep spiking neural networks (SNNs) support asynchronous event-driven computation and massive parallelism, and demonstrate great potential to improve the energy efficiency of their synchronous analog counterparts.

On the End-to-End Solution to Mandarin-English Code-switching Speech Recognition

1 code implementation1 Nov 2018 Zhiping Zeng, Yerbolat Khassanov, Van Tung Pham, Hai-Hua Xu, Eng Siong Chng, Haizhou Li

Code-switching (CS) refers to a linguistic phenomenon where a speaker uses different languages in an utterance or between alternating utterances.

Data Augmentation Language Identification +3

Generative x-vectors for text-independent speaker verification

no code implementations17 Sep 2018 Longting Xu, Rohan Kumar Das, Emre Yilmaz, Jichen Yang, Haizhou Li

Speaker verification (SV) systems using deep neural network embeddings, the so-called x-vector systems, are becoming popular due to their performance, which is superior to that of i-vector systems.

Text-Independent Speaker Verification

Is Neuromorphic MNIST neuromorphic? Analyzing the discriminative power of neuromorphic datasets in the time domain

no code implementations3 Jul 2018 Laxmi R. Iyer, Yansong Chua, Haizhou Li

We also use this SNN for further experiments on N-MNIST to show that rate based SNNs perform better, and precise spike timings are not important in N-MNIST.

Report of NEWS 2018 Named Entity Transliteration Shared Task

no code implementations WS 2018 Nancy Chen, Rafael E. Banchs, Min Zhang, Xiangyu Duan, Haizhou Li

This report presents the results from the Named Entity Transliteration Shared Task conducted as part of The Seventh Named Entities Workshop (NEWS 2018) held at ACL 2018 in Melbourne, Australia.

Information Retrieval Transliteration

Learning Acoustic Word Embeddings with Temporal Context for Query-by-Example Speech Search

no code implementations10 Jun 2018 Yougen Yuan, Cheung-Chi Leung, Lei Xie, Hongjie Chen, Bin Ma, Haizhou Li

We also find that it is important to have sufficient speech segment pairs to train the deep CNN for effective acoustic word embeddings.

Dynamic Time Warping Word Embeddings

A Multi-State Diagnosis and Prognosis Framework with Feature Learning for Tool Condition Monitoring

no code implementations30 Apr 2018 Chong Zhang, Geok Soon Hong, Jun-Hong Zhou, Kay Chen Tan, Haizhou Li, Huan Xu, Jihoon Hong, Hian-Leng Chan

For fault diagnosis, a cost-sensitive deep belief network (namely ECS-DBN) is applied to deal with the imbalanced data problem for tool state estimation.

Representation Learning

A Cost-Sensitive Deep Belief Network for Imbalanced Classification

no code implementations28 Apr 2018 Chong Zhang, Kay Chen Tan, Haizhou Li, Geok Soon Hong

Adaptive differential evolution optimization is implemented as the optimization algorithm that automatically updates its corresponding parameters without the need of prior domain knowledge.

Classification General Classification +1
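The differential-evolution component can be sketched as the classic DE/rand/1/bin generation step; in the paper the `F` and `CR` parameters are adapted automatically, whereas here they are fixed for illustration.

```python
import numpy as np

def de_step(pop, fitness_fn, F=0.8, CR=0.9, rng=None):
    # One generation of differential evolution: for each target vector,
    # mutate (a + F*(b - c)), binomially cross over, keep the better trial.
    rng = rng or np.random.default_rng(0)
    n, d = pop.shape
    new = pop.copy()
    for i in range(n):
        a, b, c = pop[rng.choice([j for j in range(n) if j != i], 3, replace=False)]
        mutant = a + F * (b - c)
        cross = rng.random(d) < CR
        cross[rng.integers(d)] = True          # ensure at least one gene crosses over
        trial = np.where(cross, mutant, pop[i])
        if fitness_fn(trial) < fitness_fn(pop[i]):  # greedy selection
            new[i] = trial
    return new

rng = np.random.default_rng(3)
pop = rng.standard_normal((8, 4))
sphere = lambda x: float((x ** 2).sum())       # toy objective to minimize
next_pop = de_step(pop, sphere, rng=np.random.default_rng(4))
best_before = min(sphere(x) for x in pop)
best_after = min(sphere(x) for x in next_pop)
print(best_after <= best_before)  # True: selection never worsens the best member
```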

Statistical Parametric Speech Synthesis Using Generative Adversarial Networks Under A Multi-task Learning Framework

4 code implementations6 Jul 2017 Shan Yang, Lei Xie, Xiao Chen, Xiaoyan Lou, Xuan Zhu, Dong-Yan Huang, Haizhou Li

In this paper, we aim at improving the performance of synthesized speech in statistical parametric speech synthesis (SPSS) based on a generative adversarial network (GAN).


Spoofing detection under noisy conditions: a preliminary investigation and an initial database

no code implementations9 Feb 2016 Xiaohai Tian, Zhizheng Wu, Xiong Xiao, Eng Siong Chng, Haizhou Li

To simulate the real-life scenarios, we perform a preliminary investigation of spoofing detection under additive noisy conditions, and also describe an initial database for this task.

Speaker Verification

Fantastic 4 system for NIST 2015 Language Recognition Evaluation

no code implementations5 Feb 2016 Kong Aik Lee, Ville Hautamäki, Anthony Larcher, Wei Rao, Hanwu Sun, Trung Hieu Nguyen, Guangsen Wang, Aleksandr Sizov, Ivan Kukanov, Amir Poorjam, Trung Ngo Trong, Xiong Xiao, Cheng-Lin Xu, Hai-Hua Xu, Bin Ma, Haizhou Li, Sylvain Meignier

This article describes the systems jointly submitted by the Institute for Infocomm Research (I$^2$R), the Laboratoire d'Informatique de l'Université du Maine (LIUM), Nanyang Technological University (NTU) and the University of Eastern Finland (UEF) for the 2015 NIST Language Recognition Evaluation (LRE).

