Search Results for author: Haizhou Li

Found 195 papers, 77 papers with code

Target Speech Diarization with Multimodal Prompts

no code implementations11 Jun 2024 Yidi Jiang, Ruijie Tao, Zhengyang Chen, Yanmin Qian, Haizhou Li

Extending to target speech diarization, we detect "when target event occurs" according to the semantic characteristics of speech.

Speaker Diarization

Autoregressive Diffusion Transformer for Text-to-Speech Synthesis

no code implementations8 Jun 2024 Zhijun Liu, Shuai Wang, Sho Inoue, Qibing Bai, Haizhou Li

Our experiments reveal that employing Integral Kullback-Leibler (IKL) divergence for distillation at each autoregressive step significantly boosts the perceived quality of the samples.

Audio Generation Decoder +2

How Do Neural Spoofing Countermeasures Detect Partially Spoofed Audio?

no code implementations4 Jun 2024 Tianchi Liu, Lin Zhang, Rohan Kumar Das, Yi Ma, Ruijie Tao, Haizhou Li

Recent work shows that countermeasures (CMs) trained on partially spoofed audio can effectively detect such spoofing.

Decision Making Sentence

TS-Align: A Teacher-Student Collaborative Framework for Scalable Iterative Finetuning of Large Language Models

no code implementations30 May 2024 Chen Zhang, Chengguang Tang, Dading Chong, Ke Shi, Guohua Tang, Feng Jiang, Haizhou Li

This automatic mining process is efficiently accomplished through the collaboration between a large-scale teacher model and a small-scale student model.

Instruction Following

Unveiling the Achilles' Heel of NLG Evaluators: A Unified Adversarial Framework Driven by Large Language Models

no code implementations23 May 2024 Yiming Chen, Chen Zhang, Danqing Luo, Luis Fernando D'Haro, Robby T. Tan, Haizhou Li

Specifically, inspired by the recent success of large language models (LLMs) in text generation and evaluation, we adopt strong LLMs as both the data generator and gold evaluator.

NLG Evaluation Text Generation

Mamba in Speech: Towards an Alternative to Self-Attention

no code implementations21 May 2024 Xiangyu Zhang, Qiquan Zhang, Hexin Liu, Tianyi Xiao, Xinyuan Qian, Beena Ahmed, Eliathamby Ambikairajah, Haizhou Li, Julien Epps

Moreover, experiments demonstrate the effectiveness of BiMamba as an alternative to the self-attention module in Transformer and its derivatives, particularly for the semantic-aware task.

Speech Enhancement Speech Recognition +1

Incorporating External Knowledge and Goal Guidance for LLM-based Conversational Recommender Systems

no code implementations3 May 2024 Chuang Li, Yang Deng, Hengchang Hu, Min-Yen Kan, Haizhou Li

This paper aims to efficiently enable large language models (LLMs) to use external knowledge and goal guidance in conversational recommender system (CRS) tasks.

Informativeness Recommendation Systems

Audio-Visual Target Speaker Extraction with Reverse Selective Auditory Attention

no code implementations29 Apr 2024 Ruijie Tao, Xinyuan Qian, Yidi Jiang, Junjie Li, Jiadong Wang, Haizhou Li

To this end, we propose a novel reverse selective auditory attention mechanism, which can suppress interference speakers and non-speech signals to avoid incorrect speaker extraction.

Target Speaker Extraction

Voice Conversion Augmentation for Speaker Recognition on Defective Datasets

no code implementations1 Apr 2024 Ruijie Tao, Zhan Shi, Yidi Jiang, Tianchi Liu, Haizhou Li

Our experimental results on three constructed datasets demonstrate that VCA-NN effectively mitigates these dataset problems, providing a new direction for handling speaker recognition problems from the data perspective.

Speaker Recognition Voice Conversion

CrossTune: Black-Box Few-Shot Classification with Label Enhancement

no code implementations19 Mar 2024 Danqing Luo, Chen Zhang, Yan Zhang, Haizhou Li

Training or finetuning large-scale language models (LLMs) requires substantial computation resources, motivating recent efforts to explore parameter-efficient adaptation to downstream tasks.

Few-Shot Text Classification In-Context Learning +2

Apollo: An Lightweight Multilingual Medical LLM towards Democratizing Medical AI to 6B People

1 code implementation6 Mar 2024 Xidong Wang, Nuo Chen, Junyin Chen, Yan Hu, Yidong Wang, Xiangbo Wu, Anningzhe Gao, Xiang Wan, Haizhou Li, Benyou Wang

Despite the vast repository of global medical knowledge predominantly being in English, local languages are crucial for delivering tailored healthcare services, particularly in areas with limited medical resources.

Event-Driven Learning for Spiking Neural Networks

no code implementations1 Mar 2024 Wenjie Wei, Malu Zhang, Jilin Zhang, Ammar Belatreche, Jibin Wu, Zijing Xu, Xuerui Qiu, Hong Chen, Yang Yang, Haizhou Li

Specifically, we introduce two novel event-driven learning methods: the spike-timing-dependent event-driven (STD-ED) and membrane-potential-dependent event-driven (MPD-ED) algorithms.

Text-guided HuBERT: Self-Supervised Speech Pre-training via Generative Adversarial Networks

no code implementations24 Feb 2024 Duo Ma, Xianghu Yue, Junyi Ao, Xiaoxue Gao, Haizhou Li

In this paper, we investigate a new way to pre-train such a joint speech-text model to learn enhanced speech representations and benefit various speech-related downstream tasks.

Pseudo Label Self-Supervised Learning

Computation and Parameter Efficient Multi-Modal Fusion Transformer for Cued Speech Recognition

no code implementations31 Jan 2024 Lei Liu, Li Liu, Haizhou Li

Cued Speech (CS) is a pure visual coding method used by hearing-impaired people that combines lip reading with several specific hand shapes to make the spoken language visible.

Lip Reading Speech Recognition +1

LitE-SNN: Designing Lightweight and Efficient Spiking Neural Network through Spatial-Temporal Compressive Network Search and Joint Optimization

no code implementations26 Jan 2024 Qianhui Liu, Jiaqi Yan, Malu Zhang, Gang Pan, Haizhou Li

Spiking Neural Networks (SNNs) mimic the information-processing mechanisms of the human brain and are highly energy-efficient, making them well-suited for low-power edge devices.

Quantization

CoAVT: A Cognition-Inspired Unified Audio-Visual-Text Pre-Training Model for Multimodal Processing

no code implementations22 Jan 2024 Xianghu Yue, Xiaohai Tian, Lu Lu, Malu Zhang, Zhizheng Wu, Haizhou Li

To bridge the gap between modalities, CoAVT employs a query encoder, which contains a set of learnable query embeddings, and extracts the most informative audiovisual features of the corresponding text.

AudioCaps Audio-Visual Synchronization +4

Bridging Research and Readers: A Multi-Modal Automated Academic Papers Interpretation System

1 code implementation17 Jan 2024 Feng Jiang, Kuang Wang, Haizhou Li

In the contemporary information era, significantly accelerated by the advent of Large-scale Language Models, the proliferation of scientific literature is reaching unprecedented levels.

The NUS-HLT System for ICASSP2024 ICMC-ASR Grand Challenge

no code implementations26 Dec 2023 Meng Ge, Yizhou Peng, Yidi Jiang, Jingru Lin, Junyi Ao, Mehmet Sinan Yildirim, Shuai Wang, Haizhou Li, Mengling Feng

This paper summarizes our team's efforts in both tracks of the ICMC-ASR Challenge for in-car multi-channel automatic speech recognition.

Automatic Speech Recognition Data Augmentation +2

A Comprehensive Analysis of the Effectiveness of Large Language Models as Automatic Dialogue Evaluators

1 code implementation24 Dec 2023 Chen Zhang, Luis Fernando D'Haro, Yiming Chen, Malu Zhang, Haizhou Li

Yet, existing works on utilizing LLMs for automatic dialogue evaluation are limited in their scope in terms of the number of meta-evaluation datasets, mode of evaluation, coverage of LLMs, etc.

Dialogue Evaluation

Emotion Rendering for Conversational Speech Synthesis with Heterogeneous Graph-Based Context Modeling

1 code implementation19 Dec 2023 Rui Liu, Yifan Hu, Yi Ren, Xiang Yin, Haizhou Li

Conversational Speech Synthesis (CSS) aims to accurately express an utterance with the appropriate prosody and emotional inflection within a conversational setting.

Contrastive Learning Speech Synthesis

Golden Gemini is All You Need: Finding the Sweet Spots for Speaker Verification

1 code implementation6 Dec 2023 Tianchi Liu, Kong Aik Lee, Qiongqiong Wang, Haizhou Li

We represent the stride space on a trellis diagram, conduct a systematic study on the impact of temporal and frequency resolutions on performance, and further identify two optimal points, namely Golden Gemini, which serve as a guiding principle for designing 2D ResNet-based speaker verification models.

Speaker Verification

HuatuoGPT-II, One-stage Training for Medical Adaption of LLMs

1 code implementation16 Nov 2023 Junying Chen, Xidong Wang, Anningzhe Gao, Feng Jiang, Shunian Chen, Hongbo Zhang, Dingjie Song, Wenya Xie, Chuyi Kong, Jianquan Li, Xiang Wan, Haizhou Li, Benyou Wang

We validate the new protocol in the domains where proprietary LLMs like ChatGPT perform relatively poorly, such as Traditional Chinese Medicine.

Domain Adaptation Language Modelling

How Well Do Text Embedding Models Understand Syntax?

1 code implementation14 Nov 2023 Yan Zhang, Zhaopeng Feng, Zhiyang Teng, Zuozhu Liu, Haizhou Li

Text embedding models have significantly contributed to advancements in natural language processing by adeptly capturing semantic properties of textual data.

Selective HuBERT: Self-Supervised Pre-Training for Target Speaker in Clean and Mixture Speech

no code implementations8 Nov 2023 Jingru Lin, Meng Ge, Wupeng Wang, Haizhou Li, Mengling Feng

Self-supervised pre-trained speech models have been shown to be effective for various downstream speech processing tasks.

LC-TTFS: Towards Lossless Network Conversion for Spiking Neural Networks with TTFS Coding

no code implementations23 Oct 2023 Qu Yang, Malu Zhang, Jibin Wu, Kay Chen Tan, Haizhou Li

With TTFS coding, we can achieve computational savings of up to orders of magnitude over ANNs and other rate-based SNNs.

Edge-computing Image Classification +2

Prompt-driven Target Speech Diarization

no code implementations23 Oct 2023 Yidi Jiang, Zhengyang Chen, Ruijie Tao, Liqun Deng, Yanmin Qian, Haizhou Li

We introduce a novel task named "target speech diarization", which seeks to determine "when target event occurred" within an audio signal.

Action Detection Activity Detection

Quantifying Self-diagnostic Atomic Knowledge in Chinese Medical Foundation Model: A Computational Analysis

1 code implementation18 Oct 2023 Yaxin Fan, Feng Jiang, Benyou Wang, Peifeng Li, Haizhou Li

Recent studies have primarily focused on the quality of FMs as evaluated by GPT-4 or on their ability to pass medical exams; no studies have quantified the extent of self-diagnostic atomic knowledge stored in FMs' memory, which is the basis for foundation models to provide factual and reliable suggestions.

Instruction Following

LocSelect: Target Speaker Localization with an Auditory Selective Hearing Mechanism

no code implementations16 Oct 2023 Yu Chen, Xinyuan Qian, Zexu Pan, Kainan Chen, Haizhou Li

The prevailing noise-resistant and reverberation-resistant localization algorithms primarily emphasize separating and providing directional output for each speaker in multi-speaker scenarios, without associating the outputs with speaker identities.

UNO-DST: Leveraging Unlabelled Data in Zero-Shot Dialogue State Tracking

1 code implementation16 Oct 2023 Chuang Li, Yan Zhang, Min-Yen Kan, Haizhou Li

Previous zero-shot dialogue state tracking (DST) methods only apply transfer learning, ignoring unlabelled data in the target domain.

Dialogue State Tracking Transfer Learning

AceGPT, Localizing Large Language Models in Arabic

1 code implementation21 Sep 2023 Huang Huang, Fei Yu, Jianqing Zhu, Xuening Sun, Hao Cheng, Dingjie Song, Zhihong Chen, Abdulmohsen Alharthi, Bang An, Juncai He, Ziche Liu, Zhiyi Zhang, Junying Chen, Jianquan Li, Benyou Wang, Lian Zhang, Ruoyu Sun, Xiang Wan, Haizhou Li, Jinchao Xu

This paper is devoted to the development of a localized Large Language Model (LLM) specifically for Arabic, a language imbued with unique cultural characteristics inadequately addressed by current mainstream models.

Instruction Following Language Modelling +2

Leveraging In-the-Wild Data for Effective Self-Supervised Pretraining in Speaker Recognition

1 code implementation21 Sep 2023 Shuai Wang, Qibing Bai, Qi Liu, Jianwei Yu, Zhengyang Chen, Bing Han, Yanmin Qian, Haizhou Li

Current speaker recognition systems primarily rely on supervised approaches, constrained by the scale of labeled datasets.

Speaker Recognition

FluentEditor: Text-based Speech Editing by Considering Acoustic and Prosody Consistency

1 code implementation21 Sep 2023 Rui Liu, Jiatian Xi, Ziyue Jiang, Haizhou Li

Text-based speech editing (TSE) techniques are designed to enable users to edit the output audio by modifying the input text transcript instead of the audio itself.

Emotion-Aware Prosodic Phrasing for Expressive Text-to-Speech

1 code implementation21 Sep 2023 Rui Liu, Bin Liu, Haizhou Li

Prosodic phrasing is crucial to the naturalness and intelligibility of end-to-end Text-to-Speech (TTS).

Spiking-LEAF: A Learnable Auditory front-end for Spiking Neural Networks

no code implementations18 Sep 2023 Zeyang Song, Jibin Wu, Malu Zhang, Mike Zheng Shou, Haizhou Li

Brain-inspired spiking neural networks (SNNs) have demonstrated great potential for temporal signal processing.

Keyword Spotting Speaker Identification

A Conversation is Worth A Thousand Recommendations: A Survey of Holistic Conversational Recommender Systems

1 code implementation14 Sep 2023 Chuang Li, Hengchang Hu, Yan Zhang, Min-Yen Kan, Haizhou Li

However, not all CRS approaches use human conversations as their source of interaction data; the majority of prior CRS work simulates interactions by exchanging entity-level information.

Language Modelling Recommendation Systems

EEG-Derived Voice Signature for Attended Speaker Detection

no code implementations28 Aug 2023 Hongxu Zhu, Siqi Cai, Yidi Jiang, Qiquan Zhang, Haizhou Li

Conclusion: We conclude that it is possible to derive the attended speaker's voice signature from the EEG signals so as to detect the attended speaker in a listening brain.

EEG

TC-LIF: A Two-Compartment Spiking Neuron Model for Long-Term Sequential Modelling

1 code implementation25 Aug 2023 Shimin Zhang, Qu Yang, Chenxiang Ma, Jibin Wu, Haizhou Li, Kay Chen Tan

The identification of sensory cues associated with potential opportunities and dangers is frequently complicated by unrelated events that separate useful cues by long delays.

CMB: A Comprehensive Medical Benchmark in Chinese

1 code implementation17 Aug 2023 Xidong Wang, Guiming Hardy Chen, Dingjie Song, Zhiyi Zhang, Zhihong Chen, Qingying Xiao, Feng Jiang, Jianquan Li, Xiang Wan, Benyou Wang, Haizhou Li

We hope this benchmark provides first-hand experience with existing LLMs for medicine and also facilitates the widespread adoption and enhancement of medical LLMs within China.

GrammarGPT: Exploring Open-Source LLMs for Native Chinese Grammatical Error Correction with Supervised Fine-Tuning

1 code implementation26 Jul 2023 Yaxin Fan, Feng Jiang, Peifeng Li, Haizhou Li

Although the model's parameter count is 20x larger than that of the SOTA baseline, the amount of data required for instruction tuning is 1200x smaller, illustrating the potential of open-source LLMs for native CGEC.

Grammatical Error Correction

NeuroHeed: Neuro-Steered Speaker Extraction using EEG Signals

no code implementations26 Jul 2023 Zexu Pan, Marvin Borsdorf, Siqi Cai, Tanja Schultz, Haizhou Li

We propose both an offline and an online NeuroHeed, with the latter designed for real-time inference.

EEG

Is ChatGPT Involved in Texts? Measure the Polish Ratio to Detect ChatGPT-Generated Text

2 code implementations21 Jul 2023 Lingyi Yang, Feng Jiang, Haizhou Li

Despite this, most previous studies have been predominantly geared towards creating detectors that differentiate between purely ChatGPT-generated texts and human-authored texts.

Misinformation Text Generation

Self-Supervised Acoustic Word Embedding Learning via Correspondence Transformer Encoder

no code implementations19 Jul 2023 Jingru Lin, Xianghu Yue, Junyi Ao, Haizhou Li

We train the model based on the idea that different realisations of the same word should be close in the underlying embedding space.

Word Embeddings

Long Short-term Memory with Two-Compartment Spiking Neuron

no code implementations14 Jul 2023 Shimin Zhang, Qu Yang, Chenxiang Ma, Jibin Wu, Haizhou Li, Kay Chen Tan

The identification of sensory cues associated with potential opportunities and dangers is frequently complicated by unrelated events that separate useful cues by long delays.

A Hybrid Neural Coding Approach for Pattern Recognition with Spiking Neural Networks

1 code implementation26 May 2023 Xinyi Chen, Qu Yang, Jibin Wu, Haizhou Li, Kay Chen Tan

As an initial exploration in this direction, we propose a hybrid neural coding and learning framework, which encompasses a neural coding zoo with diverse neural coding schemes discovered in neuroscience.

Image Classification

Betray Oneself: A Novel Audio DeepFake Detection Model via Mono-to-Stereo Conversion

1 code implementation25 May 2023 Rui Liu, Jinhua Zhang, Guanglai Gao, Haizhou Li

In this paper, we propose a novel ADD model, termed M2S-ADD, that attempts to discover audio authenticity cues during the mono-to-stereo conversion process.

Audio Deepfake Detection DeepFake Detection +2

Advancing Topic Segmentation and Outline Generation in Chinese Texts: The Paragraph-level Topic Representation, Corpus, and Benchmark

1 code implementation24 May 2023 Feng Jiang, Weihao Liu, Xiaomin Chu, Peifeng Li, Qiaoming Zhu, Haizhou Li

Topic segmentation and outline generation strive to divide a document into coherent topic sections and generate corresponding subheadings, unveiling the discourse topic structure of a document.

Discourse Parsing Information Retrieval +2

HuatuoGPT, towards Taming Language Model to Be a Doctor

1 code implementation24 May 2023 Hongbo Zhang, Junying Chen, Feng Jiang, Fei Yu, Zhihong Chen, Jianquan Li, Guiming Chen, Xiangbo Wu, Zhiyi Zhang, Qingying Xiao, Xiang Wan, Benyou Wang, Haizhou Li

Experimental results demonstrate that HuatuoGPT achieves state-of-the-art results in performing medical consultation among open-source LLMs in GPT-4 evaluation, human evaluation, and medical benchmark datasets.

Language Modelling Large Language Model

Topic-driven Distant Supervision Framework for Macro-level Discourse Parsing

no code implementations23 May 2023 Feng Jiang, Longwang He, Peifeng Li, Qiaoming Zhu, Haizhou Li

Discourse parsing, the task of analyzing the internal rhetorical structure of texts, is a challenging problem in natural language processing.

Discourse Parsing Transfer Learning

Target Active Speaker Detection with Audio-visual Cues

1 code implementation22 May 2023 Yidi Jiang, Ruijie Tao, Zexu Pan, Haizhou Li

To benefit from both facial cue and reference speech, we propose the Target Speaker TalkNet (TS-TalkNet), which leverages a pre-enrolled speaker embedding to complement the audio-visual synchronization cue in detecting whether the target speaker is speaking.

Audio-Visual Synchronization

Dynamic Transformers Provide a False Sense of Efficiency

1 code implementation20 May 2023 Yiming Chen, Simin Chen, Zexin Li, Wei Yang, Cong Liu, Robby T. Tan, Haizhou Li

Despite much success in natural language processing (NLP), pre-trained language models typically lead to a high computational cost during inference.

Adversarial Attack

Uncovering the Potential of ChatGPT for Discourse Analysis in Dialogue: An Empirical Study

1 code implementation15 May 2023 Yaxin Fan, Feng Jiang, Peifeng Li, Haizhou Li

In this paper, we aim to systematically inspect ChatGPT's performance in two discourse analysis tasks: topic segmentation and discourse parsing, focusing on its deep semantic understanding of linear and hierarchical discourse structures underlying dialogue.

Discourse Parsing In-Context Learning +2

Accented Text-to-Speech Synthesis with Limited Data

no code implementations8 May 2023 Xuehao Zhou, Mingyang Zhang, Yi Zhou, Zhizheng Wu, Haizhou Li

Both objective and subjective evaluation results show that the accented TTS front-end fine-tuned with a small accented phonetic lexicon (5k words) effectively handles the phonetic variation of accents, while the accented TTS acoustic model fine-tuned with a limited amount of accented speech data (approximately 3 minutes) effectively improves the prosodic rendering including pitch and duration.

Speech Synthesis Text-To-Speech Synthesis

Seeing What You Said: Talking Face Generation Guided by a Lip Reading Expert

1 code implementation CVPR 2023 Jiadong Wang, Xinyuan Qian, Malu Zhang, Robby T. Tan, Haizhou Li

To address the problem, we propose using a lip-reading expert to improve the intelligibility of the generated lip regions by penalizing the incorrect generation results.

Contrastive Learning Lip Reading +1

TTS-Guided Training for Accent Conversion Without Parallel Data

no code implementations20 Dec 2022 Yi Zhou, Zhizheng Wu, Mingyang Zhang, Xiaohai Tian, Haizhou Li

Specifically, a text-to-speech (TTS) system is first pretrained with target-accented speech data.

Decoder

PoE: a Panel of Experts for Generalized Automatic Dialogue Assessment

no code implementations18 Dec 2022 Chen Zhang, Luis Fernando D'Haro, Qiquan Zhang, Thomas Friedrichs, Haizhou Li

To tackle the multi-domain dialogue evaluation task, we propose a Panel of Experts (PoE), a multitask network that consists of a shared transformer encoder and a collection of lightweight adapters.

Data Augmentation Dialogue Evaluation +4

Relational Sentence Embedding for Flexible Semantic Matching

1 code implementation17 Dec 2022 Bin Wang, Haizhou Li

We present Relational Sentence Embedding (RSE), a new paradigm to further discover the potential of sentence embeddings.

Relation Semantic Textual Similarity +3

Minimizing the Accumulated Trajectory Error to Improve Dataset Distillation

3 code implementations CVPR 2023 Jiawei Du, Yidi Jiang, Vincent Y. F. Tan, Joey Tianyi Zhou, Haizhou Li

To mitigate the adverse impact of this accumulated trajectory error, we propose a novel approach that encourages the optimization algorithm to seek a flat trajectory.

Dataset Distillation - 1IPC Neural Architecture Search

Self-Transriber: Few-shot Lyrics Transcription with Self-training

no code implementations18 Nov 2022 Xiaoxue Gao, Xianghu Yue, Haizhou Li

The current lyrics transcription approaches heavily rely on supervised learning with labeled data, but such data are scarce and manual labeling of singing is expensive.

Few-Shot Learning

Generate, Discriminate and Contrast: A Semi-Supervised Sentence Representation Learning Framework

1 code implementation30 Oct 2022 Yiming Chen, Yan Zhang, Bin Wang, Zuozhu Liu, Haizhou Li

Most sentence embedding techniques heavily rely on expensive human-annotated sentence pairs as the supervised signals.

Domain Adaptation Sentence +3

token2vec: A Joint Self-Supervised Pre-training Framework Using Unpaired Speech and Text

no code implementations30 Oct 2022 Xianghu Yue, Junyi Ao, Xiaoxue Gao, Haizhou Li

Due to the distinct characteristics of the speech and text modalities, where speech is continuous while text is discrete, we first discretize speech into a sequence of discrete speech tokens to solve the modality mismatch problem.

Intent Classification +1

Speaker recognition with two-step multi-modal deep cleansing

1 code implementation28 Oct 2022 Ruijie Tao, Kong Aik Lee, Zhan Shi, Haizhou Li

However, noisy samples (i.e., with wrong labels) in the training set induce confusion and cause the network to learn the incorrect representation.

Representation Learning Speaker Recognition +1

FCTalker: Fine and Coarse Grained Context Modeling for Expressive Conversational Speech Synthesis

1 code implementation27 Oct 2022 Yifan Hu, Rui Liu, Guanglai Gao, Haizhou Li

Therefore, we propose a novel expressive conversational TTS model, termed FCTalker, that learns fine- and coarse-grained context dependencies simultaneously during speech generation.

Speech Synthesis

Explicit Intensity Control for Accented Text-to-speech

no code implementations27 Oct 2022 Rui Liu, Haolin Zuo, De Hu, Guanglai Gao, Haizhou Li

Accented text-to-speech (TTS) synthesis seeks to generate speech with an accent (L2) as a variant of the standard version (L1).

Speech Recognition

Self-Supervised Training of Speaker Encoder with Multi-Modal Diverse Positive Pairs

no code implementations27 Oct 2022 Ruijie Tao, Kong Aik Lee, Rohan Kumar Das, Ville Hautamäki, Haizhou Li

We study a novel neural architecture and its training strategies of speaker encoder for speaker recognition without using any identity labels.

Contrastive Learning Self-Supervised Learning +1

FineD-Eval: Fine-grained Automatic Dialogue-Level Evaluation

2 code implementations25 Oct 2022 Chen Zhang, Luis Fernando D'Haro, Qiquan Zhang, Thomas Friedrichs, Haizhou Li

Recent model-based reference-free metrics for open-domain dialogue evaluation exhibit promising correlations with human judgment.

Dialogue Evaluation

Mixed-EVC: Mixed Emotion Synthesis and Control in Voice Conversion

no code implementations25 Oct 2022 Kun Zhou, Berrak Sisman, Carlos Busso, Bin Ma, Haizhou Li

To achieve this, we propose a novel EVC framework, Mixed-EVC, which only leverages discrete emotion training labels.

Attribute Voice Conversion

Analyzing and Evaluating Faithfulness in Dialogue Summarization

1 code implementation21 Oct 2022 Bin Wang, Chen Zhang, Yan Zhang, Yiming Chen, Haizhou Li

The factual correctness of summaries has the highest priority before practical applications.

Text Summarization

Training Spiking Neural Networks with Local Tandem Learning

1 code implementation10 Oct 2022 Qu Yang, Jibin Wu, Malu Zhang, Yansong Chua, Xinchao Wang, Haizhou Li

The LTL rule follows the teacher-student learning approach by mimicking the intermediate feature representations of a pre-trained ANN.

The Kriston AI System for the VoxCeleb Speaker Recognition Challenge 2022

no code implementations23 Sep 2022 Qutang Cai, Guoqiang Hong, Zhijian Ye, Ximin Li, Haizhou Li

This technical report describes our system for tracks 1, 2, and 4 of the VoxCeleb Speaker Recognition Challenge 2022 (VoxSRC-22).

Action Detection Activity Detection +2

Controllable Accented Text-to-Speech Synthesis

no code implementations22 Sep 2022 Rui Liu, Berrak Sisman, Guanglai Gao, Haizhou Li

Accented TTS synthesis is challenging as L2 differs from L1 in terms of both phonetic rendering and prosody pattern.

Speech Synthesis Text-To-Speech Synthesis

Speech Synthesis with Mixed Emotions

no code implementations11 Aug 2022 Kun Zhou, Berrak Sisman, Rajib Rana, B. W. Schuller, Haizhou Li

We then incorporate our formulation into a sequence-to-sequence emotional text-to-speech framework.

Attribute Emotional Speech Synthesis

PoLyScriber: Integrated Fine-tuning of Extractor and Lyrics Transcriber for Polyphonic Music

no code implementations15 Jul 2022 Xiaoxue Gao, Chitralekha Gupta, Haizhou Li

Lyrics transcription of polyphonic music is challenging as the background music affects lyrics intelligibility.

Accurate Emotion Strength Assessment for Seen and Unseen Speech Based on Data-Driven Deep Learning

1 code implementation15 Jun 2022 Rui Liu, Berrak Sisman, Björn Schuller, Guanglai Gao, Haizhou Li

In this paper, we propose a data-driven deep learning model, i.e., StrengthNet, to improve the generalization of emotion strength assessment for seen and unseen speech.

Attribute Emotion Classification +2

M3ED: Multi-modal Multi-scene Multi-label Emotional Dialogue Database

1 code implementation ACL 2022 Jinming Zhao, Tenggan Zhang, Jingwen Hu, Yuchen Liu, Qin Jin, Xinchao Wang, Haizhou Li

In this work, we propose a Multi-modal Multi-scene Multi-label Emotional Dialogue dataset, M3ED, which contains 990 dyadic emotional dialogues from 56 different TV series, a total of 9,082 turns and 24,449 utterances.

Cultural Vocal Bursts Intensity Prediction Emotion Recognition

Music-robust Automatic Lyrics Transcription of Polyphonic Music

1 code implementation7 Apr 2022 Xiaoxue Gao, Chitralekha Gupta, Haizhou Li

To improve the robustness of lyrics transcription to the background music, we propose a strategy of combining the features that emphasize the singing vocals, i.e., music-removed features that represent singing vocal extracted features, and the features that capture the singing vocals as well as the background music, i.e., music-present features.

Automatic Lyrics Transcription Language Modelling

Genre-conditioned Acoustic Models for Automatic Lyrics Transcription of Polyphonic Music

no code implementations7 Apr 2022 Xiaoxue Gao, Chitralekha Gupta, Haizhou Li

Lyrics transcription of polyphonic music is challenging not only because the singing vocals are corrupted by the background music, but also because the background music and the singing style vary across music genres, such as pop, metal, and hip hop, which affects lyrics intelligibility of the song in different ways.

Automatic Lyrics Transcription

Speaker Extraction with Co-Speech Gestures Cue

1 code implementation31 Mar 2022 Zexu Pan, Xinyuan Qian, Haizhou Li

Speaker extraction seeks to extract the clean speech of a target speaker from a multi-talker mixture speech.

Speech Separation

A Hybrid Continuity Loss to Reduce Over-Suppression for Time-domain Target Speaker Extraction

1 code implementation31 Mar 2022 Zexu Pan, Meng Ge, Haizhou Li

We propose a hybrid continuity loss function for time-domain speaker extraction algorithms to settle the over-suppression problem.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +2

LightHuBERT: Lightweight and Configurable Speech Representation Learning with Once-for-All Hidden-Unit BERT

1 code implementation29 Mar 2022 Rui Wang, Qibing Bai, Junyi Ao, Long Zhou, Zhixiang Xiong, Zhihua Wei, Yu Zhang, Tom Ko, Haizhou Li

LightHuBERT outperforms the original HuBERT on ASR and five SUPERB tasks with the HuBERT size, achieves comparable performance to the teacher model in most tasks with a reduction of 29% parameters, and obtains a $3.5\times$ compression ratio in three SUPERB tasks, e.g., automatic speaker verification, keyword spotting, and intent classification, with a slight accuracy loss.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +6

L-SpEx: Localized Target Speaker Extraction

1 code implementation21 Feb 2022 Meng Ge, Chenglin Xu, Longbiao Wang, Eng Siong Chng, Jianwu Dang, Haizhou Li

Speaker extraction aims to extract the target speaker's voice from a multi-talker speech mixture given an auxiliary reference utterance.

Target Speaker Extraction

MFA: TDNN with Multi-scale Frequency-channel Attention for Text-independent Speaker Verification with Short Utterances

no code implementations3 Feb 2022 Tianchi Liu, Rohan Kumar Das, Kong Aik Lee, Haizhou Li

The time delay neural network (TDNN) represents one of the state-of-the-art neural solutions to text-independent speaker verification.

Text-Independent Speaker Verification

MDD-Eval: Self-Training on Augmented Data for Multi-Domain Dialogue Evaluation

1 code implementation14 Dec 2021 Chen Zhang, Luis Fernando D'Haro, Thomas Friedrichs, Haizhou Li

Chatbots are designed to carry out human-like conversations across different domains, such as general chit-chat, knowledge exchange, and persona-grounded conversations.

Dialogue Evaluation

HLT-NUS SUBMISSION FOR 2020 NIST Conversational Telephone Speech SRE

3 code implementations12 Nov 2021 Rohan Kumar Das, Ruijie Tao, Haizhou Li

This work provides a brief description of Human Language Technology (HLT) Laboratory, National University of Singapore (NUS) system submission for 2020 NIST conversational telephone speech (CTS) speaker recognition evaluation (SRE).

Domain Adaptation Speaker Recognition

MEmoBERT: Pre-training Model with Prompt-based Learning for Multimodal Emotion Recognition

no code implementations 27 Oct 2021 Jinming Zhao, Ruichen Li, Qin Jin, Xinchao Wang, Haizhou Li

Multimodal emotion recognition study is hindered by the lack of labelled corpora in terms of scale and diversity, due to the high annotation cost and label ambiguity.

Emotion Classification Multimodal Emotion Recognition +1

Disentanglement of Emotional Style and Speaker Identity for Expressive Voice Conversion

no code implementations 20 Oct 2021 Zongyang Du, Berrak Sisman, Kun Zhou, Haizhou Li

Expressive voice conversion performs identity conversion for emotional speakers by jointly converting speaker identity and emotional style.

Disentanglement Voice Conversion

DeepA: A Deep Neural Analyzer For Speech And Singing Vocoding

no code implementations 13 Oct 2021 Sergey Nikonorov, Berrak Sisman, Mingyang Zhang, Haizhou Li

At the same time, as the deep neural analyzer is learnable, it is expected to be more accurate for signal reconstruction and manipulation, and generalizable from speech to singing.

Speech Synthesis Voice Conversion

Ego4D: Around the World in 3,000 Hours of Egocentric Video

8 code implementations CVPR 2022 Kristen Grauman, Andrew Westbury, Eugene Byrne, Zachary Chavis, Antonino Furnari, Rohit Girdhar, Jackson Hamburger, Hao Jiang, Miao Liu, Xingyu Liu, Miguel Martin, Tushar Nagarajan, Ilija Radosavovic, Santhosh Kumar Ramakrishnan, Fiona Ryan, Jayant Sharma, Michael Wray, Mengmeng Xu, Eric Zhongcong Xu, Chen Zhao, Siddhant Bansal, Dhruv Batra, Vincent Cartillier, Sean Crane, Tien Do, Morrie Doulaty, Akshay Erapalli, Christoph Feichtenhofer, Adriano Fragomeni, Qichen Fu, Abrham Gebreselasie, Cristina Gonzalez, James Hillis, Xuhua Huang, Yifei HUANG, Wenqi Jia, Weslie Khoo, Jachym Kolar, Satwik Kottur, Anurag Kumar, Federico Landini, Chao Li, Yanghao Li, Zhenqiang Li, Karttikeya Mangalam, Raghava Modhugu, Jonathan Munro, Tullie Murrell, Takumi Nishiyasu, Will Price, Paola Ruiz Puentes, Merey Ramazanova, Leda Sari, Kiran Somasundaram, Audrey Southerland, Yusuke Sugano, Ruijie Tao, Minh Vo, Yuchen Wang, Xindi Wu, Takuma Yagi, Ziwei Zhao, Yunyi Zhu, Pablo Arbelaez, David Crandall, Dima Damen, Giovanni Maria Farinella, Christian Fuegen, Bernard Ghanem, Vamsi Krishna Ithapu, C. V. Jawahar, Hanbyul Joo, Kris Kitani, Haizhou Li, Richard Newcombe, Aude Oliva, Hyun Soo Park, James M. Rehg, Yoichi Sato, Jianbo Shi, Mike Zheng Shou, Antonio Torralba, Lorenzo Torresani, Mingfei Yan, Jitendra Malik

We introduce Ego4D, a massive-scale egocentric video dataset and benchmark suite.

De-identification Ethics

StrengthNet: Deep Learning-based Emotion Strength Assessment for Emotional Speech Synthesis

1 code implementation 7 Oct 2021 Rui Liu, Berrak Sisman, Haizhou Li

The emotion strength of synthesized speech can be controlled flexibly using a strength descriptor, which is obtained by an emotion attribute ranking function.

Attribute Data Augmentation +2

VisualTTS: TTS with Accurate Lip-Speech Synchronization for Automatic Voice Over

no code implementations 7 Oct 2021 Junchen Lu, Berrak Sisman, Rui Liu, Mingyang Zhang, Haizhou Li

The proposed VisualTTS adopts two novel mechanisms that are 1) textual-visual attention, and 2) visual fusion strategy during acoustic decoding, which both contribute to forming accurate alignment between the input text content and lip motion in input lip sequence.

Speech Synthesis

Revisiting Self-Training for Few-Shot Learning of Language Model

1 code implementation EMNLP 2021 Yiming Chen, Yan Zhang, Chen Zhang, Grandee Lee, Ran Cheng, Haizhou Li

In this work, we revisit the self-training technique for language model fine-tuning and present a state-of-the-art prompt-based few-shot learner, SFLM.

Benchmarking Few-Shot Learning +6

PL-EESR: Perceptual Loss Based END-TO-END Robust Speaker Representation Extraction

1 code implementation 3 Oct 2021 Yi Ma, Kong Aik Lee, Ville Hautamaki, Haizhou Li

Speech enhancement aims to improve the perceptual quality of the speech signal by suppression of the background noise.

Speaker Identification Speaker Verification +1

USEV: Universal Speaker Extraction with Visual Cue

1 code implementation 30 Sep 2021 Zexu Pan, Meng Ge, Haizhou Li

The speaker extraction algorithm requires an auxiliary reference, such as a video recording or a pre-recorded speech, to form top-down auditory attention on the target speaker.

Exploring Teacher-Student Learning Approach for Multi-lingual Speech-to-Intent Classification

no code implementations 28 Sep 2021 Bidisha Sharma, Maulik Madhavi, Xuehao Zhou, Haizhou Li

In particular, we use synthesized speech generated from an English-Mandarin text corpus for analysis and training of a multi-lingual intent classification model.

Classification intent-classification +1

Knowledge Distillation from BERT Transformer to Speech Transformer for Intent Classification

1 code implementation 5 Aug 2021 Yidi Jiang, Bidisha Sharma, Maulik Madhavi, Haizhou Li

In this regard, we leverage the reliable and widely used bidirectional encoder representations from transformers (BERT) model as a language model and transfer the knowledge to build an acoustic model for intent classification using the speech.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +7

Serialized Multi-Layer Multi-Head Attention for Neural Speaker Embedding

no code implementations 14 Jul 2021 Hongning Zhu, Kong Aik Lee, Haizhou Li

Instead of utilizing multi-head attention in parallel, the proposed serialized multi-layer multi-head attention is designed to aggregate and propagate attentive statistics from one layer to the next in a serialized manner.

Text-Independent Speaker Verification
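
The serialized statistics idea in this snippet can be sketched in a few lines of NumPy: attention-weighted mean and standard deviation are computed at each layer and accumulated, instead of concatenating parallel heads. This is only a toy illustration; the feature dimensions, the single attention vector per layer, and the residual way the mean is fed back into the features are assumptions made for brevity, not the paper's actual architecture.

```python
import numpy as np

def attentive_stats(h, w_att):
    """Attention-weighted mean and std over the time axis.

    h: (T, D) frame-level features; w_att: (D,) attention projection.
    """
    scores = h @ w_att                       # (T,) unnormalized attention
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()                     # softmax over frames
    mu = (alpha[:, None] * h).sum(axis=0)    # weighted mean, (D,)
    var = (alpha[:, None] * (h - mu) ** 2).sum(axis=0)
    return np.concatenate([mu, np.sqrt(var + 1e-8)])   # (2D,) statistics

def serialized_aggregation(h, att_weights):
    """Accumulate per-layer attentive statistics, propagating them serially."""
    T, D = h.shape
    embedding = np.zeros(2 * D)
    for w in att_weights:                    # one attention module per layer
        stats = attentive_stats(h, w)
        embedding += stats                   # aggregate layer statistics
        h = h + stats[:D]                    # carry the mean forward to the next layer
    return embedding

rng = np.random.default_rng(42)
frames = rng.standard_normal((50, 8))        # 50 frames, 8-dim features
weights = [rng.standard_normal(8) for _ in range(3)]
emb = serialized_aggregation(frames, weights)
print(emb.shape)  # (16,)
```

In the real model a full network block would sit between successive attention layers; here that propagation step is reduced to a residual add to keep the sketch short.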

Selective Listening by Synchronizing Speech with Lips

1 code implementation 14 Jun 2021 Zexu Pan, Ruijie Tao, Chenglin Xu, Haizhou Li

A speaker extraction algorithm seeks to extract the speech of a target speaker from a multi-talker speech mixture when given a cue that represents the target speaker, such as a pre-enrolled speech utterance, or an accompanying video track.

Lip Reading Target Speaker Extraction

Emotional Voice Conversion: Theory, Databases and ESD

1 code implementation 31 May 2021 Kun Zhou, Berrak Sisman, Rui Liu, Haizhou Li

In this paper, we first provide a review of the state-of-the-art emotional voice conversion research, and the existing emotional speech databases.

Voice Conversion

The Multi-speaker Multi-style Voice Cloning Challenge 2021

no code implementations 5 Apr 2021 Qicong Xie, Xiaohai Tian, Guanghou Liu, Kun Song, Lei Xie, Zhiyong Wu, Hai Li, Song Shi, Haizhou Li, Fen Hong, Hui Bu, Xin Xu

The challenge consists of two tracks, namely few-shot track and one-shot track, where the participants are required to clone multiple target voices with 100 and 5 samples respectively.

Benchmarking Voice Cloning

Limited Data Emotional Voice Conversion Leveraging Text-to-Speech: Two-stage Sequence-to-Sequence Training

2 code implementations 31 Mar 2021 Kun Zhou, Berrak Sisman, Haizhou Li

In stage 2, we perform emotion training with a limited amount of emotional speech data, to learn how to disentangle emotional style and linguistic information from the speech.

Voice Conversion

Target Speaker Verification with Selective Auditory Attention for Single and Multi-talker Speech

1 code implementation 30 Mar 2021 Chenglin Xu, Wei Rao, Jibin Wu, Haizhou Li

Inspired by the study on target speaker extraction, e.g., SpEx, we propose a unified speaker verification framework for both single- and multi-talker speech, that is able to pay selective auditory attention to the target speaker.

Multi-Task Learning Speaker Verification +1

Leveraging Acoustic and Linguistic Embeddings from Pretrained speech and language Models for Intent Classification

no code implementations 15 Feb 2021 Bidisha Sharma, Maulik Madhavi, Haizhou Li

An intent classification system is usually implemented as a pipeline process, with a speech recognition module followed by text processing that classifies the intents.

Classification General Classification +7

VAW-GAN for Disentanglement and Recomposition of Emotional Elements in Speech

no code implementations 3 Nov 2020 Kun Zhou, Berrak Sisman, Haizhou Li

Emotional voice conversion (EVC) aims to convert the emotion of speech from one state to another while preserving the linguistic content and speaker identity.

Decoder Disentanglement +2

Seen and Unseen emotional style transfer for voice conversion with a new emotional speech dataset

2 code implementations 28 Oct 2020 Kun Zhou, Berrak Sisman, Rui Liu, Haizhou Li

Emotional voice conversion aims to transform emotional prosody in speech while preserving the linguistic content and speaker identity.

Decoder Generative Adversarial Network +3

Deep Convolutional Spiking Neural Networks for Keyword Spotting

no code implementations Interspeech 2020 Emre Yilmaz, Özgür Bora Gevrek, Jibin Wu, Yuxiang Chen, Xuanbo Meng, Haizhou Li

To explore the effectiveness and computational complexity of SNN on KWS and wakeword detection, we compare the performance and computational costs of spiking fully-connected and convolutional neural networks with ANN counterparts under clean and noisy testing conditions.

Keyword Spotting

GraphSpeech: Syntax-Aware Graph Attention Network For Neural Speech Synthesis

no code implementations 23 Oct 2020 Rui Liu, Berrak Sisman, Haizhou Li

Attention-based end-to-end text-to-speech synthesis (TTS) is superior to conventional statistical methods in many ways.

Graph Attention Graph Neural Network +3

Muse: Multi-modal target speaker extraction with visual cues

1 code implementation 15 Oct 2020 Zexu Pan, Ruijie Tao, Chenglin Xu, Haizhou Li

Speaker extraction algorithm relies on the speech sample from the target speaker as the reference point to focus its attention.

Target Speaker Extraction

Speaker-Utterance Dual Attention for Speaker and Utterance Verification

no code implementations 20 Aug 2020 Tianchi Liu, Rohan Kumar Das, Maulik Madhavi, ShengMei Shen, Haizhou Li

The proposed SUDA features an attention mask mechanism to learn the interaction between the speaker and utterance information streams.

Speaker Verification

Modeling Prosodic Phrasing with Multi-Task Learning in Tacotron-based TTS

no code implementations 11 Aug 2020 Rui Liu, Berrak Sisman, Feilong Bao, Guanglai Gao, Haizhou Li

We propose a multi-task learning scheme for Tacotron training, that optimizes the system to predict both Mel spectrum and phrase breaks.

Multi-Task Learning Speech Synthesis

Spectrum and Prosody Conversion for Cross-lingual Voice Conversion with CycleGAN

no code implementations 11 Aug 2020 Zongyang Du, Kun Zhou, Berrak Sisman, Haizhou Li

It relies on non-parallel training data from two different languages, hence, is more challenging than mono-lingual voice conversion.

Voice Conversion

VAW-GAN for Singing Voice Conversion with Non-parallel Training Data

no code implementations 10 Aug 2020 Junchen Lu, Kun Zhou, Berrak Sisman, Haizhou Li

We train an encoder to disentangle singer identity and singing prosody (F0 contour) from phonetic content.

Decoder Generative Adversarial Network +1

Multi-Tones' Phase Coding (MTPC) of Interaural Time Difference by Spiking Neural Network

no code implementations 7 Jul 2020 Zihan Pan, Malu Zhang, Jibin Wu, Haizhou Li

Inspired by the mammal's auditory localization pathway, in this paper we propose a pure spiking neural network (SNN) based computational model for precise sound localization in the noisy real-world environment, and implement this algorithm in a real-time robotic system with a microphone array.

Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks

no code implementations 2 Jul 2020 Jibin Wu, Cheng-Lin Xu, Daquan Zhou, Haizhou Li, Kay Chen Tan

In this paper, we propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition, which is referred to as progressive tandem learning of deep SNNs.

Computational Efficiency Image Reconstruction +2
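
The core premise behind ANN-to-SNN conversion — that the firing rate of an integrate-and-fire (IF) neuron approximates a ReLU activation — can be checked with a minimal sketch. The constant input drive, unit threshold, and reset-by-subtraction below are common textbook simplifications for illustration, not the paper's layer-wise learning framework.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def if_neuron_rate(drive, t_steps=1000, v_thresh=1.0):
    """Firing rate of an IF neuron under a constant input drive.

    Reset-by-subtraction keeps the residual charge, so the long-run
    rate approximates relu(drive) for drives in [0, 1].
    """
    v, spikes = 0.0, 0
    for _ in range(t_steps):
        v += drive
        if v >= v_thresh:
            spikes += 1
            v -= v_thresh            # subtractive reset preserves the remainder
    return spikes / t_steps

for drive in [-0.3, 0.0, 0.25, 0.7]:
    print(drive, relu(drive), round(if_neuron_rate(drive), 3))
```

With a drive of 0.25 the neuron fires exactly every fourth step, reproducing the ReLU output; negative drives never reach threshold, matching ReLU's zero region.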

Modeling Code-Switch Languages Using Bilingual Parallel Corpus

no code implementations ACL 2020 Grandee Lee, Haizhou Li

A bilingual language model is expected to model the sequential dependency for words across languages, which is difficult due to the inherent lack of suitable training data as well as diverse syntactic structure across languages.

Bilingual Lexicon Induction Language Modelling +1

Converting Anyone's Emotion: Towards Speaker-Independent Emotional Voice Conversion

1 code implementation 13 May 2020 Kun Zhou, Berrak Sisman, Mingyang Zhang, Haizhou Li

We consider that there is a common code between speakers for emotional expression in a spoken language, therefore, a speaker-independent mapping between emotional states is possible.

Decoder Voice Conversion

SpEx+: A Complete Time Domain Speaker Extraction Network

no code implementations 10 May 2020 Meng Ge, Cheng-Lin Xu, Longbiao Wang, Eng Siong Chng, Jianwu Dang, Haizhou Li

To eliminate such mismatch, we propose a complete time-domain speaker extraction solution, that is called SpEx+.

Speech Extraction Audio and Speech Processing Sound

Time-domain speaker extraction network

no code implementations 29 Apr 2020 Cheng-Lin Xu, Wei Rao, Eng Siong Chng, Haizhou Li

The inaccuracy of phase estimation is inherent to the frequency domain processing, that affects the quality of signal reconstruction.

Audio and Speech Processing Sound

SpEx: Multi-Scale Time Domain Speaker Extraction Network

1 code implementation 17 Apr 2020 Cheng-Lin Xu, Wei Rao, Eng Siong Chng, Haizhou Li

Inspired by Conv-TasNet, we propose a time-domain speaker extraction network (SpEx) that converts the mixture speech into multi-scale embedding coefficients instead of decomposing the speech signal into magnitude and phase spectra.

Decoder Multi-Task Learning

Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks

no code implementations 26 Mar 2020 Malu Zhang, Jiadong Wang, Burin Amornpaisannon, Zhixuan Zhang, VPK Miriyala, Ammar Belatreche, Hong Qu, Jibin Wu, Yansong Chua, Trevor E. Carlson, Haizhou Li

In the STDBP algorithm, the timing of individual spikes is used to convey information (temporal coding), and learning (back-propagation) is performed based on spike timing in an event-driven manner.

Decision Making
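
Temporal coding as described in this snippet can be illustrated with a time-to-first-spike sketch: a stronger constant input charges an integrate-and-fire neuron to threshold sooner, so the spike latency itself carries the value. The threshold and observation window below are arbitrary toy values, not parameters from the paper.

```python
import numpy as np

def time_to_first_spike(intensity, v_thresh=1.0, t_max=100):
    """Latency code: the step at which a constant input first crosses threshold."""
    if intensity <= 0:
        return t_max                 # no spike within the observation window
    return min(int(np.ceil(v_thresh / intensity)), t_max)

inputs = [0.05, 0.2, 0.5, 1.0]
latencies = [time_to_first_spike(x) for x in inputs]
print(latencies)  # [20, 5, 2, 1] -- earlier spikes encode larger values
```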

WaveTTS: Tacotron-based TTS with Joint Time-Frequency Domain Loss

no code implementations 2 Feb 2020 Rui Liu, Berrak Sisman, Feilong Bao, Guanglai Gao, Haizhou Li

To address this problem, we propose a new training scheme for Tacotron-based TTS, referred to as WaveTTS, that has 2 loss functions: 1) time-domain loss, denoted as the waveform loss, that measures the distortion between the natural and generated waveform; and 2) frequency-domain loss, that measures the Mel-scale acoustic feature loss between the natural and generated acoustic features.
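
The two-term objective described above has the following general shape. This sketch uses an L1 waveform loss plus an L1 loss on linear-magnitude spectrograms; WaveTTS itself operates on Mel-scale acoustic features inside Tacotron training, so the FFT size, hop, and weighting here are placeholder assumptions.

```python
import numpy as np

def stft_mag(x, n_fft=64, hop=16):
    """Magnitude spectrogram from a Hann-windowed framed FFT."""
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win for i in range(0, len(x) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=1))

def joint_tts_loss(y_hat, y, alpha=0.5):
    """Weighted sum of a time-domain and a frequency-domain loss."""
    waveform_loss = np.mean(np.abs(y_hat - y))                       # time domain
    spectral_loss = np.mean(np.abs(stft_mag(y_hat) - stft_mag(y)))   # frequency domain
    return alpha * waveform_loss + (1 - alpha) * spectral_loss

t = np.linspace(0, 1, 1600)
target = np.sin(2 * np.pi * 220 * t)                 # a clean 220 Hz tone
generated = target + 0.01 * np.random.default_rng(0).standard_normal(t.size)
print(joint_tts_loss(target, target))                # 0.0 for identical signals
print(joint_tts_loss(generated, target) > 0.0)       # True: noise is penalized
```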

Transforming Spectrum and Prosody for Emotional Voice Conversion with Non-Parallel Training Data

1 code implementation 1 Feb 2020 Kun Zhou, Berrak Sisman, Haizhou Li

Many studies require parallel speech data between different emotional patterns, which is not practical in real life.

Voice Conversion

Deep Spiking Neural Networks for Large Vocabulary Automatic Speech Recognition

1 code implementation 19 Nov 2019 Jibin Wu, Emre Yilmaz, Malu Zhang, Haizhou Li, Kay Chen Tan

The brain-inspired spiking neural networks (SNN) closely mimic the biological neural networks and can operate on low-power neuromorphic hardware with spike-based computation.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +1

Teacher-Student Training for Robust Tacotron-based TTS

no code implementations 7 Nov 2019 Rui Liu, Berrak Sisman, Jingdong Li, Feilong Bao, Guanglai Gao, Haizhou Li

We first train a Tacotron2-based TTS model by always providing natural speech frames to the decoder, that serves as a teacher model.

Decoder Knowledge Distillation

End-to-End Code-Switching ASR for Low-Resourced Language Pairs

no code implementations 27 Sep 2019 Xianghu Yue, Grandee Lee, Emre Yilmaz, Fang Deng, Haizhou Li

In this work, we describe an E2E ASR pipeline for the recognition of CS speech in which a low-resourced language is mixed with a high resourced language.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +2

Automatic Lyrics Alignment and Transcription in Polyphonic Music: Does Background Music Help?

no code implementations 23 Sep 2019 Chitralekha Gupta, Emre Yilmaz, Haizhou Li

Automatic lyrics alignment and transcription in polyphonic music are challenging tasks because the singing vocals are corrupted by the background music.

Audio and Speech Processing Sound

Neural Population Coding for Effective Temporal Classification

no code implementations 12 Sep 2019 Zihan Pan, Jibin Wu, Yansong Chua, Malu Zhang, Haizhou Li

We show that, with population neural codings, the encoded patterns are linearly separable using the Support Vector Machine (SVM).

Classification General Classification
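
The linear-separability claim can be reproduced on toy data: encode scalar stimuli with Gaussian tuning curves (a population code) and fit a linear classifier on the resulting patterns. A plain perceptron stands in for the SVM used in the paper, and the stimulus distributions, tuning-curve centers, and width are invented toy values.

```python
import numpy as np

def population_encode(x, centers, width=0.5):
    """Encode a scalar with Gaussian tuning curves (one activation per neuron)."""
    return np.exp(-((x - centers) ** 2) / (2 * width ** 2))

rng = np.random.default_rng(0)
centers = np.linspace(-2, 2, 9)              # 9 tuning-curve centers

# Two well-separated stimulus classes around -1 and +1.
class_a = rng.normal(-1.0, 0.2, 40)
class_b = rng.normal(1.0, 0.2, 40)
X = np.stack([population_encode(v, centers)
              for v in np.concatenate([class_a, class_b])])
y = np.array([0] * 40 + [1] * 40)

# Train a perceptron: converges on linearly separable encoded patterns.
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(50):
    for xi, yi in zip(X, y):
        pred = 1 if xi @ w + b > 0 else 0
        w += (yi - pred) * xi
        b += (yi - pred)

accuracy = np.mean((X @ w + b > 0).astype(int) == y)
print(accuracy)   # 1.0 for this well-separated toy data
```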

An efficient and perceptually motivated auditory neural encoding and decoding algorithm for spiking neural networks

no code implementations 3 Sep 2019 Zihan Pan, Yansong Chua, Jibin Wu, Malu Zhang, Haizhou Li, Eliathamby Ambikairajah

The neural encoding scheme, that we call Biologically plausible Auditory Encoding (BAE), emulates the functions of the perceptual components of the human auditory system, that include the cochlear filter bank, the inner hair cells, auditory masking effects from psychoacoustic models, and the spike neural encoding by the auditory nerve.

Benchmarking speech-recognition +1

A Tandem Learning Rule for Effective Training and Rapid Inference of Deep Spiking Neural Networks

1 code implementation 2 Jul 2019 Jibin Wu, Yansong Chua, Malu Zhang, Guoqi Li, Haizhou Li, Kay Chen Tan

Spiking neural networks (SNNs) represent the most prominent biologically inspired computing model for neuromorphic computing (NC) architectures.

Event-based vision

Acoustic Modeling for Automatic Lyrics-to-Audio Alignment

no code implementations 25 Jun 2019 Chitralekha Gupta, Emre Yilmaz, Haizhou Li

In this work, we propose (1) using additional speech and music-informed features and (2) adapting the acoustic models trained on a large amount of solo singing vocals towards polyphonic music using a small amount of in-domain data.

Large-Scale Speaker Diarization of Radio Broadcast Archives

no code implementations 19 Jun 2019 Emre Yilmaz, Adem Derinel, Zhou Kun, Henk van den Heuvel, Niko Brummer, Haizhou Li, David A. van Leeuwen

This paper describes our initial efforts to build a large-scale speaker diarization (SD) and identification system on a recently digitized radio broadcast archive from the Netherlands which has more than 6500 audio tapes with 3000 hours of Frisian-Dutch speech recorded between 1950 and 2016.

speaker-diarization Speaker Diarization +1

Multi-Graph Decoding for Code-Switching ASR

no code implementations 18 Jun 2019 Emre Yilmaz, Samuel Cohen, Xianghu Yue, David van Leeuwen, Haizhou Li

This archive contains recordings with monolingual Frisian and Dutch speech segments as well as Frisian-Dutch CS speech, hence the recognition performance on monolingual segments is also vital for accurate transcriptions.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +2

VQVAE Unsupervised Unit Discovery and Multi-scale Code2Spec Inverter for Zerospeech Challenge 2019

no code implementations 27 May 2019 Andros Tjandra, Berrak Sisman, Mingyang Zhang, Sakriani Sakti, Haizhou Li, Satoshi Nakamura

Our proposed approach significantly improved the intelligibility (in CER), the MOS, and discrimination ABX scores compared to the official ZeroSpeech 2019 baseline or even the topline.

Clustering

Joint training framework for text-to-speech and voice conversion using multi-source Tacotron and WaveNet

no code implementations 29 Mar 2019 Mingyang Zhang, Xin Wang, Fuming Fang, Haizhou Li, Junichi Yamagishi

We propose using an extended model architecture of Tacotron, that is a multi-source sequence-to-sequence model with a dual attention mechanism as the shared model for both the TTS and VC tasks.

Decoder Speech Synthesis +1