Search Results for author: Zhiyong Wu

Found 92 papers, 35 papers with code

TrimTail: Low-Latency Streaming ASR with Simple but Effective Spectrogram-Level Length Penalty

1 code implementation1 Nov 2022 Xingchen Song, Di wu, Zhiyong Wu, BinBin Zhang, Yuekai Zhang, Zhendong Peng, Wenpeng Li, Fuping Pan, Changbao Zhu

In this paper, we present TrimTail, a simple but effective emission regularization method to improve the latency of streaming ASR models.
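As a rough illustration of what a spectrogram-level length penalty can look like (a minimal sketch only; the tensor layout, the `max_trim` hyperparameter, and the uniform trimming schedule are assumptions, not TrimTail's exact recipe), the snippet below drops a random number of trailing frames from the input spectrogram during training, which discourages the model from waiting on future context before emitting tokens.

```python
import numpy as np

def trim_tail(spectrogram: np.ndarray, max_trim: int = 20, rng=None) -> np.ndarray:
    """Randomly drop up to `max_trim` trailing frames from a (time, freq) spectrogram."""
    rng = rng or np.random.default_rng()
    n_frames = spectrogram.shape[0]
    trim = int(rng.integers(0, max_trim + 1))
    keep = max(1, n_frames - trim)  # keep at least one frame
    return spectrogram[:keep]

# Example: a dummy 100-frame, 80-bin spectrogram.
spec = np.random.randn(100, 80)
print(trim_tail(spec).shape)
```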

OS-Copilot: Towards Generalist Computer Agents with Self-Improvement

1 code implementation12 Feb 2024 Zhiyong Wu, Chengcheng Han, Zichen Ding, Zhenmin Weng, Zhoumianze Liu, Shunyu Yao, Tao Yu, Lingpeng Kong

Autonomous interaction with the computer has been a longstanding challenge with great potential, and the recent proliferation of large language models (LLMs) has markedly accelerated progress in building digital agents.

A Survey on In-context Learning

1 code implementation31 Dec 2022 Qingxiu Dong, Damai Dai, Ce Zheng, Zhiyong Wu, Baobao Chang, Xu sun, Jingjing Xu, Lei LI, Zhifang Sui

With the increasing ability of large language models (LLMs), in-context learning (ICL) has become a new paradigm for natural language processing (NLP), where LLMs make predictions based only on contexts augmented with a few examples.

In-Context Learning
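To make the paradigm concrete, the sketch below shows ICL in its simplest form: a prompt is assembled from a handful of labeled demonstrations plus the test input, and a frozen LLM's continuation is taken as the prediction, with no parameter updates. The task, the template, and the stand-in for the model call are illustrative assumptions.

```python
def build_icl_prompt(demonstrations, test_input, instruction="Classify the sentiment."):
    """Assemble a few-shot prompt: instruction, k demonstrations, then the query."""
    lines = [instruction]
    for text, label in demonstrations:
        lines.append(f"Input: {text}\nOutput: {label}")
    lines.append(f"Input: {test_input}\nOutput:")
    return "\n\n".join(lines)

demos = [
    ("The film was a delight from start to finish.", "positive"),
    ("A tedious, overlong mess.", "negative"),
]
prompt = build_icl_prompt(demos, "Surprisingly moving and well acted.")
# Any LLM completion API can stand in here; the model's continuation
# ("positive"/"negative") is read off as the prediction -- no weights change.
print(prompt)
```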

Walking with Perception: Efficient Random Walk Sampling via Common Neighbor Awareness

1 code implementation 2020 Yongkun Li, Zhiyong Wu, Shuai Lin, Hong Xie, Min Lv, Yinlong Xu, John C. S. Lui

Random walk is widely applied to sample large-scale graphs due to its simplicity of implementation and solid theoretical foundations of bias analysis.

Computational Efficiency

DiffuSeq: Sequence to Sequence Text Generation with Diffusion Models

1 code implementation17 Oct 2022 Shansan Gong, Mukai Li, Jiangtao Feng, Zhiyong Wu, Lingpeng Kong

Bringing together theoretical analysis and empirical evidence, we demonstrate the great potential of diffusion models in complex conditional language generation tasks.

Text Generation

OpenICL: An Open-Source Framework for In-context Learning

3 code implementations6 Mar 2023 Zhenyu Wu, Yaoxiang Wang, Jiacheng Ye, Jiangtao Feng, Jingjing Xu, Yu Qiao, Zhiyong Wu

However, the implementation of ICL is sophisticated due to the diverse retrieval and inference methods involved, as well as the varying pre-processing requirements for different models, datasets, and tasks.

In-Context Learning Language Modelling +4

Enhancing Speaking Styles in Conversational Text-to-Speech Synthesis with Graph-based Multi-modal Context Modeling

2 code implementations11 Jun 2021 Jingbei Li, Yi Meng, Chenyi Li, Zhiyong Wu, Helen Meng, Chao Weng, Dan Su

However, state-of-the-art context modeling methods in conversational TTS only model the textual information in context with a recurrent neural network (RNN).

Speech Synthesis Text-To-Speech Synthesis

FullSubNet+: Channel Attention FullSubNet with Complex Spectrograms for Speech Enhancement

2 code implementations23 Mar 2022 Jun Chen, Zilin Wang, Deyi Tuo, Zhiyong Wu, Shiyin Kang, Helen Meng

The previously proposed FullSubNet has achieved outstanding performance in the Deep Noise Suppression (DNS) Challenge and attracted much attention.

Speech Enhancement

The DiffuseStyleGesture+ entry to the GENEA Challenge 2023

1 code implementation26 Aug 2023 Sicheng Yang, Haiwei Xue, Zhensong Zhang, Minglei Li, Zhiyong Wu, Xiaofei Wu, Songcen Xu, Zonghong Dai

In this paper, we introduce the DiffuseStyleGesture+, our solution for the Generation and Evaluation of Non-verbal Behavior for Embodied Agents (GENEA) Challenge 2023, which aims to foster the development of realistic, automated systems for generating conversational gestures.

A Survey of Neural Code Intelligence: Paradigms, Advances and Beyond

1 code implementation21 Mar 2024 Qiushi Sun, Zhirui Chen, Fangzhi Xu, Kanzhi Cheng, Chang Ma, Zhangyue Yin, Jianing Wang, Chengcheng Han, Renyu Zhu, Shuai Yuan, Qipeng Guo, Xipeng Qiu, Pengcheng Yin, XiaoLi Li, Fei Yuan, Lingpeng Kong, Xiang Li, Zhiyong Wu

Building on our examination of the developmental trajectories, we further investigate the emerging synergies between code intelligence and broader machine intelligence, uncovering new cross-domain opportunities and illustrating the substantial influence of code intelligence across various domains.

Perturbed Masking: Parameter-free Probing for Analyzing and Interpreting BERT

1 code implementation ACL 2020 Zhiyong Wu, Yun Chen, Ben Kao, Qun Liu

However, this approach of evaluating a language model is undermined by the uncertainty of the amount of knowledge that is learned by the probe itself.

Dependency Parsing Language Modelling +2

EMO: Earth Mover Distance Optimization for Auto-Regressive Language Modeling

1 code implementation7 Oct 2023 Siyu Ren, Zhiyong Wu, Kenny Q. Zhu

In this paper, we propose Earth Mover Distance Optimization (EMO) for auto-regressive language modeling.

Language Modelling
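EMO's precise objective and its tractable surrogate are given in the paper; as background, when the target distribution is the one-hot vector on the gold token y_t, the earth mover's distance between the model distribution and that target reduces to a probability-weighted transport cost, with the cost c typically derived from distances between token embeddings:

\[
\mathrm{EMD}\big(p_\theta(\cdot \mid x_{<t}),\, \delta_{y_t}\big) \;=\; \sum_{k \in \mathcal{V}} p_\theta(k \mid x_{<t})\, c(k, y_t), \qquad c(y_t, y_t) = 0 .
\]

Unlike the maximum-likelihood loss \(-\log p_\theta(y_t \mid x_{<t})\), this treats near-synonyms of the gold token as less wrong than semantically distant ones.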

SeeClick: Harnessing GUI Grounding for Advanced Visual GUI Agents

1 code implementation17 Jan 2024 Kanzhi Cheng, Qiushi Sun, Yougang Chu, Fangzhi Xu, Yantao Li, Jianbing Zhang, Zhiyong Wu

In our preliminary study, we have discovered a key challenge in developing visual GUI agents: GUI grounding -- the capacity to accurately locate screen elements based on instructions.

Compositional Exemplars for In-context Learning

1 code implementation11 Feb 2023 Jiacheng Ye, Zhiyong Wu, Jiangtao Feng, Tao Yu, Lingpeng Kong

The performance of ICL is largely determined by the quality of the selected in-context examples.

Code Generation Contrastive Learning +6
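CEIL itself learns the exemplar selection (the tags above point to contrastive learning and code generation among the evaluated tasks); as a point of reference, the sketch below shows the similarity-based retrieval baseline such learned selectors typically improve on: embed a pool of candidates and pick the top-k nearest neighbors of the test input. The embedding dimension and the random vectors are placeholders.

```python
import numpy as np

def topk_exemplars(query_vec, pool_vecs, k=4):
    """Return indices of the k pool items most cosine-similar to the query."""
    q = query_vec / np.linalg.norm(query_vec)
    p = pool_vecs / np.linalg.norm(pool_vecs, axis=1, keepdims=True)
    sims = p @ q
    return np.argsort(-sims)[:k]

# Toy usage with random stand-ins for sentence embeddings.
pool = np.random.randn(1000, 384)   # candidate training examples
query = np.random.randn(384)        # the test input
print(topk_exemplars(query, pool, k=4))
```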

QPGesture: Quantization-Based and Phase-Guided Motion Matching for Natural Speech-Driven Gesture Generation

1 code implementation CVPR 2023 Sicheng Yang, Zhiyong Wu, Minglei Li, Zhensong Zhang, Lei Hao, Weihong Bao, Haolin Zhuang

The Levenshtein distance over quantized audio, used as a similarity metric between the speech associated with candidate gestures and the input speech, helps match more appropriate gestures to the speech and handles the speech-gesture alignment problem well.

Gesture Generation Quantization
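For the QPGesture entry above, here is a minimal sketch (not the paper's implementation) of the quantization-plus-Levenshtein idea: once speech has been quantized into discrete code indices, the edit distance between two code sequences serves as a similarity metric, and a lower distance suggests a candidate gesture whose associated speech better matches the current speech. The toy code sequences are assumptions.

```python
def levenshtein(a, b):
    """Edit distance between two sequences of quantized audio codes."""
    prev = list(range(len(b) + 1))          # distances for the previous row
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

# Toy codebook-index sequences standing in for quantized speech.
print(levenshtein([3, 7, 7, 1, 4], [3, 7, 1, 1, 4]))  # 1
```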

An End-to-end Chinese Text Normalization Model based on Rule-guided Flat-Lattice Transformer

1 code implementation31 Mar 2022 Wenlin Dai, Changhe Song, Xiang Li, Zhiyong Wu, Huashan Pan, Xiulin Li, Helen Meng

Inspired by the Flat-LAttice Transformer (FLAT), we propose an end-to-end Chinese text normalization model that accepts Chinese characters as direct input and integrates expert knowledge contained in rules into the neural network; both contribute to the superior performance of the proposed model on the text normalization task.

A Character-level Span-based Model for Mandarin Prosodic Structure Prediction

1 code implementation31 Mar 2022 Xueyuan Chen, Changhe Song, Yixuan Zhou, Zhiyong Wu, Changbin Chen, Zhongqin Wu, Helen Meng

In this paper, we propose a span-based Mandarin prosodic structure prediction model to obtain an optimal prosodic structure tree, which can be converted to the corresponding prosodic label sequence.

Sentence

Self-Adaptive In-Context Learning: An Information Compression Perspective for In-Context Example Selection and Ordering

1 code implementation20 Dec 2022 Zhiyong Wu, Yaoxiang Wang, Jiacheng Ye, Lingpeng Kong

Despite the surprising few-shot performance of in-context learning (ICL), it is still a common practice to randomly sample examples to serve as context.

In-Context Learning

Can We Edit Factual Knowledge by In-Context Learning?

2 code implementations22 May 2023 Ce Zheng, Lei LI, Qingxiu Dong, Yuxuan Fan, Zhiyong Wu, Jingjing Xu, Baobao Chang

Inspired by in-context learning (ICL), a new paradigm based on demonstration contexts without parameter updating, we explore whether ICL can edit factual knowledge.

In-Context Learning knowledge editing

COLO: A Contrastive Learning based Re-ranking Framework for One-Stage Summarization

1 code implementation COLING 2022 Chenxin An, Ming Zhong, Zhiyong Wu, Qin Zhu, Xuanjing Huang, Xipeng Qiu

Traditional training paradigms for extractive and abstractive summarization systems use only token-level or sentence-level training objectives.

Abstractive Text Summarization Contrastive Learning +2

ZeroGen: Efficient Zero-shot Learning via Dataset Generation

3 code implementations16 Feb 2022 Jiacheng Ye, Jiahui Gao, Qintong Li, Hang Xu, Jiangtao Feng, Zhiyong Wu, Tao Yu, Lingpeng Kong

There is a growing interest in dataset generation recently due to the superior generative capacity of large pre-trained language models (PLMs).

Knowledge Distillation Natural Language Inference +5
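As a hedged sketch of prompt-based dataset synthesis (the label names, prompt wording, and `generate` call are placeholders rather than ZeroGen's exact recipe): condition a PLM on a label-specific prompt, sample continuations as synthetic inputs, pair each with the prompted label, and train a small task model on the resulting set instead of serving the PLM directly.

```python
# Minimal sketch of zero-shot dataset synthesis for a two-class sentiment task.
LABEL_PROMPTS = {
    "positive": "Write a positive movie review:",
    "negative": "Write a negative movie review:",
}

def synthesize_dataset(generate, n_per_label=1000):
    """Return (text, label) pairs produced by a PLM via label-conditioned prompts."""
    data = []
    for label, prompt in LABEL_PROMPTS.items():
        for _ in range(n_per_label):
            text = generate(prompt)          # sample one continuation from the PLM
            data.append((text.strip(), label))
    return data

# Toy usage with a stand-in generator; in practice `generate` samples from a PLM,
# and the pairs train a much smaller classifier (the efficiency gain in the title).
dummy_generate = lambda prompt: "A placeholder review sampled from the PLM."
print(len(synthesize_dataset(dummy_generate, n_per_label=3)))  # 6
```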

Self-Guided Noise-Free Data Generation for Efficient Zero-Shot Learning

2 code implementations25 May 2022 Jiahui Gao, Renjie Pi, Yong Lin, Hang Xu, Jiacheng Ye, Zhiyong Wu, Weizhong Zhang, Xiaodan Liang, Zhenguo Li, Lingpeng Kong

In this paradigm, the synthesized data from the PLM acts as the carrier of knowledge, which is used to train a task-specific model with orders of magnitude fewer parameters than the PLM, achieving both higher performance and efficiency than prompt-based zero-shot learning methods on PLMs.

text-classification Text Classification +1

ProGen: Progressive Zero-shot Dataset Generation via In-context Feedback

2 code implementations22 Oct 2022 Jiacheng Ye, Jiahui Gao, Jiangtao Feng, Zhiyong Wu, Tao Yu, Lingpeng Kong

To improve the quality of dataset synthesis, we propose a progressive zero-shot dataset generation framework, ProGen, which leverages the feedback from the task-specific model to guide the generation of new training data via in-context examples.

Informativeness text-classification +2

In-Context Learning with Many Demonstration Examples

1 code implementation9 Feb 2023 Mukai Li, Shansan Gong, Jiangtao Feng, Yiheng Xu, Jun Zhang, Zhiyong Wu, Lingpeng Kong

Based on EVALM, we scale up the size of examples efficiently in both instruction tuning and in-context learning to explore the boundary of the benefits from more annotated data.

In-Context Learning Language Modelling

The ReprGesture entry to the GENEA Challenge 2022

1 code implementation25 Aug 2022 Sicheng Yang, Zhiyong Wu, Minglei Li, Mengchen Zhao, Jiuxin Lin, Liyang Chen, Weihong Bao

This paper describes the ReprGesture entry to the Generation and Evaluation of Non-verbal Behaviour for Embodied Agents (GENEA) challenge 2022.

Gesture Generation Representation Learning

Corex: Pushing the Boundaries of Complex Reasoning through Multi-Model Collaboration

1 code implementation30 Sep 2023 Qiushi Sun, Zhangyue Yin, Xiang Li, Zhiyong Wu, Xipeng Qiu, Lingpeng Kong

Large Language Models (LLMs) are evolving at an unprecedented pace and have exhibited considerable capability in the realm of natural language processing (NLP) with world knowledge.

World Knowledge

Cascaded Head-colliding Attention

1 code implementation ACL 2021 Lin Zheng, Zhiyong Wu, Lingpeng Kong

Transformers have advanced the field of natural language processing (NLP) on a variety of important tasks.

Language Modelling Machine Translation +1

CB-Conformer: Contextual biasing Conformer for biased word recognition

1 code implementation19 Apr 2023 Yaoxun Xu, Baiji Liu, Qiaochu Huang, Xingchen Song, Zhiyong Wu, Shiyin Kang, Helen Meng

In this work, we propose CB-Conformer to improve biased word recognition by introducing the Contextual Biasing Module and the Self-Adaptive Language Model to vanilla Conformer.

Automatic Speech Recognition Language Modelling +2

Voting for the right answer: Adversarial defense for speaker verification

1 code implementation15 Jun 2021 Haibin Wu, Yang Zhang, Zhiyong Wu, Dong Wang, Hung-Yi Lee

Automatic speaker verification (ASV) is a well-developed technology for biometric identification and has been ubiquitously implemented in security-critical applications, such as banking and access control.

Adversarial Defense Speaker Verification

Lexical Knowledge Internalization for Neural Dialog Generation

1 code implementation ACL 2022 Zhiyong Wu, Wei Bi, Xiang Li, Lingpeng Kong, Ben Kao

We propose knowledge internalization (KI), which aims to complement neural dialog models with lexical knowledge.

Contrastive Learning

Explanation Regeneration via Information Bottleneck

1 code implementation19 Dec 2022 Qintong Li, Zhiyong Wu, Lingpeng Kong, Wei Bi

Explaining the black-box predictions of NLP models naturally and accurately is an important open problem in natural language generation.

Explanation Generation Language Modelling +2

Feature Learning with Gaussian Restricted Boltzmann Machine for Robust Speech Recognition

no code implementations23 Sep 2013 Xin Zheng, Zhiyong Wu, Helen Meng, Weifeng Li, Lianhong Cai

In this paper, we first present a new variant of Gaussian restricted Boltzmann machine (GRBM) called multivariate Gaussian restricted Boltzmann machine (MGRBM), with its definition and learning algorithm.

Robust Speech Recognition speech-recognition
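For context, the standard GRBM that MGRBM extends models real-valued visible units \(\mathbf{v}\) and binary hidden units \(\mathbf{h}\) with the energy below (the multivariate generalization introduced in the paper is defined there; this is only the commonly used baseline form):

\[
E(\mathbf{v}, \mathbf{h}) \;=\; \sum_i \frac{(v_i - b_i)^2}{2\sigma_i^2} \;-\; \sum_j c_j h_j \;-\; \sum_{i,j} \frac{v_i}{\sigma_i} W_{ij} h_j ,
\qquad p(\mathbf{v}, \mathbf{h}) \propto e^{-E(\mathbf{v}, \mathbf{h})} .
\]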

NEXT: A Neural Network Framework for Next POI Recommendation

no code implementations15 Apr 2017 Zhiqian Zhang, Chenliang Li, Zhiyong Wu, Aixin Sun, Dengpan Ye, Xiangyang Luo

Inspired by the recent success of neural networks in many areas, in this paper, we present a simple but effective neural network framework for next POI recommendation, named NEXT.

Representation Learning

Noise Robust TTS for Low Resource Speakers using Pre-trained Model and Speech Enhancement

no code implementations26 May 2020 Dongyang Dai, Li Chen, Yu-Ping Wang, Mu Wang, Rui Xia, Xuchen Song, Zhiyong Wu, Yuxuan Wang

Firstly, the speech synthesis model is pre-trained with both multi-speaker clean data and noisy augmented data; then the pre-trained model is adapted on noisy low-resource new speaker data; finally, by setting the clean speech condition, the model can synthesize the new speaker's clean voice.

Speech Enhancement Speech Synthesis

Speaker Independent and Multilingual/Mixlingual Speech-Driven Talking Head Generation Using Phonetic Posteriorgrams

no code implementations20 Jun 2020 Huirong Huang, Zhiyong Wu, Shiyin Kang, Dongyang Dai, Jia Jia, Tianxiao Fu, Deyi Tuo, Guangzhi Lei, Peng Liu, Dan Su, Dong Yu, Helen Meng

Recent approaches mainly have the following limitations: 1) most speaker-independent methods need handcrafted features that are time-consuming to design or unreliable; 2) there is no convincing method to support multilingual or mixlingual speech as input.

Talking Head Generation

Good for Misconceived Reasons: Revisiting Neural Multimodal Machine Translation

no code implementations1 Jan 2021 Zhiyong Wu, Lingpeng Kong, Ben Kao

A neural multimodal machine translation (MMT) system is one that aims to perform better translation by extending conventional text-only translation models with multimodal information.

Multimodal Machine Translation Translation

Improving pronunciation assessment via ordinal regression with anchored reference samples

no code implementations26 Oct 2020 Bin Su, Shaoguang Mao, Frank Soong, Yan Xia, Jonathan Tien, Zhiyong Wu

Traditional speech pronunciation assessment, based on the Goodness of Pronunciation (GOP) algorithm, has some weaknesses in assessing a speech utterance: 1) phoneme-level GOP scores cannot be easily translated into a sentence score with a simple average for effective assessment; 2) rank-ordering information has not been well exploited in GOP scoring to deliver a robust assessment that correlates well with a human rater's evaluations.

regression Sentence
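For reference, one common form of the GOP score that this work builds on is a frame-normalized log-likelihood ratio for each phoneme \(p\), computed over the observation frames \(O^{(p)}\) aligned to \(p\):

\[
\mathrm{GOP}(p) \;=\; \frac{1}{N^{(p)}} \log \frac{P\big(O^{(p)} \mid p\big)}{\max_{q \in Q} P\big(O^{(p)} \mid q\big)} ,
\]

where \(N^{(p)}\) is the number of aligned frames and \(Q\) is the phoneme set; the weaknesses listed above concern averaging such phoneme scores into a sentence score and ignoring rank-ordering information.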

Non-Autoregressive Transformer ASR with CTC-Enhanced Decoder Input

no code implementations28 Oct 2020 Xingchen Song, Zhiyong Wu, Yiheng Huang, Chao Weng, Dan Su, Helen Meng

Non-autoregressive (NAR) transformer models have achieved significant inference speedup but at the cost of inferior accuracy compared to autoregressive (AR) models in automatic speech recognition (ASR).

Automatic Speech Recognition Automatic Speech Recognition (ASR) +1

Syntactic representation learning for neural network based TTS with syntactic parse tree traversal

no code implementations13 Dec 2020 Changhe Song, Jingbei Li, Yixuan Zhou, Zhiyong Wu, Helen Meng

Meanwhile, a nuclear-norm maximization loss is introduced to enhance the discriminability and diversity of the embeddings of constituent labels.

Representation Learning Sentence
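One hedged reading of the nuclear-norm maximization loss (the paper's exact formulation may differ): stack the embeddings of the constituent labels in a batch into a matrix \(A\) and maximize its nuclear norm, i.e. minimize

\[
\mathcal{L}_{\mathrm{nn}} \;=\; -\,\|A\|_* \;=\; -\sum_i \sigma_i(A),
\]

where \(\sigma_i(A)\) are the singular values of \(A\); a larger nuclear norm pushes the embedding matrix toward higher rank, which is what encourages discriminability and diversity.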

Unsupervised Cross-Lingual Speech Emotion Recognition Using Domain Adversarial Neural Network

no code implementations21 Dec 2020 Xiong Cai, Zhiyong Wu, Kuo Zhong, Bin Su, Dongyang Dai, Helen Meng

By using deep learning approaches, Speech Emotion Recognition (SER) on a single domain has achieved many excellent results.

Speech Emotion Recognition

The Multi-speaker Multi-style Voice Cloning Challenge 2021

no code implementations5 Apr 2021 Qicong Xie, Xiaohai Tian, Guanghou Liu, Kun Song, Lei Xie, Zhiyong Wu, Hai Li, Song Shi, Haizhou Li, Fen Hong, Hui Bu, Xin Xu

The challenge consists of two tracks, namely a few-shot track and a one-shot track, where the participants are required to clone multiple target voices with 100 and 5 samples respectively.

Benchmarking Voice Cloning

Towards Multi-Scale Style Control for Expressive Speech Synthesis

no code implementations8 Apr 2021 Xiang Li, Changhe Song, Jingbei Li, Zhiyong Wu, Jia Jia, Helen Meng

This paper introduces a multi-scale speech style modeling method for end-to-end expressive speech synthesis.

Expressive Speech Synthesis Style Transfer

Adversarially learning disentangled speech representations for robust multi-factor voice conversion

no code implementations30 Jan 2021 Jie Wang, Jingbei Li, Xintao Zhao, Zhiyong Wu, Shiyin Kang, Helen Meng

To increase the robustness of highly controllable style transfer on multiple factors in VC, we propose a disentangled speech representation learning framework based on adversarial learning.

Representation Learning Style Transfer +1

Good for Misconceived Reasons: An Empirical Revisiting on the Need for Visual Context in Multimodal Machine Translation

no code implementations ACL 2021 Zhiyong Wu, Lingpeng Kong, Wei Bi, Xiang Li, Ben Kao

A neural multimodal machine translation (MMT) system is one that aims to perform better translation by extending conventional text-only translation models with multimodal information.

Multimodal Machine Translation Translation

FERNet: Fine-grained Extraction and Reasoning Network for Emotion Recognition in Dialogues

no code implementations Asian Chapter of the Association for Computational Linguistics 2020 YingMei Guo, Zhiyong Wu, Mingxing Xu

Unlike non-conversational scenarios, emotion recognition in dialogues (ERD) poses more complicated challenges due to its interactive nature and intricate contextual information.

Emotion Recognition

Learning from Multiple Noisy Augmented Data Sets for Better Cross-Lingual Spoken Language Understanding

no code implementations EMNLP 2021 YingMei Guo, Linjun Shou, Jian Pei, Ming Gong, Mingxing Xu, Zhiyong Wu, Daxin Jiang

Although various data augmentation approaches have been proposed to synthesize training data in low-resource target languages, the augmented data sets are often noisy, and thus impede the performance of SLU models.

Data Augmentation Denoising +1

An Approach to Mispronunciation Detection and Diagnosis with Acoustic, Phonetic and Linguistic (APL) Embeddings

no code implementations14 Oct 2021 Wenxuan Ye, Shaoguang Mao, Frank Soong, Wenshan Wu, Yan Xia, Jonathan Tien, Zhiyong Wu

These embeddings, when used as implicit phonetic supplementary information, can alleviate the data shortage of explicit phoneme annotations.

Tackling Spoofing-Aware Speaker Verification with Multi-Model Fusion

no code implementations18 Jun 2022 Haibin Wu, Jiawen Kang, Lingwei Meng, Yang Zhang, Xixin Wu, Zhiyong Wu, Hung-Yi Lee, Helen Meng

However, previous works show that state-of-the-art ASV models are seriously vulnerable to voice spoofing attacks, and the recently proposed high-performance spoofing countermeasure (CM) models focus solely on the standalone anti-spoofing task, ignoring the subsequent speaker verification process.

Open-Ended Question Answering Speaker Verification

Ordinal Regression via Binary Preference vs Simple Regression: Statistical and Experimental Perspectives

no code implementations6 Jul 2022 Bin Su, Shaoguang Mao, Frank Soong, Zhiyong Wu

The ORARS addresses the MOS prediction problem by pairing a test sample with each of the pre-scored anchored reference samples.

regression
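A minimal sketch of the anchored-comparison idea (the pairwise preference model, anchor set, and aggregation rule here are placeholders for the paper's actual design): compare the test sample against every pre-scored anchor with a binary preference model, then convert its win rate into a score by reading off the anchors' score distribution.

```python
import numpy as np

def score_with_anchors(test_feat, anchor_feats, anchor_scores, prefer):
    """Predict a quality score for `test_feat` from pairwise preferences.

    `prefer(a, b)` is a (hypothetical) binary model returning 1 if `a` is judged
    better than `b`; the score is the anchor-score quantile at the win rate.
    """
    wins = np.array([prefer(test_feat, a) for a in anchor_feats], dtype=float)
    win_rate = wins.mean()                      # fraction of anchors beaten
    return float(np.quantile(anchor_scores, win_rate))

# Toy usage: 200 anchors with known MOS in [1, 5] and a dummy preference model.
rng = np.random.default_rng(0)
anchors = rng.standard_normal((200, 16))
anchor_mos = rng.uniform(1.0, 5.0, size=200)
dummy_prefer = lambda a, b: int(a.mean() > b.mean())
print(score_with_anchors(rng.standard_normal(16), anchors, anchor_mos, dummy_prefer))
```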

Towards Cross-speaker Reading Style Transfer on Audiobook Dataset

no code implementations10 Aug 2022 Xiang Li, Changhe Song, Xianhao Wei, Zhiyong Wu, Jia Jia, Helen Meng

This paper aims to introduce a chunk-wise multi-scale cross-speaker style model to capture both the global genre and the local prosody in audiobook speeches.

Style Transfer

Unsupervised Multi-scale Expressive Speaking Style Modeling with Hierarchical Context Information for Audiobook Speech Synthesis

no code implementations COLING 2022 Xueyuan Chen, Shun Lei, Zhiyong Wu, Dong Xu, Weifeng Zhao, Helen Meng

On top of these, a bi-reference attention mechanism is used to align both local-scale reference style embedding sequence and local-scale context style embedding sequence with corresponding phoneme embedding sequence.

Speech Synthesis

Disentangled Speech Representation Learning for One-Shot Cross-lingual Voice Conversion Using $β$-VAE

no code implementations25 Oct 2022 Hui Lu, Disong Wang, Xixin Wu, Zhiyong Wu, Xunying Liu, Helen Meng

We propose an unsupervised learning method to disentangle speech into content representation and speaker identity representation.

Disentanglement Voice Conversion
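For reference, the \(\beta\)-VAE objective underlying this disentanglement approach re-weights the KL term of the standard evidence lower bound by a factor \(\beta > 1\), trading some reconstruction fidelity for more factorized latents:

\[
\mathcal{L}_{\beta\text{-VAE}} \;=\; \mathbb{E}_{q_\phi(z \mid x)}\!\big[\log p_\theta(x \mid z)\big] \;-\; \beta\, D_{\mathrm{KL}}\!\big(q_\phi(z \mid x)\,\|\,p(z)\big),
\]

maximized over encoder parameters \(\phi\) and decoder parameters \(\theta\); how the latent is split into content and speaker-identity parts is specific to the paper.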

Unsupervised Explanation Generation via Correct Instantiations

no code implementations21 Nov 2022 Sijie Cheng, Zhiyong Wu, Jiangjie Chen, Zhixing Li, Yang Liu, Lingpeng Kong

The major difficulty is finding the conflict point, where the statement contradicts our real world.

Explanation Generation

Lexicon-injected Semantic Parsing for Task-Oriented Dialog

no code implementations26 Nov 2022 Xiaojun Meng, Wenlin Dai, Yasheng Wang, Baojun Wang, Zhiyong Wu, Xin Jiang, Qun Liu

Then we present a novel lexicon-injected semantic parser, which collects slot labels of the tree representation as a lexicon and injects lexical features into the span representation of the parser.

Semantic Parsing

Adversarial Speaker Disentanglement Using Unannotated External Data for Self-supervised Representation Based Voice Conversion

no code implementations16 May 2023 Xintao Zhao, Shuai Wang, Yang Chao, Zhiyong Wu, Helen Meng

Experimental results show that our proposed method achieves similarity comparable to, and naturalness higher than, a supervised method that needs a huge amount of annotated corpora for training, and that it is applicable to improving similarity for VC methods with other SSL representations as input.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +4

ZeroPrompt: Streaming Acoustic Encoders are Zero-Shot Masked LMs

no code implementations18 May 2023 Xingchen Song, Di wu, BinBin Zhang, Zhendong Peng, Bo Dang, Fuping Pan, Zhiyong Wu

In this paper, we present ZeroPrompt (Figure 1-(a)) and the corresponding Prompt-and-Refine strategy (Figure 3), two simple but effective training-free methods to decrease the Token Display Time (TDT) of streaming ASR models without any accuracy loss.

VAST: Vivify Your Talking Avatar via Zero-Shot Expressive Facial Style Transfer

no code implementations9 Aug 2023 Liyang Chen, Zhiyong Wu, Runnan Li, Weihong Bao, Jun Ling, Xu Tan, Sheng Zhao

With our essential designs on facial style learning, our model is able to flexibly capture the expressive facial style from arbitrary video prompts and transfer it onto a personalized image renderer in a zero-shot manner.

Style Transfer Talking Face Generation

Towards Improving the Expressiveness of Singing Voice Synthesis with BERT Derived Semantic Information

no code implementations31 Aug 2023 Shaohuan Zhou, Shun Lei, Weiya You, Deyi Tuo, Yuren You, Zhiyong Wu, Shiyin Kang, Helen Meng

This paper presents an end-to-end high-quality singing voice synthesis (SVS) system that uses bidirectional encoder representation from Transformers (BERT) derived semantic embeddings to improve the expressiveness of the synthesized singing voice.

Singing Voice Synthesis

Improving Mandarin Prosodic Structure Prediction with Multi-level Contextual Information

no code implementations31 Aug 2023 Jie Chen, Changhe Song, Deyi Tuo, Xixin Wu, Shiyin Kang, Zhiyong Wu, Helen Meng

For text-to-speech (TTS) synthesis, prosodic structure prediction (PSP) plays an important role in producing natural and intelligible speech.

Multi-Task Learning

Enhancing the vocal range of single-speaker singing voice synthesis with melody-unsupervised pre-training

no code implementations1 Sep 2023 Shaohuan Zhou, Xu Li, Zhiyong Wu, Ying Shan, Helen Meng

Specifically, in the pre-training step, we design a phoneme predictor to produce the frame-level phoneme probability vectors as the phonemic timing information and a speaker encoder to model the timbre variations of different singers, and directly estimate the frame-level f0 values from the audio to provide the pitch information.

Singing Voice Synthesis Unsupervised Pre-training

Text-Only Domain Adaptation for End-to-End Speech Recognition through Down-Sampling Acoustic Representation

no code implementations4 Sep 2023 Jiaxu Zhu, Weinan Tong, Yaoxun Xu, Changhe Song, Zhiyong Wu, Zhao You, Dan Su, Dong Yu, Helen Meng

Mapping the two modalities, speech and text, into a shared representation space is a research direction that uses text-only data to improve end-to-end automatic speech recognition (ASR) performance in new domains.

Automatic Speech Recognition Automatic Speech Recognition (ASR) +2

SnakeGAN: A Universal Vocoder Leveraging DDSP Prior Knowledge and Periodic Inductive Bias

no code implementations14 Sep 2023 Sipan Li, Songxiang Liu, Luwen Zhang, Xiang Li, Yanyao Bian, Chao Weng, Zhiyong Wu, Helen Meng

However, it is still challenging to train a universal vocoder which can generalize well to out-of-domain (OOD) scenarios, such as unseen speaking styles, non-speech vocalization, singing, and musical pieces.

Audio Synthesis Generative Adversarial Network +1

A Discourse-level Multi-scale Prosodic Model for Fine-grained Emotion Analysis

no code implementations21 Sep 2023 Xianhao Wei, Jia Jia, Xiang Li, Zhiyong Wu, Ziyi Wang

More interestingly, although we aim at the synthesis effect of the style transfer model, the speech synthesized by the proposed text prosodic analysis model is even better than style transfer from the original speech on some user evaluation indicators.

Emotion Recognition Speech Synthesis +1

AdaMesh: Personalized Facial Expressions and Head Poses for Adaptive Speech-Driven 3D Facial Animation

no code implementations11 Oct 2023 Liyang Chen, Weihong Bao, Shun Lei, Boshi Tang, Zhiyong Wu, Shiyin Kang, HaoZhi Huang

Existing works mostly neglect the person-specific talking style in generation, including facial expression and head pose styles.

DASA: Difficulty-Aware Semantic Augmentation for Speaker Verification

no code implementations18 Oct 2023 Yuanyuan Wang, Yang Zhang, Zhiyong Wu, Zhihan Yang, Tao Wei, Kun Zou, Helen Meng

Existing augmentation methods for speaker verification manipulate the raw signal, which is time-consuming, and the augmented samples lack diversity.

Data Augmentation Speaker Verification

How Multilingual is Multilingual LLM?

no code implementations15 Nov 2023 Fei Yuan, Shuai Yuan, Zhiyong Wu, Lei LI

Large Language Models (LLMs), trained predominantly on extensive English data, often exhibit limitations when applied to other languages.

Symbol-LLM: Towards Foundational Symbol-centric Interface For Large Language Models

no code implementations15 Nov 2023 Fangzhi Xu, Zhiyong Wu, Qiushi Sun, Siyu Ren, Fei Yuan, Shuai Yuan, Qika Lin, Yu Qiao, Jun Liu

Although Large Language Models (LLMs) demonstrate remarkable ability in processing and generating human-like text, they do have limitations when it comes to comprehending and expressing world knowledge that extends beyond the boundaries of natural language (e.g., chemical molecular formulas).

World Knowledge

Stable Score Distillation for High-Quality 3D Generation

no code implementations14 Dec 2023 Boshi Tang, Jianan Wang, Zhiyong Wu, Lei Zhang

Although Score Distillation Sampling (SDS) has exhibited remarkable performance in conditional 3D content generation, a comprehensive understanding of its formulation is still lacking, hindering the development of 3D generation.
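For background, the Score Distillation Sampling formulation analyzed here (as introduced for text-to-3D generation) optimizes the parameters \(\theta\) of a differentiable renderer \(x = g(\theta)\) by following a gradient that bypasses the diffusion model's Jacobian:

\[
\nabla_\theta \mathcal{L}_{\mathrm{SDS}} \;=\; \mathbb{E}_{t,\epsilon}\!\left[ w(t)\,\big(\hat{\epsilon}_\phi(x_t;\, y, t) - \epsilon\big)\, \frac{\partial x}{\partial \theta} \right],
\]

where \(x_t\) is a noised rendering, \(\hat{\epsilon}_\phi\) the text-conditioned noise prediction, \(\epsilon\) the injected noise, \(y\) the prompt, and \(w(t)\) a timestep weighting.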

Explore 3D Dance Generation via Reward Model from Automatically-Ranked Demonstrations

no code implementations18 Dec 2023 Zilin Wang, Haolin Zhuang, Lu Li, Yinmin Zhang, Junjie Zhong, Jun Chen, Yu Yang, Boshi Tang, Zhiyong Wu

This paper presents an Exploratory 3D Dance generation framework, E3D2, designed to address the exploration capability deficiency in existing music-conditioned 3D dance generation models.

StyleSpeech: Self-supervised Style Enhancing with VQ-VAE-based Pre-training for Expressive Audiobook Speech Synthesis

no code implementations19 Dec 2023 Xueyuan Chen, Xi Wang, Shaofei Zhang, Lei He, Zhiyong Wu, Xixin Wu, Helen Meng

Both objective and subjective evaluations demonstrate that our proposed method can effectively improve the naturalness and expressiveness of the synthesized speech in audiobook synthesis especially for the role and out-of-domain scenarios.

Speech Synthesis

SimCalib: Graph Neural Network Calibration based on Similarity between Nodes

no code implementations19 Dec 2023 Boshi Tang, Zhiyong Wu, Xixin Wu, Qiaochu Huang, Jun Chen, Shun Lei, Helen Meng

A novel calibration framework, named SimCalib, is accordingly proposed to consider similarity between nodes at global and local levels.

Consistent and Relevant: Rethink the Query Embedding in General Sound Separation

no code implementations24 Dec 2023 Yuanyuan Wang, Hangting Chen, Dongchao Yang, Jianwei Yu, Chao Weng, Zhiyong Wu, Helen Meng

In this paper, we present CaRE-SEP, a consistent and relevant embedding network for general sound separation to encourage a comprehensive reconsideration of query usage in audio separation.

Freetalker: Controllable Speech and Text-Driven Gesture Generation Based on Diffusion Models for Enhanced Speaker Naturalness

no code implementations7 Jan 2024 Sicheng Yang, Zunnan Xu, Haiwei Xue, Yongkang Cheng, Shaoli Huang, Mingming Gong, Zhiyong Wu

To tackle these issues, we introduce FreeTalker, which, to the best of our knowledge, is the first framework for the generation of both spontaneous (e.g., co-speech gesture) and non-spontaneous (e.g., moving around the podium) speaker motions.

Gesture Generation

Multi-view MidiVAE: Fusing Track- and Bar-view Representations for Long Multi-track Symbolic Music Generation

no code implementations15 Jan 2024 Zhiwei Lin, Jun Chen, Boshi Tang, Binzhu Sha, Jing Yang, Yaolong Ju, Fan Fan, Shiyin Kang, Zhiyong Wu, Helen Meng

Variational Autoencoders (VAEs) constitute a crucial component of neural symbolic music generation, among which some works have yielded outstanding results and attracted considerable attention.

Music Generation

SCNet: Sparse Compression Network for Music Source Separation

no code implementations24 Jan 2024 Weinan Tong, Jiaxu Zhu, Jun Chen, Shiyin Kang, Tao Jiang, Yang Li, Zhiyong Wu, Helen Meng

We use a higher compression ratio on subbands with less information to improve the information density and focus on modeling subbands with more information.

Music Source Separation

TDAG: A Multi-Agent Framework based on Dynamic Task Decomposition and Agent Generation

no code implementations15 Feb 2024 Yaoxiang Wang, Zhiyong Wu, Junfeng Yao, Jinsong Su

The emergence of Large Language Models (LLMs) like ChatGPT has inspired the development of LLM-based agents capable of addressing complex, real-world tasks.

LLM as Prompter: Low-resource Inductive Reasoning on Arbitrary Knowledge Graphs

no code implementations19 Feb 2024 Kai Wang, Yuwei Xu, Zhiyong Wu, Siqiang Luo

Knowledge Graph (KG) inductive reasoning, which aims to infer missing facts from new KGs that are not seen during training, has been widely adopted in various applications.

Knowledge Graphs
