Search Results for author: Zhiqing Hong

Found 11 papers, 5 papers with code

Variational Language Concepts for Interpreting Foundation Language Models

1 code implementation • 4 Oct 2024 • Hengyi Wang, Shiwei Tan, Zhiqing Hong, Desheng Zhang, Hao Wang

Foundation Language Models (FLMs) such as BERT and its variants have achieved remarkable success in natural language processing.

GTSinger: A Global Multi-Technique Singing Corpus with Realistic Music Scores for All Singing Tasks

2 code implementations • 20 Sep 2024 • Yu Zhang, Changhao Pan, Wenxiang Guo, RuiQi Li, Zhiyuan Zhu, Jialei Wang, Wenhao Xu, Jingyu Lu, Zhiqing Hong, Chuxin Wang, Lichao Zhang, Jinzheng He, Ziyue Jiang, Yuxin Chen, Chen Yang, Jiecheng Zhou, Xinyu Cheng, Zhou Zhao

The scarcity of high-quality, multi-task singing datasets significantly hinders the development of diverse, controllable, and personalized singing tasks: existing singing datasets suffer from low quality, limited diversity of languages and singers, absence of multi-technique information and realistic music scores, and poor task suitability.

Singing Voice Synthesis • Style Transfer +1

Self-Supervised Singing Voice Pre-Training towards Speech-to-Singing Conversion

no code implementations • 4 Jun 2024 • RuiQi Li, Rongjie Huang, Yongqi Wang, Zhiqing Hong, Zhou Zhao

We adopt discrete-unit random resampling and pitch corruption strategies, enabling training with unpaired singing data and thus mitigating the issue of data scarcity.

In-Context Learning • Language Modeling +4

Robust Singing Voice Transcription Serves Synthesis

no code implementations • 16 May 2024 • RuiQi Li, Yu Zhang, Yongqi Wang, Zhiqing Hong, Rongjie Huang, Zhou Zhao

Note-level Automatic Singing Voice Transcription (AST) converts singing recordings into note sequences, facilitating the automatic annotation of singing datasets for Singing Voice Synthesis (SVS) applications.

Decoder • Singing Voice Synthesis

Prompt-Singer: Controllable Singing-Voice-Synthesis with Natural Language Prompt

1 code implementation • 18 Mar 2024 • Yongqi Wang, Ruofan Hu, Rongjie Huang, Zhiqing Hong, RuiQi Li, Wenrui Liu, Fuming You, Tao Jin, Zhou Zhao

Recent singing-voice-synthesis (SVS) methods have achieved remarkable audio quality and naturalness, yet they lack the capability to control the style attributes of the synthesized singing explicitly.

Attribute • Decoder +1

Where have you been? A Study of Privacy Risk for Point-of-Interest Recommendation

1 code implementation • 28 Oct 2023 • Kunlin Cai, Jinghuai Zhang, Zhiqing Hong, Will Shand, Guang Wang, Desheng Zhang, Jianfeng Chi, Yuan Tian

To better understand and quantify the privacy leakage in mobility data-based ML models, we design a privacy attack suite containing data extraction and membership inference attacks tailored for point-of-interest (POI) recommendation models, one of the most widely used mobility data-based ML models.

Speech-to-Speech Translation with Discrete-Unit-Based Style Transfer

no code implementations • 14 Sep 2023 • Yongqi Wang, Jionghao Bai, Rongjie Huang, RuiQi Li, Zhiqing Hong, Zhou Zhao

The acoustic language model we introduce for style transfer leverages self-supervised in-context learning, acquiring style transfer ability without relying on any speaker-parallel data, thereby overcoming data scarcity.

In-Context Learning • Language Modeling +4

AudioGPT: Understanding and Generating Speech, Music, Sound, and Talking Head

1 code implementation • 25 Apr 2023 • Rongjie Huang, Mingze Li, Dongchao Yang, Jiatong Shi, Xuankai Chang, Zhenhui Ye, Yuning Wu, Zhiqing Hong, Jiawei Huang, Jinglin Liu, Yi Ren, Zhou Zhao, Shinji Watanabe

In this work, we propose a multi-modal AI system named AudioGPT, which complements LLMs (i.e., ChatGPT) with 1) foundation models to process complex audio information and solve numerous understanding and generation tasks; and 2) an input/output interface (ASR, TTS) to support spoken dialogue.
