The NeteaseGames System for Voice Conversion Challenge 2020 with Vector-quantization Variational Autoencoder and WaveNet

15 Oct 2020  ·  Haitong Zhang ·

This paper describes our submitted system for the Voice Conversion Challenge (VCC) 2020: a vector-quantization variational autoencoder (VQ-VAE) with WaveNet as the decoder, i.e., VQ-VAE-WaveNet. VQ-VAE-WaveNet is a nonparallel VAE-based voice conversion system that reconstructs acoustic features while disentangling linguistic content from speaker identity. The model is further improved by adopting WaveNet as the decoder to generate high-quality speech waveforms, since WaveNet, an autoregressive neural vocoder, has achieved state-of-the-art results in waveform generation. In practice, our system can be trained on the VCC 2020 dataset for both Task 1 (intra-lingual) and Task 2 (cross-lingual); however, we submitted our system only for the intra-lingual voice conversion task. The VCC 2020 results show that VQ-VAE-WaveNet achieves a mean opinion score (MOS) of 3.04 in naturalness and an average score of 3.28 in similarity (a speaker similarity percentage (Sim) of 75.99%) for Task 1. The subjective evaluations also reveal that our system gives top performance among systems that involve no supervised learning. Moreover, our system performs well in several objective evaluations: it achieves an average naturalness score of 3.95 in automatic naturalness prediction, and ranks 6th and 8th, respectively, in ASV-based speaker similarity and spoofing countermeasures.
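To make the disentanglement idea concrete, the core of a VQ-VAE is a vector-quantization step that snaps each continuous encoder output frame to its nearest codebook entry, so the decoder receives only discrete (roughly linguistic) indices plus a separately supplied speaker identity. The sketch below illustrates that step only; the codebook size, dimensions, and names are assumptions for illustration, not the authors' actual configuration.

```python
# Minimal sketch of the vector-quantization step at the heart of a VQ-VAE.
# Illustrative only: codebook size, embedding dimension, and function names
# are assumed, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

num_codes, code_dim = 64, 8                    # assumed codebook size / embedding dim
codebook = rng.normal(size=(num_codes, code_dim))

def quantize(z_e):
    """Map each encoder output frame to its nearest codebook vector.

    z_e: (T, code_dim) continuous encoder outputs.
    Returns (z_q, indices): quantized vectors and the chosen code indices.
    The discrete indices approximate linguistic content; speaker identity
    is fed to the decoder separately (a WaveNet in the paper's system).
    """
    # Squared Euclidean distance between every frame and every codebook entry.
    dists = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    indices = dists.argmin(axis=1)
    return codebook[indices], indices

z_e = rng.normal(size=(10, code_dim))          # 10 frames of encoder output
z_q, idx = quantize(z_e)
print(z_q.shape, idx.shape)                    # (10, 8) (10,)
```

In training, the non-differentiable argmin is typically handled with a straight-through gradient estimator plus codebook and commitment losses; those details are omitted here for brevity.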


Categories


Sound · Audio and Speech Processing