4 code implementations • 23 Sep 2017 • Yuki Saito, Shinnosuke Takamichi, Hiroshi Saruwatari
In the proposed framework incorporating GANs, the discriminator is trained to distinguish natural from generated speech parameters, while the acoustic models are trained to minimize the weighted sum of the conventional minimum generation loss and an adversarial loss for deceiving the discriminator.
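The weighted objective can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the use of MSE as the generation loss, and the cross-entropy form of the adversarial term are assumptions for clarity.

```python
import numpy as np

def acoustic_model_loss(y_nat, y_gen, d_score_gen, w_adv=1.0):
    """Weighted sum of a minimum generation loss (MSE here, as a stand-in)
    and an adversarial loss that rewards fooling the discriminator.

    y_nat, y_gen : natural / generated speech-parameter vectors
    d_score_gen  : discriminator's probability that y_gen is natural
    w_adv        : weight on the adversarial term (a hyperparameter)
    """
    mge_loss = np.mean((y_nat - y_gen) ** 2)   # minimum generation error
    adv_loss = -np.log(d_score_gen + 1e-12)    # penalty for being detected as fake
    return mge_loss + w_adv * adv_loss

# A generated sample the discriminator already believes is natural
# contributes almost no adversarial penalty:
loss = acoustic_model_loss(np.array([1.0, 2.0]), np.array([1.0, 2.0]), 0.99)
```

The adversarial term pushes generated parameters toward the natural distribution even when the MSE term alone would over-smooth them.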
1 code implementation • 30 Jan 2023 • Takaaki Saeki, Soumi Maiti, Xinjian Li, Shinji Watanabe, Shinnosuke Takamichi, Hiroshi Saruwatari
While neural text-to-speech (TTS) has achieved human-like natural synthetic speech, multilingual TTS systems are limited to resource-rich languages due to the need for paired text and studio-quality audio data.
1 code implementation • 28 Jan 2020 • Tomohiko Nakamura, Hiroshi Saruwatari
Building on this idea, we design the proposed layers by exploiting the fact that the DWT has an anti-aliasing filter and the perfect reconstruction property.
1 code implementation • 29 Nov 2022 • Tomohiko Nakamura, Shinnosuke Takamichi, Naoko Tanji, Satoru Fukayama, Hiroshi Saruwatari
These songs were arranged from out-of-copyright Japanese children's songs and have six voice parts (lead vocal, soprano, alto, tenor, bass, and vocal percussion).
Ranked #1 on Vocal ensemble separation on jaCappella
1 code implementation • 14 Oct 2022 • Yuta Matsunaga, Takaaki Saeki, Shinnosuke Takamichi, Hiroshi Saruwatari
We present a comprehensive empirical study for personalized spontaneous speech synthesis on the basis of linguistic knowledge.
2 code implementations • 10 Jul 2018 • Shinnosuke Takamichi, Yuki Saito, Norihiro Takamune, Daichi Kitamura, Hiroshi Saruwatari
This paper presents deep neural network (DNN)-based phase reconstruction from amplitude spectrograms.
1 code implementation • 10 May 2021 • Koichi Saito, Tomohiko Nakamura, Kohei Yatabe, Yuma Koizumi, Hiroshi Saruwatari
Audio source separation is often used as preprocessing of various applications, and one of its ultimate goals is to construct a single versatile model capable of dealing with the varieties of audio signals.
1 code implementation • 28 Oct 2017 • Ryosuke Sonobe, Shinnosuke Takamichi, Hiroshi Saruwatari
Thanks to improvements in machine learning techniques, including deep learning, a free large-scale speech corpus that can be shared between academic institutions and commercial companies plays an important role.
no code implementations • 10 Apr 2017 • Hiroyuki Miyoshi, Yuki Saito, Shinnosuke Takamichi, Hiroshi Saruwatari
Conventional VC using shared context posterior probabilities predicts target speech parameters from the context posterior probabilities estimated from the source speech parameters.
no code implementations • 12 Apr 2017 • Shinnosuke Takamichi, Tomoki Koriyama, Hiroshi Saruwatari
To give synthetic speech natural inter-utterance variation, this paper builds DNN acoustic models that make it possible to randomly sample speech parameters.
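Random sampling of speech parameters can be sketched as drawing from a per-frame Gaussian predicted by the acoustic model. The function name and the diagonal-Gaussian parameterization are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def sample_speech_params(mean, log_var, rng, temperature=1.0):
    """Draw speech parameters from the Gaussian predicted by the DNN
    instead of outputting the mean, so each utterance varies."""
    std = np.exp(0.5 * log_var) * temperature
    return mean + std * rng.standard_normal(mean.shape)

rng = np.random.default_rng(0)
mean = np.zeros(5)
log_var = np.full(5, -2.0)
s1 = sample_speech_params(mean, log_var, rng)
s2 = sample_speech_params(mean, log_var, rng)
# Two draws differ (inter-utterance variation), while temperature=0
# collapses back to the conventional deterministic mean output:
det = sample_speech_params(mean, log_var, rng, temperature=0.0)
```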
no code implementations • 9 Feb 2019 • Hiroki Tamaru, Yuki Saito, Shinnosuke Takamichi, Tomoki Koriyama, Hiroshi Saruwatari
To address this problem, we use a GMMN to model the variation of the modulation spectrum of the pitch contour of natural singing voices and add a randomized inter-utterance variation to the pitch contour generated by conventional DNN-based singing voice synthesis.
no code implementations • 19 Jul 2019 • Yuki Saito, Shinnosuke Takamichi, Hiroshi Saruwatari
Although conventional DNN-based speaker embeddings such as a $d$-vector can be applied to multi-speaker modeling in speech synthesis, they do not correlate with subjective inter-speaker similarity and are not necessarily an appropriate speaker representation for open speakers whose speech utterances are not included in the training data.
no code implementations • 5 Aug 2019 • Taiki Nakamura, Yuki Saito, Shinnosuke Takamichi, Yusuke Ijima, Hiroshi Saruwatari
The experimental evaluation compares converted voices between the proposed method that does not use the targeted speaker's voice data and the standard VC that uses the data.
no code implementations • 25 Sep 2019 • Kazuki Fujii, Yuki Saito, Shinnosuke Takamichi, Yukino Baba, Hiroshi Saruwatari
To model the human-acceptable distribution, we formulate a backpropagation-based generator training algorithm by regarding human perception as a black-boxed discriminator.
no code implementations • 22 Apr 2020 • Tomoki Koriyama, Hiroshi Saruwatari
This paper presents a deep Gaussian process (DGP) model with a recurrent architecture for speech sequence modeling.
no code implementations • LREC 2020 • Yuki Yamashita, Tomoki Koriyama, Yuki Saito, Shinnosuke Takamichi, Yusuke Ijima, Ryo Masumura, Hiroshi Saruwatari
In this paper, we investigate the effectiveness of using rich annotations in deep neural network (DNN)-based statistical speech synthesis.
no code implementations • LREC 2020 • Yuki Saito, Shinnosuke Takamichi, Hiroshi Saruwatari
Developing a spontaneous speech corpus would be beneficial for spoken language processing and understanding.
no code implementations • 7 Aug 2020 • Kentaro Mitsui, Tomoki Koriyama, Hiroshi Saruwatari
We propose a framework for multi-speaker speech synthesis using deep Gaussian processes (DGPs); a DGP is a deep architecture of Bayesian kernel regressions and thus robust to overfitting.
no code implementations • 8 Feb 2021 • Yota Ueda, Kazuki Fujii, Yuki Saito, Shinnosuke Takamichi, Yukino Baba, Hiroshi Saruwatari
A DNN-based generator is trained using a human-based discriminator, i.e., humans' perceptual evaluations, instead of the GAN's DNN-based discriminator.
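Because no gradient can be backpropagated through human raters, the generator's gradient must be approximated from score queries alone. Below is a generic perturbation-based (two-sided) estimator as one way to do this; the function names and the toy score are assumptions, not the papers' exact algorithm.

```python
import numpy as np

def estimate_gradient(generator_out, black_box_score, n_dir=64, sigma=0.1, rng=None):
    """Estimate d(score)/d(output) for a black-box discriminator
    (standing in for human perceptual evaluations) by averaging
    two-sided random perturbations of the generator output."""
    rng = rng or np.random.default_rng(0)
    grad = np.zeros_like(generator_out)
    for _ in range(n_dir):
        eps = rng.standard_normal(generator_out.shape)
        delta = black_box_score(generator_out + sigma * eps) - \
                black_box_score(generator_out - sigma * eps)
        grad += delta * eps / (2 * sigma)
    return grad / n_dir

# Toy "perceptual" score peaking at the origin; the estimated gradient
# should point from the current output toward the peak.
score = lambda x: -np.sum(x ** 2)
g = estimate_gradient(np.array([1.0, -1.0]), score)
```

In practice each query would correspond to a batch of listening-test ratings, so the number of perturbation directions is a key cost parameter.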
no code implementations • 15 Sep 2021 • Naoto Iijima, Shoichi Koyama, Hiroshi Saruwatari
To reproduce binaural signals from microphone array recordings at a remote location, a spherical microphone array is generally used for capturing a sound field.
no code implementations • 22 Sep 2021 • Takaaki Saeki, Shinnosuke Takamichi, Hiroshi Saruwatari
Although this method achieves comparable speech quality to that of a method that waits for the future context, it entails a huge amount of processing for sampling from the language model at each time step.
no code implementations • 1 Feb 2022 • Masaya Kawamura, Tomohiko Nakamura, Daichi Kitamura, Hiroshi Saruwatari, Yu Takahashi, Kazunobu Kondo
A differentiable digital signal processing (DDSP) autoencoder is a musical sound synthesizer that combines a deep neural network (DNN) and spectral modeling synthesis.
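The deterministic part of spectral modeling synthesis, which a DDSP decoder's DNN controls, is additive harmonic synthesis. The following is a minimal fixed-parameter sketch (constant f0 and harmonic amplitudes are simplifying assumptions; a DDSP model predicts them per frame).

```python
import numpy as np

def harmonic_synth(f0, amps, sr=16000, duration=0.1):
    """Additive synthesis: a sum of sinusoids at integer multiples
    of the fundamental frequency f0, weighted by harmonic amplitudes."""
    t = np.arange(int(sr * duration)) / sr
    signal = np.zeros_like(t)
    for k, a in enumerate(amps, start=1):
        signal += a * np.sin(2 * np.pi * k * f0 * t)
    return signal

# A 220 Hz tone with three harmonics of decaying amplitude:
y = harmonic_synth(220.0, amps=[1.0, 0.5, 0.25])
```

Because every operation here is differentiable, the same synthesizer can sit at the end of a DNN and be trained end-to-end, which is the core idea of DDSP.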
no code implementations • 10 Feb 2022 • Kazuyuki Arikawa, Shoichi Koyama, Hiroshi Saruwatari
A spatial active noise control (ANC) method based on the individual kernel interpolation of primary and secondary sound fields is proposed.
no code implementations • 28 Mar 2022 • Yuki Saito, Yuto Nishimura, Shinnosuke Takamichi, Kentaro Tachibana, Hiroshi Saruwatari
We describe our methodology to construct an empathetic dialogue speech corpus and report the analysis results of the STUDIES corpus.
no code implementations • 5 May 2022 • Juliano G. C. Ribeiro, Shoichi Koyama, Hiroshi Saruwatari
A method of interpolating the acoustic transfer function (ATF) between regions that takes into account both the physical properties of the ATF and the directionality of region configurations is proposed.
no code implementations • 16 Jun 2022 • Yuto Nishimura, Yuki Saito, Shinnosuke Takamichi, Kentaro Tachibana, Hiroshi Saruwatari
To train the empathetic DSS model effectively, we investigate 1) a self-supervised learning model pretrained with large speech corpora, 2) a style-guided training using a prosody embedding of the current utterance to be predicted by the dialogue context embedding, 3) a cross-modal attention to combine text and speech modalities, and 4) a sentence-wise embedding to achieve fine-grained prosody modeling rather than utterance-wise modeling.
no code implementations • 21 Jun 2022 • Kenta Udagawa, Yuki Saito, Hiroshi Saruwatari
With a conventional speaker-adaptation method, a target speaker's embedding vector is extracted from his/her reference speech using a speaker encoder trained on a speaker-discriminative task.
no code implementations • 26 Sep 2022 • Yusuke Nakai, Yuki Saito, Kenta Udagawa, Hiroshi Saruwatari
A conventional generative adversarial network (GAN)-based training algorithm significantly improves the quality of synthetic speech by reducing the statistical difference between natural and synthetic speech.
no code implementations • 27 Sep 2022 • Futa Nakashima, Tomohiko Nakamura, Norihiro Takamune, Satoru Fukayama, Hiroshi Saruwatari
In this paper, we propose a musical instrument sound synthesis (MISS) method based on a variational autoencoder (VAE) that has a hierarchy-inducing latent space for timbre.
no code implementations • 27 Feb 2023 • Dong Yang, Tomoki Koriyama, Yuki Saito, Takaaki Saeki, Detai Xin, Hiroshi Saruwatari
We also leverage duration-aware pause insertion for more natural multi-speaker TTS.
no code implementations • 7 Mar 2023 • Juliano G. C. Ribeiro, Shoichi Koyama, Hiroshi Saruwatari
An interpolation method for region-to-region acoustic transfer functions (ATFs) based on kernel ridge regression with an adaptive kernel is proposed.
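The basic machinery behind kernel-based ATF interpolation is kernel ridge regression. This sketch uses a fixed 1-D Gaussian kernel on a toy function; the paper's adaptive kernel and the region-to-region formulation are not reproduced here.

```python
import numpy as np

def kernel_ridge_interpolate(x_obs, y_obs, x_query, length=0.2, reg=1e-8):
    """Kernel ridge regression with a Gaussian kernel: fit weights on
    observed points, then evaluate the kernel expansion at query points."""
    k = lambda a, b: np.exp(-((a[:, None] - b[None, :]) ** 2) / (2 * length**2))
    K = k(x_obs, x_obs) + reg * np.eye(len(x_obs))  # regularized Gram matrix
    alpha = np.linalg.solve(K, y_obs)
    return k(x_query, x_obs) @ alpha

# Interpolate a smooth function from 20 scattered observations:
x_obs = np.linspace(0, 1, 20)
y_obs = np.sin(2 * np.pi * x_obs)
x_q = np.array([0.125, 0.375])
y_q = kernel_ridge_interpolate(x_obs, y_obs, x_q)
```

The choice of kernel encodes prior knowledge about the field; replacing the generic Gaussian kernel with one derived from the Helmholtz equation is what makes such methods physically informed.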
no code implementations • 28 Mar 2023 • Kazuyuki Arikawa, Shoichi Koyama, Hiroshi Saruwatari
A spatial active noise control (ANC) method based on the interpolation of a sound field from reference microphone signals is proposed.
no code implementations • 29 Mar 2023 • Kazuyuki Arikawa, Shoichi Koyama, Hiroshi Saruwatari
A spatial active noise control (ANC) method based on kernel interpolation of a sound field with exterior radiation suppression is proposed.
no code implementations • 23 May 2023 • Yuki Saito, Shinnosuke Takamichi, Eiji Iimori, Kentaro Tachibana, Hiroshi Saruwatari
We focus on ChatGPT's reading comprehension and apply it to EDSS, a task of synthesizing speech that can empathize with the interlocutor's emotion.
no code implementations • 23 May 2023 • Yuki Saito, Eiji Iimori, Shinnosuke Takamichi, Kentaro Tachibana, Hiroshi Saruwatari
We present CALLS, a Japanese speech corpus that considers phone calls in a customer center as a new domain of empathetic spoken dialogue.
no code implementations • 1 Jun 2023 • Joonyong Park, Shinnosuke Takamichi, Tomohiko Nakamura, Kentaro Seki, Detai Xin, Hiroshi Saruwatari
We examine the speech modeling potential of generative spoken language modeling (GSLM), which involves using learned symbols derived from data rather than phonemes for speech analysis and synthesis.
no code implementations • 15 Jun 2023 • Takaaki Kojima, Kazuyuki Arikawa, Shoichi Koyama, Hiroshi Saruwatari
A multichannel active noise control (ANC) method with exterior radiation suppression is proposed.
no code implementations • 26 Jul 2023 • Keisuke Kimura, Shoichi Koyama, Hiroshi Saruwatari
A sound field synthesis method enhancing perceptual quality is proposed.
no code implementations • 18 Sep 2023 • Shinnosuke Takamichi, Hiroki Maeda, Joonyong Park, Daisuke Saito, Hiroshi Saruwatari
In this study, we investigate whether speech symbols, learned through deep learning, follow Zipf's law, akin to natural language symbols.
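A Zipf's-law check boils down to fitting the slope of the rank-frequency curve on log-log axes. The sketch below verifies the procedure on a synthetic symbol sequence drawn from an exact Zipf distribution (the sequence and vocabulary size are illustrative assumptions, not the paper's data).

```python
import numpy as np
from collections import Counter

def zipf_exponent(symbols):
    """Fit log(frequency) = a - s*log(rank); under Zipf's law the
    fitted slope s is close to 1."""
    freqs = np.array(sorted(Counter(symbols).values(), reverse=True), float)
    ranks = np.arange(1, len(freqs) + 1)
    s, a = np.polyfit(np.log(ranks), np.log(freqs), 1)
    return -s

# Synthetic sequence from an exact Zipf (1/rank) distribution:
rng = np.random.default_rng(0)
n_types = 200
p = 1 / np.arange(1, n_types + 1)
p /= p.sum()
seq = rng.choice(n_types, size=50000, p=p)
s_hat = zipf_exponent(seq)
```

Applying the same fit to learned speech symbols, and comparing the exponent against text, is the kind of analysis the study performs.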
no code implementations • 11 Jan 2024 • Yoshihide Tomita, Shoichi Koyama, Hiroshi Saruwatari
A method for synthesizing the desired sound field while suppressing the exterior radiation power with directional weighting is proposed.
no code implementations • 4 Apr 2024 • Detai Xin, Xu Tan, Kai Shen, Zeqian Ju, Dongchao Yang, Yuancheng Wang, Shinnosuke Takamichi, Hiroshi Saruwatari, Shujie Liu, Jinyu Li, Sheng Zhao
Furthermore, we demonstrate that RALL-E correctly synthesizes sentences that are hard for VALL-E and reduces the error rate from $68\%$ to $4\%$.