Search Results for author: Kei Hashimoto

Found 10 papers, 2 papers with code

Sinsy: A Deep Neural Network-Based Singing Voice Synthesis System

1 code implementation · 5 Aug 2021 · Yukiya Hono, Kei Hashimoto, Keiichiro Oura, Yoshihiko Nankaku, Keiichi Tokuda

To better model a singing voice, the proposed system incorporates improved approaches to modeling pitch and vibrato and better training criteria into the acoustic model.

Singing Voice Synthesis
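The "improved approaches to modeling pitch and vibrato" can be illustrated with a generic parameterization (a sketch, not Sinsy's actual model): vibrato is commonly treated as a sinusoidal modulation of the F0 contour in cents. The rate, depth, and frame-shift defaults below are illustrative assumptions.

```python
import numpy as np

def apply_vibrato(f0, rate_hz=5.5, depth_cents=50.0, frame_shift=0.005):
    """Superimpose a sinusoidal vibrato on a frame-wise F0 contour (Hz).

    Hypothetical parameterization: rate_hz is the vibrato frequency,
    depth_cents the peak deviation in cents, frame_shift the frame
    period in seconds. Not the paper's model, just a common convention.
    """
    t = np.arange(len(f0)) * frame_shift
    cents = depth_cents * np.sin(2 * np.pi * rate_hz * t)
    # convert the cent deviation to a multiplicative factor on F0
    return f0 * 2.0 ** (cents / 1200.0)
```

A flat 100 Hz contour then oscillates between roughly 97.2 and 102.9 Hz (±50 cents).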

Embedding a Differentiable Mel-cepstral Synthesis Filter to a Neural Speech Synthesis System

1 code implementation · 21 Nov 2022 · Takenori Yoshimura, Shinji Takaki, Kazuhiro Nakamura, Keiichiro Oura, Yukiya Hono, Kei Hashimoto, Yoshihiko Nankaku, Keiichi Tokuda

This paper integrates a classic mel-cepstral synthesis filter into a modern neural speech synthesis system towards end-to-end controllable speech synthesis.

Speech Synthesis
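The mel-cepstral synthesis filter being embedded has a closed-form log-magnitude response, which is what makes it amenable to differentiable implementation. As a hedged sketch (standard mel-cepstrum conventions, not the paper's code), the amplitude spectrum can be evaluated from mel-cepstral coefficients on an all-pass-warped frequency grid; alpha = 0.42 is the usual choice approximating the mel scale at 16 kHz.

```python
import numpy as np

def mel_cepstrum_to_spectrum(c, alpha=0.42, n_fft=512):
    """Evaluate |H(e^{jw})| = exp(sum_m c_m cos(m * w~)) for real
    mel-cepstral coefficients c, where w~ is the phase of the
    first-order all-pass warping (standard mel-cepstral convention).
    """
    w = np.linspace(0.0, np.pi, n_fft // 2 + 1)
    # phase response of the all-pass element z~^{-1}
    w_warp = w + 2.0 * np.arctan(alpha * np.sin(w) / (1.0 - alpha * np.cos(w)))
    m = np.arange(len(c))
    # real part of sum_m c_m e^{-j m w~} for real c
    log_h = np.cos(np.outer(w_warp, m)) @ c
    return np.exp(log_h)
```

With c = [log 2, 0], the response is a constant gain of 2, as expected for a zeroth-order-only cepstrum.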

Singing voice synthesis based on convolutional neural networks

no code implementations · 15 Apr 2019 · Kazuhiro Nakamura, Kei Hashimoto, Keiichiro Oura, Yoshihiko Nankaku, Keiichi Tokuda

Then, an acoustic feature sequence of an arbitrary musical score is output in units of frames by the trained DNNs, and a natural trajectory of a singing voice is obtained by using a parameter generation algorithm.

Singing Voice Synthesis
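The "parameter generation algorithm" referred to is typically maximum-likelihood parameter generation (MLPG), which turns frame-wise predicted static and delta means into one smooth trajectory. A minimal sketch, assuming unit variances and an illustrative [-0.5, 0, 0.5] delta window (a simplification of the full algorithm):

```python
import numpy as np

def mlpg(static_mean, delta_mean):
    """Simplified MLPG: solve the normal equations W^T W c = W^T mu,
    where W stacks the identity (static window) and a delta window,
    and mu stacks the predicted static and delta means.
    Unit variances are assumed for brevity.
    """
    T = len(static_mean)
    D = np.zeros((T, T))
    for t in range(T):
        if t > 0:
            D[t, t - 1] = -0.5
        if t < T - 1:
            D[t, t + 1] = 0.5
    W = np.vstack([np.eye(T), D])
    mu = np.concatenate([static_mean, delta_mean])
    # least-squares trajectory balancing static targets and smoothness
    return np.linalg.solve(W.T @ W, W.T @ mu)
```

Feeding an oscillating static target with zero delta means yields a trajectory whose frame-to-frame variation is damped, which is the "natural trajectory" effect the excerpt describes.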

Fast and High-Quality Singing Voice Synthesis System based on Convolutional Neural Networks

no code implementations · 24 Oct 2019 · Kazuhiro Nakamura, Shinji Takaki, Kei Hashimoto, Keiichiro Oura, Yoshihiko Nankaku, Keiichi Tokuda

Singing voice synthesis systems based on deep neural networks (DNNs) are currently being proposed and are improving the naturalness of synthesized singing voices.

Singing Voice Synthesis

Hierarchical Multi-Grained Generative Model for Expressive Speech Synthesis

no code implementations · 17 Sep 2020 · Yukiya Hono, Kazuna Tsuboi, Kei Sawada, Kei Hashimoto, Keiichiro Oura, Yoshihiko Nankaku, Keiichi Tokuda

This framework consists of a multi-grained variational autoencoder, a conditional prior, and a multi-level auto-regressive latent converter to obtain the different time-resolution latent variables and sample the finer-level latent variables from the coarser-level ones by taking into account the input text.

Expressive Speech Synthesis · Text-To-Speech Synthesis
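The coarse-to-fine chain the excerpt describes (conditional prior, then a latent converter sampling finer-level latents from coarser ones plus the text) can be sketched as a toy linear-Gaussian hierarchy. All shapes and weights below are hypothetical; the actual model uses learned neural networks, not random linear maps.

```python
import numpy as np

def sample_hierarchy(text_emb, dims=(4, 8), seed=0):
    """Toy coarse-to-fine latent sampling (hypothetical shapes/weights).

    A coarse (e.g. utterance-level) latent is drawn from a prior whose
    mean is conditioned on the text embedding; a finer (e.g. phrase-level)
    latent is then drawn from a Gaussian whose mean depends on both the
    coarse latent and the text, mimicking the conditional-prior /
    latent-converter chain in spirit only.
    """
    rng = np.random.default_rng(seed)
    d_coarse, d_fine = dims
    # conditional prior: coarse mean is a linear map of the text embedding
    w_c = rng.standard_normal((d_coarse, len(text_emb))) * 0.1
    z_coarse = w_c @ text_emb + rng.standard_normal(d_coarse)
    # latent converter: fine mean depends on coarse latent and text
    w_f = rng.standard_normal((d_fine, d_coarse + len(text_emb))) * 0.1
    z_fine = w_f @ np.concatenate([z_coarse, text_emb]) + rng.standard_normal(d_fine)
    return z_coarse, z_fine
```

The point of the structure is that finer time resolutions inherit information from coarser ones rather than being sampled independently.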

Neural Sequence-to-Sequence Speech Synthesis Using a Hidden Semi-Markov Model Based Structured Attention Mechanism

no code implementations · 31 Aug 2021 · Yoshihiko Nankaku, Kenta Sumiya, Takenori Yoshimura, Shinji Takaki, Kei Hashimoto, Keiichiro Oura, Keiichi Tokuda

This paper proposes a novel Sequence-to-Sequence (Seq2Seq) model integrating the structure of Hidden Semi-Markov Models (HSMMs) into its attention mechanism.

Speech Synthesis

Singing Voice Synthesis Based on a Musical Note Position-Aware Attention Mechanism

no code implementations · 28 Dec 2022 · Yukiya Hono, Kei Hashimoto, Yoshihiko Nankaku, Keiichi Tokuda

This paper proposes a novel sequence-to-sequence (seq2seq) model with a musical note position-aware attention mechanism for singing voice synthesis (SVS).

Position · Singing Voice Synthesis

PeriodGrad: Towards Pitch-Controllable Neural Vocoder Based on a Diffusion Probabilistic Model

no code implementations · 22 Feb 2024 · Yukiya Hono, Kei Hashimoto, Yoshihiko Nankaku, Keiichi Tokuda

This paper presents a neural vocoder based on a denoising diffusion probabilistic model (DDPM) incorporating explicit periodic signals as auxiliary conditioning signals.

Denoising · Pitch control +1
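One common way to provide "explicit periodic signals" as auxiliary conditioning is to upsample a frame-wise F0 contour to the sample rate and synthesize a sine from its cumulative phase. The sketch below assumes a 16 kHz sample rate and a hop of 80 samples, and is not necessarily the paper's exact construction.

```python
import numpy as np

def periodic_signal(f0, sr=16000, hop=80):
    """Build a sample-level sine conditioning signal from a frame-wise
    F0 contour (Hz). Unvoiced frames (f0 == 0) are zeroed out.
    sr and hop are assumed values, not taken from the paper.
    """
    f0_up = np.repeat(f0, hop)                  # frame rate -> sample rate
    phase = 2.0 * np.pi * np.cumsum(f0_up) / sr  # accumulated instantaneous phase
    sig = np.sin(phase)
    sig[f0_up == 0] = 0.0                        # silence in unvoiced regions
    return sig
```

Conditioning the denoiser on such a signal gives the vocoder a direct handle on periodicity, which is what enables pitch control at synthesis time.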
