Search Results for author: Hideyuki Tachibana

Found 6 papers, 2 papers with code

Efficiently Trainable Text-to-Speech System Based on Deep Convolutional Networks with Guided Attention

22 code implementations · 24 Oct 2017 · Hideyuki Tachibana, Katsuya Uenoyama, Shunsuke Aihara

This paper describes a novel text-to-speech (TTS) technique based on deep convolutional neural networks (CNN), without the use of any recurrent units.

Text-To-Speech Synthesis
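
The "guided attention" in the title refers to a loss term that encourages the text-to-frame attention matrix to stay near the diagonal, since text is read out roughly monotonically in time; this is part of what makes the system efficiently trainable. A minimal NumPy sketch of such a penalty is below; the Gaussian-band weight matches the paper's formulation as usually cited, but the function name and the default width g = 0.2 should be read as assumptions.

```python
import numpy as np

def guided_attention_penalty(A, g=0.2):
    """Penalize attention mass that strays from the diagonal.

    A: attention matrix of shape (N, T); rows index text positions,
       columns index spectrogram frames, entries are attention weights.
    g: width of the allowed diagonal band (hyperparameter).
    """
    N, T = A.shape
    n = np.arange(N)[:, None] / N   # normalized text position
    t = np.arange(T)[None, :] / T   # normalized frame position
    W = 1.0 - np.exp(-((n - t) ** 2) / (2.0 * g ** 2))
    return np.mean(A * W)           # added to the main loss during training
```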

Accent Estimation of Japanese Words from Their Surfaces and Romanizations for Building Large Vocabulary Accent Dictionaries

1 code implementation · 21 Sep 2020 · Hideyuki Tachibana, Yotaro Katayama

The authors applied this technique to the existing large-vocabulary Japanese dictionary NEologd and obtained a large-vocabulary Japanese accent dictionary.

Sentence

Towards Listening to 10 People Simultaneously: An Efficient Permutation Invariant Training of Audio Source Separation Using Sinkhorn's Algorithm

no code implementations · 22 Oct 2020 · Hideyuki Tachibana

In neural network-based monaural speech separation techniques, it has recently become common to compute the loss using permutation invariant training (PIT).

Audio Source Separation · Speech Separation
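
To make the scaling problem concrete: exact PIT minimizes over all S! source-to-estimate permutations, which becomes hopeless as S approaches ten, whereas Sinkhorn's algorithm relaxes the hard permutation to a doubly stochastic soft assignment obtained by iterative row/column normalization at O(S²) per iteration. The NumPy sketch below contrasts the two ideas; the pairwise loss (MSE), the regularization eps, and the iteration count are assumptions for illustration, not the paper's exact recipe.

```python
import numpy as np
from itertools import permutations

def pairwise_mse(est, ref):
    """(S, S) matrix of MSEs between estimated and reference sources (S, T)."""
    return np.mean((est[:, None, :] - ref[None, :, :]) ** 2, axis=-1)

def exact_pit_loss(est, ref):
    """Classical PIT: minimum over all S! permutations (factorial cost)."""
    C = pairwise_mse(est, ref)
    S = C.shape[0]
    return min(np.mean([C[i, p[i]] for i in range(S)])
               for p in permutations(range(S)))

def sinkhorn_pit_loss(est, ref, eps=0.1, n_iter=50):
    """Sinkhorn relaxation: a doubly stochastic matrix stands in for the
    hard permutation, so no factorial enumeration is needed."""
    C = pairwise_mse(est, ref)
    K = np.exp(-C / eps)                   # Gibbs kernel of the cost matrix
    for _ in range(n_iter):
        K /= K.sum(axis=1, keepdims=True)  # normalize rows
        K /= K.sum(axis=0, keepdims=True)  # normalize columns
    return np.sum(K * C) / C.shape[0]      # soft assignment cost per source
```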

Quasi-Taylor Samplers for Diffusion Generative Models based on Ideal Derivatives

no code implementations · 26 Dec 2021 · Hideyuki Tachibana, Mocho Go, Muneyoshi Inahara, Yotaro Katayama, Yotaro Watanabe

Diffusion generative models have emerged as a new challenger to popular deep neural generative models such as GANs, but have the drawback that they often require a huge number of neural function evaluations (NFEs) during synthesis unless sophisticated sampling strategies are employed.

Denoising · Image Generation +1
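
For context on the NFE bottleneck: a plain first-order sampler spends one network call per integration step, so a thousand steps means a thousand NFEs; the Quasi-Taylor idea, as the title suggests, is to use analytic ("ideal") derivatives to take higher-order steps and get away with far fewer. Below is a minimal Python sketch of the first-order Euler baseline for the probability-flow ODE of a variance-exploding diffusion; the function names and the toy score function are illustrative assumptions, not the paper's sampler.

```python
import numpy as np

def euler_sampler(score_fn, x_init, sigmas):
    """Integrate dx/dsigma = -sigma * score(x, sigma) from sigmas[0] down
    to sigmas[-1]; exactly one NFE per step, i.e. NFE = len(sigmas) - 1."""
    x = x_init
    for sigma, sigma_next in zip(sigmas[:-1], sigmas[1:]):
        score = score_fn(x, sigma)                      # 1 NFE
        x = x + (sigma_next - sigma) * (-sigma * score)
    return x

# Toy example: for data ~ N(0, I), the perturbed score is -x / (1 + sigma^2),
# and 50 Euler steps recover (approximately) unit-variance samples.
samples = euler_sampler(lambda x, s: -x / (1.0 + s ** 2),
                        10.0 * np.random.randn(16),
                        np.linspace(10.0, 0.0, 51))
```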

gSwin: Gated MLP Vision Model with Hierarchical Structure of Shifted Window

no code implementations · 24 Aug 2022 · Mocho Go, Hideyuki Tachibana

Following its success in the language domain, the self-attention mechanism (transformer) has been adopted in the vision domain and has recently achieved great success there.

Image Classification · Instance Segmentation +3
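
As background on the two components named in the title: a gated MLP (gMLP) replaces self-attention with a spatial gating unit that mixes tokens through a learned linear map, while Swin restricts token mixing to shifted local windows arranged hierarchically. The PyTorch sketch below shows only the generic gMLP gating unit; window partitioning, the hierarchical structure, and any gSwin-specific choices are omitted, and all names are assumptions.

```python
import torch
import torch.nn as nn

class SpatialGatingUnit(nn.Module):
    """Core of a gMLP block: split channels in half, mix one half across
    the token dimension, and use it to gate the other half."""
    def __init__(self, dim, seq_len):
        super().__init__()
        assert dim % 2 == 0, "channel dimension must be even to split"
        self.norm = nn.LayerNorm(dim // 2)
        self.proj = nn.Linear(seq_len, seq_len)  # token (spatial) mixing

    def forward(self, x):                # x: (batch, seq_len, dim)
        u, v = x.chunk(2, dim=-1)        # two halves of the channels
        v = self.norm(v)
        v = self.proj(v.transpose(1, 2)).transpose(1, 2)  # mix tokens
        return u * v                     # gating; output dim is dim // 2

# In a windowed variant, seq_len would be the number of tokens in one
# (shifted) local window, e.g. 7 x 7 = 49:
sgu = SpatialGatingUnit(dim=64, seq_len=49)
y = sgu(torch.randn(2, 49, 64))          # -> (2, 49, 32)
```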

Multilingual Sentence-T5: Scalable Sentence Encoders for Multilingual Applications

no code implementations · 26 Mar 2024 · Chihiro Yano, Akihiko Fukuchi, Shoko Fukasawa, Hideyuki Tachibana, Yotaro Watanabe

Prior work on multilingual sentence embedding has demonstrated that models built by the efficient use of natural language inference (NLI) data can outperform conventional methods.

Natural Language Inference · Sentence +2
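
As a rough illustration of a Sentence-T5-style multilingual encoder (not the paper's training recipe): run a multilingual T5 encoder and mean-pool its hidden states over non-padding tokens to get a fixed-size sentence vector. The checkpoint name, the pooling choice, and the similarity usage below are assumptions made for the sketch.

```python
import torch
from transformers import AutoTokenizer, MT5EncoderModel

# Placeholder checkpoint; any multilingual T5-style encoder could be swapped in.
name = "google/mt5-base"
tokenizer = AutoTokenizer.from_pretrained(name)
model = MT5EncoderModel.from_pretrained(name).eval()

def embed(sentences):
    """Mean-pool the encoder's last hidden states over non-padding tokens."""
    batch = tokenizer(sentences, padding=True, truncation=True,
                      return_tensors="pt")
    with torch.no_grad():
        hidden = model(**batch).last_hidden_state         # (B, L, H)
    mask = batch["attention_mask"].unsqueeze(-1).float()  # (B, L, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)   # (B, H)

# Cross-lingual similarity between two sentences with the same meaning:
e = embed(["The cat sleeps.", "Le chat dort."])
print(torch.cosine_similarity(e[0], e[1], dim=0).item())
```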
