Search Results for author: Tomohiko Nakamura

Found 9 papers, 5 papers with code

Sampling-Frequency-Independent Universal Sound Separation

no code implementations • 22 Sep 2023 • Tomohiko Nakamura, Kohei Yatabe

Universal sound separation (USS) aims at separating arbitrary sources of different types and can be a key technique for realizing a source separator that can be used universally as a preprocessor for any downstream task.

How Generative Spoken Language Modeling Encodes Noisy Speech: Investigation from Phonetics to Syntactics

no code implementations • 1 Jun 2023 • Joonyong Park, Shinnosuke Takamichi, Tomohiko Nakamura, Kentaro Seki, Detai Xin, Hiroshi Saruwatari

We examine the speech modeling potential of generative spoken language modeling (GSLM), which involves using learned symbols derived from data rather than phonemes for speech analysis and synthesis.

Language Modelling • Resynthesis
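As a rough illustration of the "learned symbols" idea behind GSLM (not the authors' pipeline), the sketch below quantizes frame-level speech features into a discrete unit sequence with k-means. The feature dimensionality, the number of units, and the use of plain k-means over raw features are assumptions; actual GSLM systems cluster self-supervised speech representations.

```python
import numpy as np

def kmeans_units(features, n_units=50, n_iter=20, seed=0):
    """Quantize frame-level speech features (T, D) into discrete unit IDs.

    A toy stand-in for the learned-symbol step of GSLM: real systems
    cluster self-supervised representations, not raw acoustic features.
    """
    rng = np.random.default_rng(seed)
    centroids = features[rng.choice(len(features), n_units, replace=False)]
    for _ in range(n_iter):
        # Assign each frame to its nearest centroid.
        dists = np.linalg.norm(features[:, None] - centroids[None], axis=-1)
        labels = dists.argmin(axis=1)
        # Update centroids; keep the old one if a cluster is empty.
        for k in range(n_units):
            if np.any(labels == k):
                centroids[k] = features[labels == k].mean(axis=0)
    return labels  # unit sequence, e.g. [17, 17, 3, 42, ...]

# Random "features" standing in for encoder outputs (1000 frames, 39 dims).
frames = np.random.default_rng(1).normal(size=(1000, 39))
print(kmeans_units(frames, n_units=50)[:20])
```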

JaCappella Corpus: A Japanese a Cappella Vocal Ensemble Corpus

1 code implementation • 29 Nov 2022 • Tomohiko Nakamura, Shinnosuke Takamichi, Naoko Tanji, Satoru Fukayama, Hiroshi Saruwatari

The songs in the corpus were arranged from out-of-copyright Japanese children's songs and have six voice parts (lead vocal, soprano, alto, tenor, bass, and vocal percussion).

Vocal Ensemble Separation
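For context on the vocal ensemble separation task tag, a minimal sketch of forming a training mixture from six voice-part stems. The arrays below are random stand-ins, and the stem names and sampling rate are hypothetical; the corpus's actual file layout is not assumed.

```python
import numpy as np

# Hypothetical stand-ins for the six voice-part stems (lead vocal, soprano,
# alto, tenor, bass, vocal percussion); in practice these would be loaded
# from the corpus's audio files.
rng = np.random.default_rng(0)
parts = {name: 0.1 * rng.normal(size=48000)
         for name in ["lead", "soprano", "alto", "tenor", "bass", "perc"]}

# A separation model would be trained to recover each stem from this sum.
mixture = np.sum(list(parts.values()), axis=0)
print(mixture.shape)  # (48000,): one second at an assumed 48 kHz
```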

Hyperbolic Timbre Embedding for Musical Instrument Sound Synthesis Based on Variational Autoencoders

no code implementations • 27 Sep 2022 • Futa Nakashima, Tomohiko Nakamura, Norihiro Takamune, Satoru Fukayama, Hiroshi Saruwatari

In this paper, we propose a musical instrument sound synthesis (MISS) method based on a variational autoencoder (VAE) that has a hierarchy-inducing latent space for timbre.
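A minimal sketch of the hyperbolic-latent idea: mapping a Euclidean VAE latent onto the Poincaré ball with the exponential map at the origin, so that hierarchy can be encoded by distance from the center. This illustrates the general technique only, not the paper's architecture; unit curvature and the toy latents are assumptions.

```python
import numpy as np

def expmap0(v, eps=1e-9):
    """Exponential map at the origin of the unit Poincare ball (curvature -1).

    Maps a Euclidean vector (e.g. a VAE latent) to a point inside the unit
    ball; distance from the center can encode depth in a timbre hierarchy.
    """
    norm = np.maximum(np.linalg.norm(v), eps)
    return np.tanh(norm) * v / norm

def poincare_distance(x, y):
    """Geodesic distance between two points of the unit Poincare ball."""
    num = 2.0 * np.sum((x - y) ** 2)
    den = (1.0 - np.sum(x ** 2)) * (1.0 - np.sum(y ** 2))
    return np.arccosh(1.0 + num / den)

# Two hypothetical timbre latents: a coarse instrument family vs. a specific
# instrument, embedded near the center and near the boundary, respectively.
z_family = expmap0(np.array([0.2, 0.1]))
z_instrument = expmap0(np.array([1.5, 0.8]))
print(poincare_distance(z_family, z_instrument))
```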

Differentiable Digital Signal Processing Mixture Model for Synthesis Parameter Extraction from Mixture of Harmonic Sounds

no code implementations • 1 Feb 2022 • Masaya Kawamura, Tomohiko Nakamura, Daichi Kitamura, Hiroshi Saruwatari, Yu Takahashi, Kazunobu Kondo

A differentiable digital signal processing (DDSP) autoencoder is a musical sound synthesizer that combines a deep neural network (DNN) and spectral modeling synthesis.

Audio Source Separation
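To make the spectral-modeling-synthesis part concrete, here is a minimal harmonic synthesizer of the kind DDSP-style models drive with DNN-predicted parameters (f0 and per-harmonic amplitudes). The sampling rate, harmonic count, and constant trajectories are assumptions; the paper's contribution, extracting such parameters from mixtures of harmonic sounds, is not reproduced here.

```python
import numpy as np

def harmonic_synth(f0, amps, sr=16000):
    """Additive synthesis of a harmonic sound.

    f0:   per-sample fundamental frequency in Hz, shape (T,)
    amps: per-sample amplitude of each harmonic, shape (T, K)
    In a DDSP-style autoencoder these trajectories come from a DNN.
    """
    T, K = amps.shape
    harmonics = np.arange(1, K + 1)                  # harmonic numbers 1..K
    phase = 2 * np.pi * np.cumsum(f0)[:, None] / sr  # (T, 1) running phase
    # Zero out harmonics above the Nyquist frequency to avoid aliasing.
    audible = (f0[:, None] * harmonics) < (sr / 2)
    return np.sum(amps * audible * np.sin(phase * harmonics), axis=1)

# A 0.5 s tone at 220 Hz with gently decaying harmonic amplitudes.
T = 8000
f0 = np.full(T, 220.0)
amps = 0.1 * np.tile(1.0 / np.arange(1, 9), (T, 1))
print(harmonic_synth(f0, amps).shape)  # (8000,)
```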

Sampling-Frequency-Independent Audio Source Separation Using Convolution Layer Based on Impulse Invariant Method

1 code implementation • 10 May 2021 • Koichi Saito, Tomohiko Nakamura, Kohei Yatabe, Yuma Koizumi, Hiroshi Saruwatari

Audio source separation is often used as a preprocessing step in various applications, and one of its ultimate goals is to construct a single versatile model capable of dealing with a variety of audio signals.

Audio Source Separation • Music Source Separation
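The impulse invariant method named in the title can be illustrated as follows: a convolution kernel is obtained by sampling a continuous-time (analog) impulse response at whatever sampling frequency the input uses, so the same analog parameters serve any rate. The specific analog filter (a one-pole lowpass) and kernel length below are assumptions, not the paper's layer design.

```python
import numpy as np

def impulse_invariant_kernel(cutoff_hz, sr, length=64):
    """Discretize an analog one-pole lowpass h_a(t) = a * exp(-a t), t >= 0,
    by sampling its impulse response at the target sampling frequency.

    Because the analog prototype is fixed, the same parameters yield a
    matched kernel for 8 kHz, 16 kHz, 48 kHz, and so on.
    """
    a = 2 * np.pi * cutoff_hz          # analog pole location (rad/s)
    t = np.arange(length) / sr         # sampling instants
    return (a * np.exp(-a * t)) / sr   # impulse invariance: h[n] = T * h_a(nT)

# The same analog filter realized at two different sampling frequencies.
k16 = impulse_invariant_kernel(1000.0, sr=16000)
k48 = impulse_invariant_kernel(1000.0, sr=48000)
x = np.random.default_rng(0).normal(size=16000)
y = np.convolve(x, k16, mode="same")   # kernel used as convolution weights
print(k16[:3], k48[:3], y.shape)
```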

Time-Domain Audio Source Separation Based on Wave-U-Net Combined with Discrete Wavelet Transform

1 code implementation • 28 Jan 2020 • Tomohiko Nakamura, Hiroshi Saruwatari

Building on this idea, and focusing on the fact that the discrete wavelet transform (DWT) has an anti-aliasing filter and the perfect reconstruction property, we design the proposed layers.

Audio Source Separation • Music Source Separation
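A minimal sketch of the property the snippet refers to: the Haar DWT splits a signal into two downsampled bands using a lowpass (anti-aliasing) analysis filter, and the two bands reconstruct the input exactly. The paper builds such DWT-based layers into Wave-U-Net; the Haar wavelet and the single decomposition level here are assumptions.

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar DWT: anti-aliased downsampling into two half-rate bands."""
    even, odd = x[0::2], x[1::2]
    low = (even + odd) / np.sqrt(2)   # lowpass band (anti-aliasing filter)
    high = (even - odd) / np.sqrt(2)  # highpass band keeps what lowpass drops
    return low, high

def haar_idwt(low, high):
    """Inverse transform: perfect reconstruction from the two bands."""
    x = np.empty(2 * len(low))
    x[0::2] = (low + high) / np.sqrt(2)
    x[1::2] = (low - high) / np.sqrt(2)
    return x

x = np.random.default_rng(0).normal(size=1024)
low, high = haar_dwt(x)                      # each band has length 512
print(np.allclose(haar_idwt(low, high), x))  # True: perfect reconstruction
```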

Real-Time Audio-to-Score Alignment of Music Performances Containing Errors and Arbitrary Repeats and Skips

1 code implementation • 24 Dec 2015 • Tomohiko Nakamura, Eita Nakamura, Shigeki Sagayama

We confirmed real-time operation of the algorithms with music scores of practical length (around 10,000 notes) on a modern laptop, and they tracked the input performance within 0.7 s on average after repeats/skips in clarinet performance data.

Outer-Product Hidden Markov Model and Polyphonic MIDI Score Following

1 code implementation • 8 Apr 2014 • Eita Nakamura, Tomohiko Nakamura, Yasuyuki Saito, Nobutaka Ono, Shigeki Sagayama

We present a polyphonic MIDI score-following algorithm capable of following performances with arbitrary repeats and skips, based on a probabilistic model of musical performances.
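To illustrate the kind of probabilistic model involved (not the paper's outer-product construction), here is a minimal HMM over score positions whose transitions mix forward motion with a small probability of jumping anywhere, which is what lets a follower recover from arbitrary repeats and skips. The transition and emission parameters, and the toy score, are assumptions.

```python
import numpy as np

def follow(observations, emit_prob, p_advance=0.7, p_stay=0.25, p_jump=0.05):
    """Forward-algorithm score following over N score positions.

    emit_prob[n, o]: probability of observing symbol o at score position n.
    Transitions mostly advance or stay, but a small uniform jump probability
    lets the tracker recover after arbitrary repeats and skips.
    """
    n_pos = emit_prob.shape[0]
    belief = np.full(n_pos, 1.0 / n_pos)
    path = []
    for obs in observations:
        advanced = np.roll(belief, 1)
        advanced[0] = 0.0                   # no wrap-around when advancing
        predicted = (p_advance * advanced
                     + p_stay * belief
                     + p_jump / n_pos)      # uniform jump to any position
        belief = predicted * emit_prob[:, obs]
        belief /= belief.sum()
        path.append(int(belief.argmax()))   # current position estimate
    return path

# Toy score of 8 positions, each emitting its own symbol most of the time.
n_pos, n_sym = 8, 8
emit = np.full((n_pos, n_sym), 0.02)
emit[np.arange(n_pos), np.arange(n_sym)] = 0.86
performance = [0, 1, 2, 3, 1, 2, 3, 4, 5]   # the player repeats positions 1-3
print(follow(performance, emit))            # estimates re-lock after the repeat
```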
