Search Results for author: Kin Wai Cheuk

Found 9 papers, 4 papers with code

Timbre-Trap: A Low-Resource Framework for Instrument-Agnostic Music Transcription

no code implementations27 Sep 2023 Frank Cwitkowitz, Kin Wai Cheuk, Woosung Choi, Marco A. Martínez-Ramírez, Keisuke Toyama, Wei-Hsiang Liao, Yuki Mitsufuji

Several works have explored multi-instrument transcription as a means to bolster the performance of models on low-resource tasks, but these methods face the same data availability issues.

Music Transcription

Jointist: Simultaneous Improvement of Multi-instrument Transcription and Music Source Separation via Joint Training

no code implementations1 Feb 2023 Kin Wai Cheuk, Keunwoo Choi, Qiuqiang Kong, Bochen Li, Minz Won, Ju-Chiang Wang, Yun-Ning Hung, Dorien Herremans

Jointist consists of an instrument recognition module that conditions the other two modules: a transcription module that outputs instrument-specific piano rolls, and a source separation module that utilizes instrument information and transcription results.

Chord Recognition, Instrument Recognition, +1
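The conditioning described in the abstract above — an instrument recognition module whose output gates the transcription and source separation modules — can be sketched roughly as follows. This is a minimal illustration, not Jointist's actual architecture: `recognize_instruments`, `jointist_forward`, and the energy-based presence estimate are all hypothetical stand-ins for the paper's neural modules.

```python
import numpy as np

def recognize_instruments(mixture: np.ndarray, n_instruments: int = 4) -> np.ndarray:
    # Hypothetical stand-in for the instrument recognition module:
    # derive a presence probability per instrument from band energies.
    bands = np.array_split(mixture ** 2, n_instruments)
    energy = np.array([b.mean() for b in bands])
    return energy / (energy.sum() + 1e-9)

def jointist_forward(mixture: np.ndarray, n_instruments: int = 4,
                     n_frames: int = 8, n_pitches: int = 88):
    # Instrument recognition conditions both downstream modules.
    presence = recognize_instruments(mixture, n_instruments)
    # Transcription module: one piano roll per instrument, gated by presence.
    rolls = np.random.rand(n_instruments, n_frames, n_pitches)
    rolls *= presence[:, None, None]
    # Source separation module: uses instrument info and transcription
    # results (here, mean roll activity) to weight per-instrument masks.
    activity = rolls.mean(axis=(1, 2))
    masks = (presence * activity)[:, None] * np.ones((n_instruments, mixture.shape[0]))
    return presence, rolls, masks
```

The point of the sketch is only the data flow: one module's output modulates the other two, so all three can be trained jointly.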

The Effect of Spectrogram Reconstruction on Automatic Music Transcription: An Alternative Approach to Improve Transcription Accuracy

2 code implementations20 Oct 2020 Kin Wai Cheuk, Yin-Jyun Luo, Emmanouil Benetos, Dorien Herremans

We attempt to use only the pitch labels (together with spectrogram reconstruction loss) and explore how far this model can go without introducing supervised sub-tasks.

Music Transcription
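The idea in the abstract above — training on pitch labels alone while adding a spectrogram reconstruction loss rather than supervised sub-tasks — can be written as a combined objective. A minimal sketch, assuming binary cross-entropy on the piano roll and an L2 reconstruction term; `combined_loss` and the weighting `alpha` are illustrative, not the paper's exact formulation.

```python
import numpy as np

def combined_loss(pred_roll, true_roll, recon_spec, true_spec, alpha=1.0):
    # Supervised term: binary cross-entropy against the pitch labels,
    # the only annotation this setup assumes.
    eps = 1e-7
    p = np.clip(pred_roll, eps, 1 - eps)
    bce = -(true_roll * np.log(p) + (1 - true_roll) * np.log(1 - p)).mean()
    # Unsupervised term: how well the model reconstructs the input spectrogram.
    recon = ((recon_spec - true_spec) ** 2).mean()
    return bce + alpha * recon
```

The reconstruction term needs no extra labels, which is what lets the model avoid supervised sub-tasks.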

The impact of Audio input representations on neural network based music transcription

1 code implementation25 Jan 2020 Kin Wai Cheuk, Kat Agres, Dorien Herremans

This paper thoroughly analyses the effect of different input representations on polyphonic multi-instrument music transcription.

Sound, Audio and Speech Processing
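As a rough illustration of the kind of input representations such a study compares (e.g. linear-frequency versus log-compressed magnitude spectrograms), here is a dependency-free sketch; `stft_mag` is a hypothetical helper, not the paper's feature pipeline.

```python
import numpy as np

def stft_mag(x: np.ndarray, n_fft: int = 256, hop: int = 128) -> np.ndarray:
    # Magnitude spectrogram via a naive windowed FFT.
    win = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * win for i in range(0, len(x) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(np.stack(frames), axis=1)).T  # (freq_bins, frames)

# Two candidate input representations of the same 440 Hz test tone.
x = np.sin(2 * np.pi * 440 * np.arange(4096) / 16000)
linear = stft_mag(x)        # linear-frequency magnitude
logmag = np.log1p(linear)   # log-compressed magnitude
```

Feeding each representation to the same transcription network is what isolates the effect of the input choice.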

Latent space representation for multi-target speaker detection and identification with a sparse dataset using Triplet neural networks

1 code implementation1 Oct 2019 Kin Wai Cheuk, Balamurali B. T., Gemma Roig, Dorien Herremans

When the training data is reduced to the train set alone, our method yields 309 confusions on the multi-target speaker identification task, which is 46% better than the baseline model.

Speaker Identification, Speaker Recognition
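The triplet networks named in the title above are trained with a triplet loss: embeddings of the same speaker (anchor and positive) are pulled together while a different speaker (negative) is pushed at least a margin away. A minimal sketch of the standard loss on precomputed embeddings; the margin value is illustrative.

```python
import numpy as np

def triplet_loss(anchor: np.ndarray, positive: np.ndarray,
                 negative: np.ndarray, margin: float = 0.2) -> float:
    # Squared Euclidean distances in the latent space.
    d_pos = float(np.sum((anchor - positive) ** 2))  # same speaker
    d_neg = float(np.sum((anchor - negative) ** 2))  # different speaker
    # Zero loss once the negative is at least `margin` farther than the positive.
    return max(d_pos - d_neg + margin, 0.0)
```

Training on such triplets is what shapes the latent space used for the detection and identification tasks, even with a sparse dataset.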
