1 code implementation • 20 Aug 2023 • Ching-Yu Chiu, Meinard Müller, Matthew E. P. Davies, Alvin Wen-Yu Su, Yi-Hsuan Yang
To model the periodicity of beats, state-of-the-art beat tracking systems use "post-processing trackers" (PPTs) that rely on several empirically determined global assumptions for tempo transition, which work well for music with a steady tempo.
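A post-processing tracker of the kind described can be illustrated with a toy dynamic-programming pass over a beat-activation curve: it rewards high activation and penalizes inter-beat intervals that deviate from one globally assumed beat period. This is a simplified sketch, not the authors' system; `period`, `tolerance`, and `lam` are illustrative parameters.

```python
import numpy as np

def track_beats(activation, period, tolerance=2, lam=1.0):
    """Toy PPT: dynamic programming with a single global tempo assumption.

    activation: 1-D beat-activation curve (one value per frame).
    period: assumed beat period in frames (the "global assumption").
    tolerance: allowed deviation (frames) from the assumed period.
    lam: weight of the tempo-deviation penalty.
    """
    n = len(activation)
    score = np.full(n, -np.inf)
    prev = np.full(n, -1, dtype=int)
    score[:period] = activation[:period]  # the first beat may fall anywhere early
    for t in range(period, n):
        lo = max(0, t - period - tolerance)
        hi = min(t, t - period + tolerance + 1)
        cands = np.arange(lo, hi)
        if len(cands) == 0:
            continue
        # penalize intervals that stray from the assumed global period
        penalties = lam * np.abs((t - cands) - period) / period
        total = score[cands] + activation[t] - penalties
        best = int(np.argmax(total))
        if total[best] > score[t]:
            score[t] = total[best]
            prev[t] = cands[best]
    # backtrack from the best-scoring final beat
    beats = [int(np.argmax(score))]
    while prev[beats[-1]] >= 0:
        beats.append(int(prev[beats[-1]]))
    return beats[::-1]
```

On music with a steady tempo this recovers evenly spaced beats, but the fixed `period` is exactly the kind of global assumption that breaks down when the tempo changes over time.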
1 code implementation • 13 Oct 2022 • Ching-Yu Chiu, Meinard Müller, Matthew E. P. Davies, Alvin Wen-Yu Su, Yi-Hsuan Yang
For expressive music, the tempo may vary over time, which poses challenges for automatic beat tracking.

1 code implementation • 12 Oct 2022 • Yueh-Kao Wu, Ching-Yu Chiu, Yi-Hsuan Yang
Instead of generating the drum track directly as waveforms, we use a separate VQ-VAE to encode the mel-spectrogram of a drum track into another set of discrete codes, and train the Transformer to predict the sequence of drum-related discrete codes.
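The vector-quantization step behind this idea can be sketched as a nearest-codebook lookup: each mel-spectrogram frame is mapped to the index of its closest codebook vector, and the resulting integer sequence is what a Transformer would be trained to predict. This is a minimal illustration with a fixed codebook rather than a learned VQ-VAE encoder/decoder; all shapes and names are assumptions.

```python
import numpy as np

def quantize_frames(mel, codebook):
    """Map each mel frame to its nearest codebook entry (the VQ step).

    mel: (T, n_mels) mel-spectrogram frames.
    codebook: (K, n_mels) code vectors (learned in a real VQ-VAE).
    Returns a length-T sequence of integer codes in [0, K).
    """
    # squared Euclidean distance between every frame and every code vector
    d = ((mel[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)

def decode_codes(codes, codebook):
    """Reconstruct an approximate mel-spectrogram by codebook lookup."""
    return codebook[codes]
```

In a full system the codebook is trained jointly with the encoder and decoder, and the discrete code sequence, rather than the waveform, becomes the Transformer's prediction target.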
1 code implementation • 16 Jun 2021 • Ching-Yu Chiu, Joann Ching, Wen-Yi Hsiao, Yu-Hua Chen, Alvin Wen-Yu Su, Yi-Hsuan Yang
Due to advances in deep learning, the performance of automatic beat and downbeat tracking in musical audio signals has seen great improvement in recent years.
1 code implementation • 16 Jun 2021 • Ching-Yu Chiu, Alvin Wen-Yu Su, Yi-Hsuan Yang
This paper presents a novel system architecture that integrates blind source separation with joint beat and downbeat tracking in musical audio signals.
1 code implementation • 6 Aug 2020 • Ching-Yu Chiu, Wen-Yi Hsiao, Yin-Cheng Yeh, Yi-Hsuan Yang, Alvin Wen-Yu Su
Blind music source separation has been a popular and active subject of research in both the music information retrieval and signal processing communities.