1 code implementation • 16 Jun 2021 • Ching-Yu Chiu, Joann Ching, Wen-Yi Hsiao, Yu-Hua Chen, Alvin Wen-Yu Su, Yi-Hsuan Yang
Due to advances in deep learning, the performance of automatic beat and downbeat tracking in musical audio signals has seen great improvement in recent years.
4 code implementations • 7 Jan 2021 • Wen-Yi Hsiao, Jen-Yu Liu, Yin-Cheng Yeh, Yi-Hsuan Yang
In this paper, we present a conceptually different approach that explicitly takes into account the type of the tokens, such as note types and metric types.
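As a hedged illustration of what a type-aware token representation might look like (the field names and token values below are hypothetical, not taken from the paper), each token can carry an explicit type so a model can treat note tokens and metric tokens differently:

```python
from collections import defaultdict

# Illustrative sketch only: token types and values are made up for this
# example. The point is that each token declares its type up front.
tokens = [
    ("metric", "bar"),
    ("metric", "beat-1"),
    ("note", "pitch-60"),
    ("note", "duration-4"),
]

def group_by_type(tokens):
    """Group tokens by declared type, e.g. to route them to
    type-specific embeddings or prediction heads."""
    groups = defaultdict(list)
    for token_type, value in tokens:
        groups[token_type].append(value)
    return dict(groups)

print(group_by_type(tokens))
# {'metric': ['bar', 'beat-1'], 'note': ['pitch-60', 'duration-4']}
```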
1 code implementation • 6 Aug 2020 • Ching-Yu Chiu, Wen-Yi Hsiao, Yin-Cheng Yeh, Yi-Hsuan Yang, Alvin Wen-Yu Su
Blind music source separation has been a popular and active subject of research in both the music information retrieval and signal processing communities.
no code implementations • 4 Aug 2020 • Yu-Hua Chen, Yu-Hsiang Huang, Wen-Yi Hsiao, Yi-Hsuan Yang
Deep learning algorithms are increasingly developed for learning to compose music in the form of MIDI files.
Sound • Audio and Speech Processing

no code implementations • 8 Jan 2020 • Yin-Cheng Yeh, Wen-Yi Hsiao, Satoru Fukayama, Tetsuro Kitahara, Benjamin Genchel, Hao-Min Liu, Hao-Wen Dong, Yi-An Chen, Terence Leong, Yi-Hsuan Yang
Prior works have proposed various methods for automatic melody harmonization, in which a model generates a sequence of chords to serve as the harmonic accompaniment of a given multi-bar melody.
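To make the task setup concrete, here is a naive rule-based baseline (purely illustrative, not any method proposed in the paper): assign each bar a chord rooted on that bar's most frequent melody pitch class.

```python
from collections import Counter

# Naive illustrative baseline, not the paper's method: for each bar of
# melody notes (MIDI pitch numbers), output the chord root given by the
# most frequent pitch class in that bar.
PITCH_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def harmonize(bars):
    chords = []
    for notes in bars:
        root_pc = Counter(n % 12 for n in notes).most_common(1)[0][0]
        chords.append(PITCH_NAMES[root_pc])
    return chords

melody = [[60, 64, 67, 60], [65, 69, 65, 72]]  # two bars of MIDI pitches
print(harmonize(melody))  # ['C', 'F']
```

A learned harmonizer replaces this frequency heuristic with a model conditioned on the full melody context, but the input/output contract is the same: bars of melody in, one chord per bar out.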
8 code implementations • 19 Sep 2017 • Hao-Wen Dong, Wen-Yi Hsiao, Li-Chia Yang, Yi-Hsuan Yang
The three models, which differ in their underlying assumptions and, accordingly, their network architectures, are referred to as the jamming model, the composer model, and the hybrid model.
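A toy sketch of how the three models' input assumptions might differ (all names illustrative; the real generators are neural networks, stubbed here with a simple function): the jamming model gives each track its own independent latent input, the composer model drives all tracks from one shared latent, and the hybrid model combines a shared latent with a private one per track.

```python
import numpy as np

rng = np.random.default_rng(0)
N_TRACKS, Z_DIM = 3, 4

def generator(z, track_idx):
    # Toy stand-in for a per-track generator network.
    return np.tanh(z * (track_idx + 1))

def jamming():
    # Each track is generated from its own independent latent vector.
    return [generator(rng.standard_normal(Z_DIM), i) for i in range(N_TRACKS)]

def composer():
    # One shared latent vector drives every track.
    z = rng.standard_normal(Z_DIM)
    return [generator(z, i) for i in range(N_TRACKS)]

def hybrid():
    # Each track sees a shared latent plus a private one, so tracks can
    # coordinate while keeping individual character.
    z_shared = rng.standard_normal(Z_DIM)
    return [generator(np.concatenate([z_shared, rng.standard_normal(Z_DIM)]), i)
            for i in range(N_TRACKS)]
```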