no code implementations • 2 Aug 2024 • Jiwoo Ryu, Hao-Wen Dong, Jongmin Jung, Dasaem Jeong
The NMT consists of two transformers: a main decoder that models the sequence of compound tokens and a sub-decoder that models the sub-tokens within each compound token.
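As a rough illustration of the compound-token layout this describes (the field names and helper below are assumptions for the sketch, not the paper's actual implementation), each compound token bundles the sub-tokens of one musical event, so the main decoder advances one event at a time while the sub-decoder expands each event into its sub-tokens:

```python
# Hypothetical sketch: a compound token groups several sub-tokens of one
# musical event; a flat sub-token stream is what a single-stream
# transformer would otherwise have to model directly.
from typing import NamedTuple

class CompoundToken(NamedTuple):
    pitch: int      # e.g. MIDI pitch sub-token
    duration: int   # duration sub-token (e.g. in ticks)
    velocity: int   # dynamics sub-token

def to_sub_tokens(seq):
    """Flatten compound tokens into a single sub-token stream."""
    return [sub for tok in seq for sub in tok]

events = [CompoundToken(60, 240, 80), CompoundToken(64, 120, 72)]
flat = to_sub_tokens(events)
# the compound view is 3x shorter than the flat sub-token stream
assert len(flat) == 3 * len(events)
```

Grouping sub-tokens this way shortens the sequence the main decoder must attend over, which is the usual motivation for compound tokenizations.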
no code implementations • 2 Aug 2024 • Danbinaerin Han, Mark Gotham, Dongmin Kim, Hannah Park, SiHun Lee, Dasaem Jeong
The resulting machine-transformed versions of Chihwapyeong and Chwipunghyeong were evaluated by experts and performed by the Court Music Orchestra of the National Gugak Center.
no code implementations • 10 Apr 2024 • Taegyun Kwon, Dasaem Jeong, Juhan Nam
To this end, we propose novel architectures for convolutional recurrent neural networks, redesigning an existing autoregressive piano transcription model.
2 code implementations • 20 Sep 2023 • Haven Kim, Jongmin Jung, Dasaem Jeong, Juhan Nam
To broaden the scope of genres and languages in lyric translation studies, we introduce a novel singable lyric translation dataset, approximately 89% of which consists of K-pop song lyrics.
1 code implementation • 4 Aug 2023 • Danbinaerin Han, Rafael Caro Repetto, Dasaem Jeong
In this paper, we introduce a computational analysis of a field recording dataset of approximately 700 hours of Korean folk songs, recorded around the 1980s–90s.
1 code implementation • 9 Feb 2021 • Dasaem Jeong, Seungheon Doh, Taegyun Kwon
The goal of this paper is to generate, with a neural network, a visually appealing video that responds to music, so that each frame of the video reflects the musical characteristics of the corresponding audio clip.
Ranked #1 on Music Auto-Tagging on TimeTravel (using extra training data)
no code implementations • 2 Oct 2020 • Taegyun Kwon, Dasaem Jeong, Juhan Nam
Recent advances in polyphonic piano transcription have been made primarily through the deliberate design of neural network architectures that detect different note states, such as onset or sustain, and model the temporal evolution of those states.
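To make the note-state idea concrete, here is a hedged sketch (not the paper's code) of the post-processing step such a model implies: turning frame-wise state labels for a single pitch into (start_frame, end_frame) note events. The label names and function are illustrative assumptions.

```python
# Hypothetical sketch: convert frame-wise note-state labels
# ("onset", "sustain", "off") into note events for one pitch.
def states_to_notes(states):
    notes, start = [], None
    for t, s in enumerate(states):
        if s == "onset":
            if start is not None:        # a re-onset ends the running note
                notes.append((start, t))
            start = t
        elif s == "off" and start is not None:
            notes.append((start, t))
            start = None
    if start is not None:                # note still sounding at the end
        notes.append((start, len(states)))
    return notes

frames = ["off", "onset", "sustain", "sustain", "off", "onset", "sustain"]
print(states_to_notes(frames))  # prints [(1, 4), (5, 7)]
```

Distinguishing onset from sustain is what lets this kind of decoding separate repeated notes, which a plain frame-wise pitch activation cannot do.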
1 code implementation • ISMIR 2019 • Dasaem Jeong, Taegyun Kwon, Yoojin Kim, Kyogu Lee, Juhan Nam
In this paper, we present our application of a deep neural network to modeling piano performance, imitating pianists' expressive control of tempo, dynamics, articulation, and pedaling.
1 code implementation • ICML 2019 • Dasaem Jeong, Taegyun Kwon, Yoojin Kim, Juhan Nam
A music score is often handled as one-dimensional sequential data.