Search Results for author: Dasaem Jeong

Found 9 papers, 5 papers with code

Nested Music Transformer: Sequentially Decoding Compound Tokens in Symbolic Music and Audio Generation

no code implementations • 2 Aug 2024 • Jiwoo Ryu, Hao-Wen Dong, Jongmin Jung, Dasaem Jeong

The NMT consists of two transformers: the main decoder that models a sequence of compound tokens and the sub-decoder for modeling sub-tokens of each compound token.

Attribute Audio Generation +2
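The abstract describes a two-level decoding order: a main decoder steps over compound tokens while a sub-decoder emits each compound token's sub-tokens one at a time. A minimal sketch of that decoding loop, with illustrative field names and a toy placeholder predictor standing in for the sub-decoder (none of this is the authors' code):

```python
# Illustrative sketch of nested (two-level) decoding of compound tokens.
# SUB_FIELDS and toy_predictor are hypothetical, chosen only for the example.

SUB_FIELDS = ["pitch", "duration", "velocity"]  # example sub-token types

def decode_sequence(num_steps, predict_sub_token):
    """Decode a sequence of compound tokens, one sub-token at a time."""
    sequence = []
    for _ in range(num_steps):
        compound = {}
        for field in SUB_FIELDS:
            # The sub-decoder conditions on the compound tokens decoded so
            # far (main-decoder context) plus the sub-tokens already fixed
            # within the current compound token.
            compound[field] = predict_sub_token(sequence, compound, field)
        sequence.append(compound)
    return sequence

def toy_predictor(history, partial_compound, field):
    # Deterministic placeholder standing in for a learned sub-decoder.
    return f"{field}_{len(history)}"

tokens = decode_sequence(2, toy_predictor)
print(tokens)
```

In a trained model, `predict_sub_token` would be a neural sub-decoder sampling from a distribution; the sketch only shows the order in which sub-tokens become available to later predictions.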

Six Dragons Fly Again: Reviving 15th-Century Korean Court Music with Transformers and Novel Encoding

no code implementations • 2 Aug 2024 • Danbinaerin Han, Mark Gotham, Dongmin Kim, Hannah Park, SiHun Lee, Dasaem Jeong

The resulting machine-transformed versions of Chihwapyeong and Chwipunghyeong were evaluated by experts and performed by the Court Music Orchestra of the National Gugak Center.

Decoder Language Modelling

Towards Efficient and Real-Time Piano Transcription Using Neural Autoregressive Models

no code implementations • 10 Apr 2024 • Taegyun Kwon, Dasaem Jeong, Juhan Nam

To this end, we propose novel architectures for convolutional recurrent neural networks, redesigning an existing autoregressive piano transcription model.

K-pop Lyric Translation: Dataset, Analysis, and Neural-Modelling

2 code implementations • 20 Sep 2023 • Haven Kim, Jongmin Jung, Dasaem Jeong, Juhan Nam

To broaden the scope of genres and languages in lyric translation studies, we introduce a novel singable lyric translation dataset, approximately 89% of which consists of K-pop song lyrics.

Translation

Finding Tori: Self-supervised Learning for Analyzing Korean Folk Song

1 code implementation • 4 Aug 2023 • Danbinaerin Han, Rafael Caro Repetto, Dasaem Jeong

In this paper, we introduce a computational analysis of a field-recording dataset of approximately 700 hours of Korean folk songs, recorded around the 1980s and 1990s.

Self-Supervised Learning

TräumerAI: Dreaming Music with StyleGAN

1 code implementation • 9 Feb 2021 • Dasaem Jeong, Seungheon Doh, Taegyun Kwon

The goal of this paper is to generate a visually appealing video that responds to music with a neural network, so that each frame of the video reflects the musical characteristics of the corresponding audio clip.

Ranked #1 on Music Auto-Tagging on TimeTravel (using extra training data)

Music Auto-Tagging

Polyphonic Piano Transcription Using Autoregressive Multi-State Note Model

no code implementations • 2 Oct 2020 • Taegyun Kwon, Dasaem Jeong, Juhan Nam

Recent advances in polyphonic piano transcription have been made primarily by a deliberate design of neural network architectures that detect different note states such as onset or sustain and model the temporal evolution of the states.
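The snippet above describes labeling each frame with a note state (such as onset or sustain) and modeling how those states evolve over time. A hedged sketch of the post-processing side of that idea, recovering notes for a single pitch from a per-frame state sequence (the state names and function are illustrative, not the authors' implementation):

```python
# Illustrative sketch: turn a per-frame note-state sequence for one pitch
# into (start_frame, end_frame) note events. States are hypothetical labels.

def states_to_notes(states):
    """Scan a state sequence and collect notes as (start, end) frame pairs."""
    notes, start = [], None
    for t, s in enumerate(states):
        if s == "onset":
            if start is not None:       # a re-onset ends the previous note
                notes.append((start, t))
            start = t
        elif s == "off" and start is not None:
            notes.append((start, t))
            start = None
    if start is not None:               # note still sounding at the end
        notes.append((start, len(states)))
    return notes

frames = ["off", "onset", "sustain", "sustain", "off", "onset", "sustain"]
print(states_to_notes(frames))
```

In the full autoregressive model, the framewise states themselves would be predicted conditioned on previous frames; this sketch only shows how multi-state labels map back to discrete notes.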

VirtuosoNet: A Hierarchical RNN-based System for Modeling Expressive Piano Performance

1 code implementation ISMIR 2019 Dasaem Jeong, Taegyun Kwon, Yoojin Kim, Kyogu Lee, Juhan Nam

In this paper, we present our application of a deep neural network to modeling piano performance, imitating pianists' expressive control of tempo, dynamics, articulation, and pedaling.

Music Performance Rendering
