1 code implementation • 5 Jul 2019 • Jeong Choi, Jongpil Lee, Jiyoung Park, Juhan Nam
Audio-based music classification and tagging is typically based on categorical supervised learning with a fixed set of labels.
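The "fixed set of labels" setup means each track's annotations are encoded as a multi-hot target vector over a closed tag vocabulary. A minimal sketch, with a hypothetical tag vocabulary (not the labels used in the paper):

```python
# Multi-label ("tagging") targets over a fixed label set.
# The tag vocabulary and example annotations are illustrative only.
import numpy as np

TAGS = ["rock", "jazz", "piano", "vocal", "fast"]  # fixed, closed label set
TAG_INDEX = {t: i for i, t in enumerate(TAGS)}

def multi_hot(tags):
    """Encode a track's tags as a fixed-length binary target vector."""
    y = np.zeros(len(TAGS), dtype=np.float32)
    for t in tags:
        y[TAG_INDEX[t]] = 1.0
    return y

y = multi_hot(["rock", "vocal"])
print(y.tolist())  # [1.0, 0.0, 0.0, 1.0, 0.0]
```

A classifier trained against such targets can only ever predict tags from this closed vocabulary, which is the limitation these papers set out from.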
no code implementations • 27 Jun 2019 • Jongpil Lee, Jiyoung Park, Juhan Nam
Supervised music representation learning has been performed mainly using semantic labels such as music genres.
no code implementations • 20 Jun 2019 • Jeong Choi, Jongpil Lee, Jiyoung Park, Juhan Nam
Music classification and tagging is conducted through categorical supervised learning with a fixed set of labels.
1 code implementation • 18 Jul 2018 • Jongpil Lee, Kyungyun Lee, Jiyoung Park, Jang-Yeon Park, Juhan Nam
Recently, deep learning-based recommendation systems have been actively explored to address the cold-start problem using a hybrid approach.
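The hybrid idea behind cold-start handling can be sketched as a fallback: use a collaborative-filtering item vector when interaction data exists, and a content-based vector predicted from audio otherwise. All names and data below are illustrative assumptions, not the paper's implementation:

```python
# Hedged sketch: content-based fallback for cold-start items.
# Vectors and track names are made up for illustration.
import numpy as np

def item_vector(item_id, cf_vectors, content_vectors):
    """Return the collaborative-filtering vector for a warm item,
    or the audio-predicted content vector for a cold one."""
    if item_id in cf_vectors:           # warm item: learned from user feedback
        return cf_vectors[item_id]
    return content_vectors[item_id]     # cold item: predicted from audio alone

cf = {"track_a": np.array([0.9, 0.1])}
content = {"track_a": np.array([0.8, 0.2]), "track_b": np.array([0.1, 0.9])}
print(item_vector("track_b", cf, content))  # [0.1 0.9] -- cold-start fallback
```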
no code implementations • 4 Dec 2017 • Jongpil Lee, Taejun Kim, Jiyoung Park, Juhan Nam
Music, speech, and acoustic scene sound are often handled separately in the audio domain because of their different signal characteristics.
2 code implementations • 18 Oct 2017 • Jiyoung Park, Jongpil Lee, Jangyeon Park, Jung-Woo Ha, Juhan Nam
In this paper, we present a supervised feature learning approach that uses artist labels, which are available for every track, as objective metadata.
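The core idea, sketched below: train a network to predict the artist of a clip, then discard the classifier head and reuse an intermediate layer's activations as a general audio feature. The tiny two-layer model, random data, and shapes are all illustrative assumptions:

```python
# Sketch: learn features by predicting artist labels, then reuse the
# intermediate representation. Data and model sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 20))             # 32 clips, 20-dim input per clip
y = rng.integers(0, 4, size=32)           # artist label per clip (4 artists)

W1 = rng.normal(scale=0.1, size=(20, 8))  # feature layer
W2 = rng.normal(scale=0.1, size=(8, 4))   # artist classifier head

def forward(X):
    h = np.tanh(X @ W1)                   # intermediate representation
    logits = h @ W2
    p = np.exp(logits - logits.max(1, keepdims=True))
    return h, p / p.sum(1, keepdims=True)

for _ in range(200):                      # plain gradient descent
    h, p = forward(X)
    g = p.copy()
    g[np.arange(len(y)), y] -= 1.0        # dL/dlogits for cross-entropy
    dh = (g @ W2.T) * (1 - h**2)          # backprop through tanh
    W2 -= 0.5 * h.T @ g / len(y)
    W1 -= 0.5 * X.T @ dh / len(y)

features, _ = forward(X)                  # reuse h as the learned feature
print(features.shape)                     # (32, 8)
```

After training, `features` (not the artist predictions) would be fed to downstream tasks such as similarity search or tagging.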
Sound • Audio and Speech Processing
3 code implementations • 6 Mar 2017 • Jongpil Lee, Jiyoung Park, Keunhyoung Luke Kim, Juhan Nam
Recently, the end-to-end approach that learns hierarchical representations from raw data using deep convolutional neural networks has been successfully explored in the image, text, and speech domains.
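In the audio setting, "end-to-end" means stacking small 1-D convolution and pooling layers directly on the raw waveform, so each layer sees a longer time span than the one below it. A minimal numpy sketch of that hierarchy; the filter sizes, depths, and random weights are illustrative, not the paper's architecture:

```python
# Hedged sketch: stacked 1-D conv + max-pool layers on a raw waveform.
# Each layer halves the time resolution while channels grow.
import numpy as np

rng = np.random.default_rng(0)
wave = rng.normal(size=1024)              # raw mono audio samples

def conv_pool(x, filters, size=3):
    """One valid 1-D convolution with ReLU, then max-pool of width 2."""
    C_in, T = x.shape
    C_out = filters.shape[0]
    T_out = T - size + 1
    out = np.zeros((C_out, T_out))
    for c in range(C_out):
        for t in range(T_out):
            out[c, t] = max(0.0, np.sum(filters[c] * x[:, t:t + size]))
    pooled_len = (T_out // 2) * 2
    return out[:, :pooled_len].reshape(C_out, -1, 2).max(axis=2)

x = wave[None, :]                         # (channels=1, time=1024)
for C_out in [4, 8, 16]:                  # deepen channels, shrink time
    f = rng.normal(scale=0.1, size=(C_out, x.shape[0], 3))
    x = conv_pool(x, f)
print(x.shape)                            # (16, 126)
```

Each pass through `conv_pool` roughly halves the temporal length while adding channels, which is the hierarchical-representation structure the sentence above refers to.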