1 code implementation • 6 Nov 2023 • Minz Won, Yun-Ning Hung, Duc Le
This paper investigates foundation models tailored for music informatics, a domain currently challenged by the scarcity of labeled data and generalization issues.
no code implementations • 2 Oct 2023 • Yun-Ning Hung, Ju-Chiang Wang, Minz Won, Duc Le
To our knowledge, this is the first attempt to study the effects of scaling up both model and training data for a variety of MIR tasks.
no code implementations • 19 Jun 2023 • Wei-Tsung Lu, Ju-Chiang Wang, Yun-Ning Hung
Multitrack music transcription aims to transcribe a music audio input into the musical notes of multiple instruments simultaneously.
no code implementations • 1 Feb 2023 • Kin Wai Cheuk, Keunwoo Choi, Qiuqiang Kong, Bochen Li, Minz Won, Ju-Chiang Wang, Yun-Ning Hung, Dorien Herremans
Jointist consists of an instrument recognition module that conditions the other two modules: a transcription module that outputs instrument-specific piano rolls, and a source separation module that utilizes instrument information and transcription results.
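The conditioning pattern described above can be sketched as follows. This is purely illustrative: the module internals are random placeholders with assumed shapes, not the paper's architecture — it only shows how one module's output can condition the other two.

```python
import numpy as np

rng = np.random.default_rng(2)
N_INST, N_PITCH, N_FRAMES = 3, 88, 128          # assumed toy dimensions

audio = rng.normal(size=(N_FRAMES,))            # stand-in audio features

def recognize_instruments(a):
    # placeholder: a soft presence score per instrument
    return 1.0 / (1.0 + np.exp(-rng.normal(size=(N_INST,))))

def transcribe(a, cond):
    # placeholder piano rolls, one per instrument, gated by the condition
    return rng.random(size=(N_INST, N_PITCH)) * cond[:, None]

def separate(a, cond, rolls):
    # placeholder per-instrument sources, shaped by condition and transcription
    return np.outer(cond * rolls.mean(axis=1), a)

cond = recognize_instruments(audio)             # conditions the other two modules
rolls = transcribe(audio, cond)                 # instrument-specific piano rolls
sources = separate(audio, cond, rolls)          # uses both condition and rolls
```

The key point is the dataflow, not the placeholder math: the recognition output feeds both downstream modules, and the separation module additionally consumes the transcription result.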
1 code implementation • 2 Nov 2022 • Yun-Ning Hung, Chao-Han Huck Yang, Pin-Yu Chen, Alexander Lerch
In this work, we introduce a novel method for leveraging pre-trained models for low-resource (music) classification based on the concept of Neural Model Reprogramming (NMR).
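A minimal sketch of one reprogramming variant (input reprogramming), under the assumption that the pre-trained model is a frozen linear classifier: a trainable perturbation is added to every input, the frozen model is reused unchanged, and target labels are mapped onto a subset of the source classes. The shapes, label mapping, and toy task are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
D, C = 8, 3                        # feature dim, number of pre-training classes
W = rng.normal(size=(D, C))        # "pre-trained" weights: frozen, never updated

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Toy low-resource target task: 2 classes, mapped onto source classes 0 and 1.
X = rng.normal(size=(32, D))
v = W[:, 0] - W[:, 1]
y = (X @ v <= 1.0).astype(int)     # labels the frozen model cannot fit as-is

delta = np.zeros(D)                # trainable input perturbation: the ONLY parameter

def accuracy(d):
    return float((((X + d) @ W)[:, :2].argmax(axis=1) == y).mean())

acc_before = accuracy(delta)
lr = 0.1
for _ in range(500):
    p = softmax(((X + delta) @ W)[:, :2])               # frozen model + label map
    g = ((p - np.eye(2)[y]) @ W[:, :2].T).mean(axis=0)  # grad of CE w.r.t. delta
    delta -= lr * g
acc_after = accuracy(delta)
```

Note that only `delta` is updated; the pre-trained weights `W` stay fixed throughout, which is what makes the approach attractive in low-resource settings.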
no code implementations • 10 Jun 2022 • Yun-Ning Hung, Alexander Lerch
The workload is kept low during inference as the pre-trained features are only necessary for training.
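One way to realize this training-time-only use of pre-trained features is a feature-matching (distillation-style) loss: the large model's features serve as a regression target for a small model during training, and inference runs the small model alone. The sketch below assumes a linear student and random stand-in teacher features; everything here is illustrative, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(4, 16))                 # batch of inputs
teacher_feat = rng.normal(size=(4, 32))      # stand-in for pre-trained features

W_s = rng.normal(size=(16, 32)) * 0.1        # small student model (one linear map)

def mse():
    return float(((X @ W_s - teacher_feat) ** 2).mean())

mse_before = mse()
lr = 0.05
for _ in range(500):
    diff = X @ W_s - teacher_feat            # feature-matching residual
    W_s -= lr * (X.T @ diff) / len(X)        # gradient step on the matching loss

mse_after = mse()
# Inference: only `X @ W_s` runs; the pre-trained model is no longer needed.
```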
no code implementations • 29 May 2022 • Ju-Chiang Wang, Yun-Ning Hung, Jordan B. L. Smith
Conventional music structure analysis algorithms aim to divide a song into segments and to group them with abstract labels (e.g., 'A', 'B', and 'C').
no code implementations • 17 Mar 2022 • Yun-Ning Hung, Alexander Lerch
The integration of additional side information to improve music source separation has been investigated numerous times, e.g., by adding features to the input or by adding learning targets in a multi-task learning scenario.
1 code implementation • 2 Nov 2021 • Yun-Ning Hung, Karn N. Watcharasupat, Chih-Wei Wu, Iroro Orife, Kelian Li, Pavan Seshadri, Junyoung Lee
We propose a dataset, AVASpeech-SMAD, to assist speech and music activity detection research.
no code implementations • 22 Oct 2020 • Yun-Ning Hung, Gordon Wichern, Jonathan Le Roux
Most music source separation systems require large collections of isolated sources for training, which can be difficult to obtain.
no code implementations • 3 Aug 2020 • Yun-Ning Hung, Alexander Lerch
Music source separation is a core task in music information retrieval which has seen dramatic improvement in recent years.
1 code implementation • 1 Aug 2020 • Jiawen Huang, Yun-Ning Hung, Ashis Pati, Siddharth Kumar Gururani, Alexander Lerch
The assessment of music performances in most cases takes into account the underlying musical score being performed.
1 code implementation • 30 May 2019 • Yun-Ning Hung, I-Tung Chiang, Yi-An Chen, Yi-Hsuan Yang
We investigate disentanglement techniques such as adversarial training to separate latent factors related to the musical content (pitch) of different parts of the piece from those related to the instrumentation (timbre) of each part per short-time segment.
2 code implementations • 30 Oct 2017 • Lang-Chi Yu, Yi-Hsuan Yang, Yun-Ning Hung, Yi-An Chen
A model for hit song prediction can be used in the pop music industry to identify emerging trends and potential artists or songs before they are marketed to the public.